<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.csi.cuny.edu/cunyhpc/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=James</id>
	<title>HPCC Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.csi.cuny.edu/cunyhpc/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=James"/>
	<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php/Special:Contributions/James"/>
	<updated>2026-05-16T15:40:03Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.38.4</generator>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Applications&amp;diff=170</id>
		<title>Applications</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Applications&amp;diff=170"/>
		<updated>2022-11-18T17:55:24Z</updated>

		<summary type="html">&lt;p&gt;James: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div class=&amp;quot;noautonum&amp;quot;&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Application&lt;br /&gt;
!Installed Version&lt;br /&gt;
!Current Version&lt;br /&gt;
!Dependencies&lt;br /&gt;
|-&lt;br /&gt;
|ABINIT&lt;br /&gt;
|8.2.2&lt;br /&gt;
|9.6.2&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ASE&lt;br /&gt;
|3.18.0&lt;br /&gt;
|Git commit f785806&lt;br /&gt;
|Python 3.6+, NumPy, SciPy&lt;br /&gt;
Optional: Matplotlib, tkinter, Flask&lt;br /&gt;
|-&lt;br /&gt;
|G-PhoCS&lt;br /&gt;
|1.3&lt;br /&gt;
|1.3&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|GMP&lt;br /&gt;
|6.1.2&lt;br /&gt;
|6.1.2&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|GPAW&lt;br /&gt;
|19.8.1&lt;br /&gt;
|22.8&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|Gerris&lt;br /&gt;
|20131206&lt;br /&gt;
|20131206&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|HDF5&lt;br /&gt;
|1.8.17/1.10.1&lt;br /&gt;
|1.13.3&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|LAME&lt;br /&gt;
|3.100&lt;br /&gt;
|3.100&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|XML-Parser&lt;br /&gt;
|2.44_01&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|abyss&lt;br /&gt;
|1.3.7 / 1.5.7&lt;br /&gt;
|2.3.5&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|adcirc&lt;br /&gt;
|50_99_07&lt;br /&gt;
|167f586&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|adda&lt;br /&gt;
|1.2.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|anvio&lt;br /&gt;
|2.0.2&lt;br /&gt;
|7.1&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|armadillo&lt;br /&gt;
|9.2.7 / 9.200.7&lt;br /&gt;
|11.4.x&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|arpack&lt;br /&gt;
|3.1.5&lt;br /&gt;
|Now maintained at arpack-ng&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|augustus&lt;br /&gt;
|3.2.2&lt;br /&gt;
|3.5.0&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|autodock&lt;br /&gt;
|4.2.6&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|autodock_vina&lt;br /&gt;
|1.1.2&lt;br /&gt;
|1.2.0&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bamm&lt;br /&gt;
|2.3.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bamova&lt;br /&gt;
|1.02&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bamtools&lt;br /&gt;
|2.30 / 2.5.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|basilisk&lt;br /&gt;
|v2019&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bayescan&lt;br /&gt;
|2.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|beast&lt;br /&gt;
|1.8.4 / 2.4.6&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|beast2&lt;br /&gt;
|2.6.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bedops&lt;br /&gt;
|2.4.40&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bedtools&lt;br /&gt;
|2.30.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bigwig&lt;br /&gt;
|011921&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|biobwa&lt;br /&gt;
|0.7.17&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bioperl&lt;br /&gt;
|1.6.923&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|blast&lt;br /&gt;
|2.3.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bowtie2&lt;br /&gt;
|2.2.6&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bpp&lt;br /&gt;
|4.4.0 / 4.4.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cblas&lt;br /&gt;
|1.20.11&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cmaq&lt;br /&gt;
|5.3.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cmdstan&lt;br /&gt;
|2.21.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cp2k&lt;br /&gt;
|2.5.1 / 3.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cryoSPARC&lt;br /&gt;
|2.11&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|diamond&lt;br /&gt;
|0.7.9&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|doxygen&lt;br /&gt;
|2014&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|dualSP&lt;br /&gt;
|4.2 / 4.3_beta&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|eautils&lt;br /&gt;
|02072017&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|eclipse_ptp&lt;br /&gt;
|8.1.2 / 9.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|eigen&lt;br /&gt;
|3.2.8&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|emacs&lt;br /&gt;
|25.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|exabayes&lt;br /&gt;
|1.5&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|examl&lt;br /&gt;
|3.0.17&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|fdppdiv&lt;br /&gt;
|20140728&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|fds_smv&lt;br /&gt;
|6.1.11&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ferret&lt;br /&gt;
|6.96&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|freetype&lt;br /&gt;
|2.5.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|fsplit&lt;br /&gt;
|092214&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ga&lt;br /&gt;
|5.3&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|gamess-us&lt;br /&gt;
|4.14.14&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|gamma&lt;br /&gt;
|20111212&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|gap&lt;br /&gt;
|4.6.5 / 4.7.5&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|gatk&lt;br /&gt;
|3.6&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|gdc&lt;br /&gt;
|1.0.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|gerris&lt;br /&gt;
|09.30.16_EPYC / 12.06.13_BM / 12.06.13_EPYC / 12.06.13_PNZ&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ghc&lt;br /&gt;
|7.8.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|glog&lt;br /&gt;
|0.3.3&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|gnuplot&lt;br /&gt;
|4.6.3&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|gpaw&lt;br /&gt;
|1.2.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|gperf&lt;br /&gt;
|3.0.4&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|gpu-blast&lt;br /&gt;
|2.2.28&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|gromacs&lt;br /&gt;
|5.1.4 / 5.1.5 / 2020.1_mpi&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|help2man&lt;br /&gt;
|1.47.3&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|hoomd&lt;br /&gt;
|1.3.3&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|hpl&lt;br /&gt;
|2.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|humann&lt;br /&gt;
|0.7.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ima2p&lt;br /&gt;
|071717&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ipyrad&lt;br /&gt;
|0.7.13&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|itasser&lt;br /&gt;
|4.2 / 5.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|kokkos&lt;br /&gt;
|2.9.00&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|lammps&lt;br /&gt;
|03.03.20&lt;br /&gt;
|06.02.22&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|lmdb&lt;br /&gt;
|20160810&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ls-dyna&lt;br /&gt;
|6.0.0 / 7.1.2 / 8.0.0 / 8.1.0 / 9.1.0 / 10.0.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|maker&lt;br /&gt;
|2.31.8&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mathematica&lt;br /&gt;
|10.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|matlab&lt;br /&gt;
|R2022a&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|meme&lt;br /&gt;
|4.11.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mercurial&lt;br /&gt;
|2.8&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|metaphlan2&lt;br /&gt;
|2.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|migrate&lt;br /&gt;
|4.2.14&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|minpath&lt;br /&gt;
|1.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mpest&lt;br /&gt;
|1.4&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mpfr&lt;br /&gt;
|3.1.2 / 3.1.4&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mrbayes&lt;br /&gt;
|3.2.7a&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|msbayes&lt;br /&gt;
|20140305&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mumps&lt;br /&gt;
|5.1.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|muscle&lt;br /&gt;
|3.8.31&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mutect&lt;br /&gt;
|1.1.7&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|namd&lt;br /&gt;
|2.9 / 2.12 / 2.13 / 2.14&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|nciplot&lt;br /&gt;
|4.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ncview&lt;br /&gt;
|2.1.7&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|nek5000&lt;br /&gt;
|19.0_epyc&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|netCDF&lt;br /&gt;
|4.2 / 4.3.2 / 4.4.1 / 4.4.3 / 4.5.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|nmrpipe&lt;br /&gt;
|20170909&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|nrlmol&lt;br /&gt;
|10.0.4&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|nvtop&lt;br /&gt;
|1.0.0_git&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|nwchem&lt;br /&gt;
|6.6 / 6.8.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|octopus&lt;br /&gt;
|4.1.1 / 4.1.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|openblas&lt;br /&gt;
|2.8&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|opencv&lt;br /&gt;
|3.1.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|openeye&lt;br /&gt;
|latest&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|openfoam&lt;br /&gt;
|2.3.0 / 5.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|openmm&lt;br /&gt;
|5.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|opensees&lt;br /&gt;
|5921 / 5955&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|openshmem&lt;br /&gt;
|1.0h&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|openwatcom&lt;br /&gt;
|1.9&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|orca&lt;br /&gt;
|5.0.3&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|pargap&lt;br /&gt;
|1.3.5&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|perl&lt;br /&gt;
|5.24.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|pfft&lt;br /&gt;
|040415&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|phase&lt;br /&gt;
|2.1.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|plasma&lt;br /&gt;
|2.5 / 2.8&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|plumed&lt;br /&gt;
|2.4&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|pyYAML&lt;br /&gt;
|3.10&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|pyrad&lt;br /&gt;
|3.0.66&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|qespresso&lt;br /&gt;
|6.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|qiime&lt;br /&gt;
|1.9.1_FULL_P2.7 / 1.9.1_FULL_P3.6 / 1.9.1 / 2.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|qiime2&lt;br /&gt;
|2020.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|quantumEspresso&lt;br /&gt;
|6.6_cpu&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|qvina&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|raxml&lt;br /&gt;
|0.1.0_ng / 8.2.4&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|relion&lt;br /&gt;
|1.3_gcc / 3.0 / 3.6_apollo_gnu / 3.6_apollo_intel_i7&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|repast&lt;br /&gt;
|2.1.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|rtfbsDB&lt;br /&gt;
|012021&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|sage&lt;br /&gt;
|6.2 / 7.5.1 / 8.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|samtools&lt;br /&gt;
|1.3.1 / 1.14&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|scons&lt;br /&gt;
|2.3.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|singularity&lt;br /&gt;
|2.4.6 / 3.1.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|snap&lt;br /&gt;
|11.29.13 / 1.18_beta&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|sparskit&lt;br /&gt;
|030515&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|splatche&lt;br /&gt;
|2.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|sprint&lt;br /&gt;
|1.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|sqlite&lt;br /&gt;
|3.12.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|sra&lt;br /&gt;
|2.5.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|stacks&lt;br /&gt;
|1.23&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|stan&lt;br /&gt;
|2.22.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|stata&lt;br /&gt;
|12&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|stem&lt;br /&gt;
|2.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|structurama&lt;br /&gt;
|2.2.14&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|swig&lt;br /&gt;
|4.0.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|tau&lt;br /&gt;
|2.30&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|tcl&lt;br /&gt;
|8.6.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|tensorflow&lt;br /&gt;
|r0.11&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|texinfo&lt;br /&gt;
|6.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|tinker&lt;br /&gt;
|6.2 / 8.8.3&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|tmux&lt;br /&gt;
|2.5&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|trinity&lt;br /&gt;
|2.1.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|usearch&lt;br /&gt;
|5.2.236 / 6.0.307 / 6.1.544 / 7.0.1090 / 8.0.1517 / 8.0.1623 / 9.0.2124 &lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|util-linux&lt;br /&gt;
|2.29&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|valgrind&lt;br /&gt;
|3.10.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|vmd&lt;br /&gt;
|1.9.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|vsearch&lt;br /&gt;
|1.0.5 / 1.10.2 / 1.11.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|wrf&lt;br /&gt;
|3.5.0 / 4.3.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|xmgrace&lt;br /&gt;
|5.1.23 / 5.1.25&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|xplor&lt;br /&gt;
|2.38 / 2.39&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
__NOTOC__&amp;lt;/div&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;This is an index of available applications, organized by academic area as well as alphabetically.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
For information about using modules to run your applications go to [[Using Modules To Run Your Applications]].&lt;br /&gt;
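As a quick orientation before that page, the workflow below sketches a typical module session. The application names shown are illustrative only; run &lt;code&gt;module avail&lt;/code&gt; to see what is actually installed on your cluster.

```shell
# Illustrative module workflow (names/versions are placeholders --
# check `module avail` for what your cluster actually provides).
module avail                 # list every module the cluster provides
module avail lammps          # narrow the listing to one application
module load lammps           # load the default version of that module
module list                  # confirm which modules are currently loaded
module unload lammps         # remove the module when finished
```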
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==Computational Physics and Computational Chemistry== &lt;br /&gt;
Applications in this section draw on classical mechanics, quantum mechanics, and thermodynamics, and are used in simulation studies of the fundamental properties of atoms, molecules, and chemical reactions.  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;AMBER (Assisted Model Building with Energy Refinement)&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Amber is the collective name for a suite of programs for classical bio-molecular simulations. &lt;br /&gt;
The name &amp;quot;Amber&amp;quot; also denotes the family of potentials (force fields) used with Amber &lt;br /&gt;
software. Here we discuss only the simulation packages, not the force fields or the free tools&lt;br /&gt;
available via the AmberTools package. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/amber&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;AUTODOCK&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
AutoDock is a suite of automated docking tools.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt; It is designed to predict how small molecules, such as substrates or drug candidates, bind to a receptor of known 3D structure.  AutoDock actually consists of two main programs: &#039;&#039;autodock&#039;&#039; itself performs the docking of the ligand to a set of grids describing the target protein; &#039;&#039;autogrid&#039;&#039; pre-calculates these grids. More information about the software may be found at the autodock web-page [http://autodock.scripps.edu/]. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/autodock&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;CP2K&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CP2K is a program to perform atomistic and molecular simulations of solid state, liquid, molecular, and biological&lt;br /&gt;
systems.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It provides a general framework for different methods, e.g., density functional theory (DFT) using&lt;br /&gt;
a mixed Gaussian and plane-wave approach (GPW), and classical pair and many-body potentials. CP2K provides&lt;br /&gt;
state-of-the-art methods for efficient and accurate atomistic simulations. More information about our installation &lt;br /&gt;
can be found here [[CP2K]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;DL_POLY&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
DL_POLY is a general purpose molecular dynamics simulation package developed at Daresbury Laboratory by W. Smith, T.R. Forester and I.T. Todorov. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Both serial and parallel versions are available. The original package was developed by the Molecular Simulation Group (now part of the Computational Chemistry Group, MSG) at Daresbury Laboratory under the auspices of the Engineering and Physical Sciences Research Council (EPSRC) for the EPSRC&#039;s Collaborative Computational Project for the Computer Simulation of Condensed Phases (CCP5). Later developments were also supported by the Natural Environment Research Council through the eMinerals project. The package is the property of the Central Laboratory of the Research Councils, UK. More information about our installation and use can be found here [[DL_POLY]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GAMESS-US&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GAMESS is a program for ab initio molecular quantum chemistry.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Briefly, GAMESS can compute SCF wavefunctions including RHF, ROHF, UHF, GVB, and MCSCF. Correlation corrections to these SCF wavefunctions include configuration interaction, second-order perturbation theory, and coupled-cluster approaches, as well as the density functional theory approximation. Excited states can be computed by CI, EOM, or TD-DFT procedures. Nuclear gradients are available for automatic geometry optimization, transition-state searches, or reaction-path following. Computation of the energy Hessian permits prediction of vibrational frequencies, with IR or Raman intensities. Solvent effects may be modeled by discrete Effective Fragment Potentials or by continuum models such as the Polarizable Continuum Model. Numerous relativistic computations are available, including infinite-order two-component scalar corrections, with various spin-orbit coupling options. The Fragment Molecular Orbital method permits many of these sophisticated treatments to be applied to very large systems by dividing the computation into small fragments. Nuclear wavefunctions can also be computed in VSCF, or with explicit treatment of nuclear orbitals by the NEO code. More information, including code, can be found here [[GAMESS-US]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Gaussian09&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Gaussian09 is third-party, commercially licensed software from Gaussian, Inc. It is a set of programs for calculating electronic structure.  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Gaussian09 is available for general use only on ANDY. The Gaussian User Guide can be found at [http://www.gaussian.com]. More information about our installation can be found here [[GAUSSIAN09]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GPAW&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It uses real-space uniform grids and multigrid methods, atom-centered basis-functions or&lt;br /&gt;
plane-waves. GPAW calculations are controlled through scripts written in the programming language &lt;br /&gt;
Python. GPAW relies on the Atomic Simulation Environment (ASE), which is a Python package&lt;br /&gt;
that helps to describe atoms. The ASE package also handles molecular dynamics, analysis, &lt;br /&gt;
visualization, geometry optimization and more. More information about our installation can &lt;br /&gt;
be found here [[GPAW]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
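Since GPAW calculations are driven by Python scripts, a job typically loads the module and launches the script under MPI. The fragment below is a hedged sketch: the module name, process count, and the script name &lt;code&gt;relax.py&lt;/code&gt; are placeholders for illustration.

```shell
# Hypothetical batch fragment for a parallel GPAW run.
module load gpaw
# GPAW ships a `gpaw python` launcher that sets up MPI before
# executing the user's ASE/GPAW script; `relax.py` is a placeholder.
mpirun -np 8 gpaw python relax.py
```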
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GROMACS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GROMACS (Groningen Machine for Chemical Simulations)&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GROMACS is a full-featured suite of free software, licensed under the GNU&lt;br /&gt;
General Public License to perform molecular dynamics simulations -- in other words, to simulate the behavior of molecular&lt;br /&gt;
systems with hundreds to millions of particles using Newton&#039;s equations of motion.  It is primarily used for research on&lt;br /&gt;
proteins, lipids, and polymers, but can be applied to a wide variety of chemical and biological research questions.&lt;br /&gt;
&lt;br /&gt;
Details and submission scripts for production runs can be found at:&lt;br /&gt;
http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/gromacs&lt;br /&gt;
Please note that preparing a molecular system for simulation via the GROMACS tools cannot be done on the login node. Instead, users must either use their own workstations or the interactive or development queues.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
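The restriction above means that system-preparation tools should be run from an interactive session on a compute node. The sketch below assumes a PBS-style scheduler; the queue name, resource request, and input file names are all illustrative, not the cluster's actual configuration.

```shell
# Illustrative only: request an interactive session (PBS-style
# syntax) so that preparation tools run on a compute node rather
# than the login node. Queue and resource names vary per cluster.
qsub -I -q interactive -l select=1:ncpus=4
module load gromacs
# Placeholder file names -- typical GROMACS preparation steps:
gmx pdb2gmx -f protein.pdb -o processed.gro            # build topology
gmx grompp -f em.mdp -c processed.gro -p topol.top -o em.tpr
```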
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;HONDO PLUS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Hondo Plus is a versatile electronic structure code that combines work from&lt;br /&gt;
the original Hondo application developed by Harry King in the lab of Michel Dupuis&lt;br /&gt;
and John Rys, and that of numerous subsequent contributors. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It is currently distributed from the research lab of Dr. Donald Truhlar at the University &lt;br /&gt;
of Minnesota.  Part of the advantage of Hondo Plus is the availability of source&lt;br /&gt;
implementations of a wide variety of model chemistries developed over its lifetime&lt;br /&gt;
that researchers can adapt to their particular needs.  The license to use the code requires&lt;br /&gt;
a literature citation which is documented in the Hondo Plus 5.1 manual found&lt;br /&gt;
at:&lt;br /&gt;
&lt;br /&gt;
http://comp.chem.umn.edu/hondoplus/HONDOPLUS_Manual_v5.1.2007.2.17.pdf &lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[HONDO PLUS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;HOOMD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
HOOMD performs general-purpose particle dynamics simulations, taking advantage of NVIDIA GPUs to attain a level of performance&lt;br /&gt;
equivalent to many processor cores on a fast cluster.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Unlike some other applications in the particle and molecular dynamics space, HOOMD developers have worked to implement &lt;br /&gt;
all of the code&#039;s computationally intensive kernels on the GPU, although currently only single node, single-GPU or &lt;br /&gt;
OpenMP-GPU runs are possible. There is no MPI-GPU or distributed parallel GPU version available at this time.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;LAMMPS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
LAMMPS is a classical molecular dynamics code that models an ensemble of particles in a liquid, solid, or gaseous state.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It can model atomic, polymeric, biological, metallic, granular, and coarse-grained systems using a variety of force fields and boundary conditions.  &lt;br /&gt;
LAMMPS runs efficiently on single-processor desktop or laptop machines, but is also designed for parallel computers, including clusters with and without GPUs. &lt;br /&gt;
It will run on any parallel machine that compiles C++ and supports the MPI message-passing library. This includes distributed- or shared-memory parallel &lt;br /&gt;
machines and Beowulf-style clusters. LAMMPS can model systems with only a few particles up to millions or billions. LAMMPS is a freely-available open-source &lt;br /&gt;
code, distributed under the terms of the GNU General Public License, which means you can use or modify the code however you wish.  LAMMPS is designed to be easy to &lt;br /&gt;
modify or extend with new capabilities, such as new force fields, atom types, boundary conditions, or diagnostics. A complete description of LAMMPS can be found &lt;br /&gt;
in its on-line manual here [http://lammps.sandia.gov/doc/Manual.html] or from the full PDF manual here [http://lammps.sandia.gov/doc/Manual.pdf]. Information&lt;br /&gt;
about our installation can be found here [[LAMMPS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
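Since LAMMPS runs on any machine with MPI, a parallel launch usually reduces to the two lines below. This is a sketch only: the module name and input file are placeholders, and the binary name varies by build (&lt;code&gt;lmp&lt;/code&gt;, &lt;code&gt;lmp_mpi&lt;/code&gt;, etc.), so check your installation.

```shell
# Illustrative parallel LAMMPS launch; names are placeholders.
module load lammps
# Run the input script `in.melt` across 16 MPI ranks.
mpirun -np 16 lmp_mpi -in in.melt
```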
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;NAMD&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NAMD is a parallel molecular dynamics code designed for high-performance simulation&lt;br /&gt;
of large biomolecular systems. [http://www.ks.uiuc.edu/Research/namd].&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The main server for molecular dynamics calculations is PENZIAS, which supports both GPU and non-GPU versions of NAMD.&lt;br /&gt;
However, MPI-only (no GPU support) parallel versions of NAMD are also installed on SALK and ANDY. &lt;br /&gt;
More information about our installation can be found here [[NAMD]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
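For the MPI-only builds mentioned above, a launch might look like the fragment below. This is a hedged sketch: the module name, rank count, and configuration file are placeholders, and the binary name (&lt;code&gt;namd2&lt;/code&gt; here) depends on the installed version.

```shell
# Illustrative MPI-only NAMD launch; all names are placeholders.
module load namd
# Run the simulation described in `config.namd` on 16 MPI ranks,
# capturing NAMD's log output to a file.
mpirun -np 16 namd2 config.namd > output.log
```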
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;NWChem&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NWChem is an ab initio computational chemistry software package which also includes molecular dynamics (MM, MD) and coupled, quantum mechanical and molecular dynamics functionality (QM-MD).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
NWChem has been developed by the Molecular Sciences Software group at the Department of Energy&#039;s EMSL. The software is available on PENZIAS and ANDY.&lt;br /&gt;
More information about our installation can be found here [[NWChem]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Octopus&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Octopus is a pseudopotential real-space package aimed at the simulation of the electron-ion dynamics of one-, two-, and three-dimensional finite systems subject to time-dependent electromagnetic fields.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The program is based on time-dependent density-functional theory (TDDFT) in the Kohn-Sham scheme. All quantities are expanded in a regular mesh in real space, and the simulations are performed in real time. The program has been successfully used to calculate linear and non-linear absorption spectra, harmonic spectra, laser induced fragmentation, etc. of a variety of systems.&lt;br /&gt;
More information about our installation can be found here [[OCTOPUS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;OpenMM&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenMM is both a library and a stand-alone application which provides tools for modern molecular modeling simulation. As a library it can be hooked into any code, allowing that code to do molecular modeling with minimal extra coding.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Moreover, OpenMM places a strong emphasis on hardware acceleration via GPUs, thus providing not just a consistent API but also much greater performance than nearly any other code available. OpenMM was developed as part of the Physics-Based Simulation project with project leader Prof. Pande.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;ORCA&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The program ORCA is an electronic structure package capable of carrying out geometry optimizations and of predicting a large number of spectroscopic parameters at different levels of theory.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Besides the use of Hartree-Fock theory, density functional theory (DFT) and semiempirical methods, high-level ab initio quantum chemical methods, based on the configuration interaction and coupled cluster methods, are included in ORCA to an increasing degree.&lt;br /&gt;
More information about our installation can be found here [[ORCA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;VMD&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
VMD is a molecular visualization program for displaying, animating, and analyzing large biomolecular systems using 3-D graphics and built-in scripting.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It was developed by The Theoretical and Computational Biophysics Group at the University of Illinois. It is documented on the [http://www.ks.uiuc.edu/Research/vmd/ TCB&#039;s homepage].&lt;br /&gt;
&lt;br /&gt;
VMD is installed on KARLE. To use it from the command line, log in to KARLE as usual and start VMD by typing &amp;quot;vmd&amp;quot; followed by return, or alternatively use the full path: &lt;br /&gt;
&amp;quot;/share/apps/vmd/default/bin/vmd&amp;quot;&lt;br /&gt;
&lt;br /&gt;
In order to use VMD in GUI mode, log in to KARLE with the -X option (see [http://wiki.csi.cuny.edu/cunyhpc/index.php/Main_Page#X11_Forwarding_or_Tunneling this article] &lt;br /&gt;
for details) and start VMD as described above.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Computational Biology== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;ANVIO&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Anvio is an analysis and visualization platform for &#039;omics data. Anvio allows various types of workflows to be &lt;br /&gt;
established. [[ANVIO]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BAMOVA&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Bamova is a package used to perform genetic analysis of a wide range of organisms on the basis of &lt;br /&gt;
next-generation sequence data. The software implements Bayesian Analysis of Molecular Variance and &lt;br /&gt;
different likelihood models for three different types of molecular data &lt;br /&gt;
(including two models for high-throughput sequence data). For more detail on BAMOVA please visit the BAMOVA web site [http://www.uwyo.edu/buerkle/software/bamova] and the manual &lt;br /&gt;
here [http://www.uwyo.edu/buerkle/software/bamova/bamova_manual_1.0.pdf]. Further information can also be found here [[BAMOVA]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BAYESCAN&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BAYESCAN is a population genomics software package.  It identifies outlier loci and is applicable &lt;br /&gt;
to both dominant and codominant data. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;BayeScan aims at identifying candidate loci under natural selection from &lt;br /&gt;
genetic data, using differences in allele frequencies between populations.  BayeScan is &lt;br /&gt;
based on the multinomial-Dirichlet model.  One of the scenarios covered consists of an&lt;br /&gt;
island model in which subpopulation allele frequencies are correlated through a common &lt;br /&gt;
migrant gene pool from which they differ in varying degrees.  The difference in allele frequency &lt;br /&gt;
between this common gene pool and each subpopulation is measured by a subpopulation-&lt;br /&gt;
specific  FST coefficient.  Therefore, this formulation can consider realistic ecological scenarios &lt;br /&gt;
where the effective size and the immigration rate may differ among subpopulations.&lt;br /&gt;
&lt;br /&gt;
More detailed information on Bayescan can be found at the web site here [http://cmpg.unibe.ch/software/bayescan/index.html]&lt;br /&gt;
and in the manual here [http://cmpg.unibe.ch/software/bayescan/files/BayeScan2.1_manual.pdf]. More information about our &lt;br /&gt;
installation can be found here [[BAYESCAN]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BEST&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BEST is an application aimed at estimating gene trees and the species tree from multilocus sequence data.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The program uses information from multiple gene trees and performs a Bayesian analysis to estimate the &lt;br /&gt;
topology of the species tree, divergence times and population sizes.  &lt;br /&gt;
&lt;br /&gt;
It provides a new approach for estimating the mutation-rate-&lt;br /&gt;
based phylogenetic relationships among species.  Its method accounts for deep coalescence,&lt;br /&gt;
but not for other complicating issues such as horizontal transfer or gene duplication. The&lt;br /&gt;
program works in conjunction with the popular Bayesian phylogenetics package MrBayes&lt;br /&gt;
(Ronquist and Huelsenbeck, Bioinformatics, 2003).  BEST&#039;s parameters are defined using&lt;br /&gt;
the &#039;prset&#039; command from MrBayes.  Details on BEST&#039;s capabilities and options are available&lt;br /&gt;
at the BEST web site here [http://www.stat.osu.edu/~dkp/BEST/introduction]. More information&lt;br /&gt;
about our installation is available here [[BEST]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BEAST&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BEAST is a powerful and flexible evolutionary analysis package for molecular sequence variation. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The package implements a family of Markov chain Monte Carlo (MCMC) algorithms for Bayesian phylogenetic inference, divergence time dating, coalescent analysis, phylogeography and related molecular evolutionary analyses. It is a cross-platform Java program for Bayesian MCMC analysis of molecular sequences. It is entirely oriented towards rooted, time-measured phylogenies inferred using strict or relaxed molecular clock models. It can be used as a method of reconstructing phylogenies, but is also a framework for testing evolutionary hypotheses without conditioning on a single tree topology.  BEAST uses MCMC to average over tree space, so that each tree is weighted proportionally to its posterior probability. The distribution includes a simple-to-use user-interface program called &#039;BEAUti&#039; for setting up standard analyses and a suite of programs for analysing the results. For more detail on BEAST (and BEAUti) please visit the BEAST web site [http://beast.bio.ed.ac.uk/Main_Page]. More information about our installation can be found here [http://wiki.csi.cuny.edu/cunyhpc/index.php/Template:BEAST BEAST].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BOWTIE2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BOWTIE2 is an ultrafast and memory-efficient tool for aligning sequencing reads to long reference sequences. It is particularly good at aligning reads of about 50 up to 100s or 1,000s of characters, and particularly good at aligning to relatively long (e.g. mammalian) genomes.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
BOWTIE2 indexes the genome with an FM Index to keep its memory&lt;br /&gt;
footprint small: for the human genome, its memory footprint is typically around 3.2 GB. BOWTIE2 supports gapped,&lt;br /&gt;
local, and paired-end alignment modes. BOWTIE2 is part of a sequence alignment and analysis tool chain developed&lt;br /&gt;
at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics&lt;br /&gt;
and Computational Biology.  The other tools in this collection, CUFFLINKS, SAMTOOLS, and TOPHAT are also installed at&lt;br /&gt;
the CUNY HPC Center. Additional information can be found at the BOWTIE2 home page here [http://bowtie-bio.sourceforge.net/bowtie2/index.shtml].&lt;br /&gt;
Information about our installation can be found here [[BOWTIE2]].&lt;br /&gt;
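The FM-index idea mentioned above can be sketched in a few lines. This toy backward search over a Burrows-Wheeler transform is illustrative only, not BOWTIE2&#039;s implementation (a real index precomputes Occ() at sampled checkpoints instead of counting naively):&lt;br /&gt;

```python
# Toy FM-index backward search (illustrative only, not BOWTIE2's code).
def bwt_index(text):
    text += "$"                                    # unique terminator
    sa = sorted(range(len(text)), key=lambda i: text[i:])
    bwt = "".join(text[i - 1] for i in sa)         # char preceding each suffix
    chars = sorted(set(text))
    C, total = {}, 0
    for c in chars:                                # C[c]: #chars smaller than c
        C[c] = total
        total += text.count(c)
    return bwt, C

def count_occurrences(bwt, C, pattern):
    """Number of occurrences of pattern, via FM backward search."""
    lo, hi = 0, len(bwt)                           # current suffix-array interval
    for c in reversed(pattern):
        if c not in C:
            return 0
        lo = C[c] + bwt[:lo].count(c)              # Occ(c, lo)
        hi = C[c] + bwt[:hi].count(c)              # Occ(c, hi)
        if lo >= hi:
            return 0
    return hi - lo

bwt, C = bwt_index("GATTACAGATTACA")
print(count_occurrences(bwt, C, "ATTA"))  # 2
print(count_occurrences(bwt, C, "GGG"))   # 0
```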
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BPP2&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BPP2 uses a Bayesian modeling approach to generate the posterior probabilities of species assignments taking into account uncertainties due to unknown gene trees and the ancestral coalescent process. For tractability, it relies on a user-specified guide tree to avoid integrating over all possible species delimitations.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Additional information can be found at the download site here [http://abacus.gene.ucl.ac.uk/software.html]. More information about our installation can be found here [[BPP2]].&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BROWNIE&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
BROWNIE is a program for analyzing rates of continuous character evolution and looking for substantial rate differences in different parts of a tree using likelihood&lt;br /&gt;
ratio tests and Akaike Information Criterion (AIC) statistics. It now also implements many other methods for examining trait evolution and methods for doing species&lt;br /&gt;
delimitation. More information about our installation can be found here [[BROWNIE]].&lt;br /&gt;
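The AIC statistics mentioned above follow the general formula AIC = 2k - 2 ln(L); the sketch below shows the formula itself, not BROWNIE-specific code:&lt;br /&gt;

```python
# General AIC formula (not BROWNIE-specific code): AIC = 2k - 2*ln(L),
# where k is the number of free parameters and ln(L) the maximized
# log-likelihood.  Lower AIC is better; extra parameters are penalized.
def aic(log_likelihood, n_params):
    return 2 * n_params - 2 * log_likelihood

# A better-fitting model can still lose if the fit gain is too small:
print(aic(-100.0, 3))  # 206.0
print(aic(-98.5, 5))   # 207.0 -- higher AIC despite the higher likelihood
```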
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;CUFFLINKS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CUFFLINKS assembles transcripts, estimates their abundances, and tests for differential expression and regulation in&lt;br /&gt;
RNA-Seq samples.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It accepts aligned RNA-Seq reads and assembles the alignments into a parsimonious set of transcripts.&lt;br /&gt;
CUFFLINKS then estimates the relative abundances of these transcripts based on how many reads support each one, taking&lt;br /&gt;
into account biases in library preparation protocols.  CUFFLINKS is part of a sequence alignment and analysis tool chain developed&lt;br /&gt;
at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics&lt;br /&gt;
and Computational Biology.  The other tools in this collection, BOWTIE, SAMTOOLS, and TOPHAT are also installed at&lt;br /&gt;
the CUNY HPC Center.  Additional information can be found at the CUFFLINKS home page here [http://abacus.gene.ucl.ac.uk/software.html].&lt;br /&gt;
More information about our installation can be found here [[CUFFLINKS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GARLI&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GARLI is a program that performs phylogenetic inference using the maximum-likelihood criterion.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Several sequence types are supported, including nucleotide, amino acid and codon. Version 2.0 adds support for&lt;br /&gt;
partitioned models and morphology-like data types. It is usable on all operating systems, and is written and&lt;br /&gt;
maintained by Derrick Zwickl at the University of Texas at Austin.  Additional information can be found&lt;br /&gt;
on the GARLI Wiki here [https://www.nescent.org/wg_garli/Main_Page]. More information about our installation &lt;br /&gt;
can be found here [[GARLI]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MPFR&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The MPFR library is a C library for multiple-precision floating-point computations with correct rounding. MPFR has been continuously supported by &lt;br /&gt;
INRIA, and the current main authors come from the Caramel and AriC project-teams at Loria (Nancy, France) and LIP (Lyon, France) respectively; see &lt;br /&gt;
more on the credit page.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
MPFR is based on the GMP multiple-precision library. The main goal of MPFR is to provide a library for multiple-precision &lt;br /&gt;
floating-point computation which is both efficient and has a well-defined semantics. It copies the good ideas from the ANSI/IEEE-754 standard for &lt;br /&gt;
double-precision floating-point arithmetic (53-bit significand). The library is installed on PENZIAS.&lt;br /&gt;
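MPFR itself is a C library, but the core idea of computing at a chosen precision with an explicit, well-defined rounding mode can be loosely illustrated with Python&#039;s standard decimal module:&lt;br /&gt;

```python
# Loose stdlib analogue of the MPFR idea (MPFR itself is a C library):
# decimal computes with a user-chosen precision and an explicit,
# well-defined rounding mode.
from decimal import Decimal, getcontext, ROUND_HALF_EVEN

getcontext().prec = 50                    # 50 significant digits
getcontext().rounding = ROUND_HALF_EVEN   # IEEE-754-style round-to-nearest-even

third = Decimal(1) / Decimal(3)
print(third)       # "0." followed by fifty 3s, correctly rounded
print(third * 3)   # 0.99999...9 -- the rounding error is explicit, not hidden
```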
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MRBAYES&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MrBayes is a program for the Bayesian estimation of phylogeny.  Bayesian inference of&lt;br /&gt;
phylogeny is based upon a quantity called the posterior probability distribution of trees,&lt;br /&gt;
which is the probability of a tree conditioned on certain observations.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The conditioning is&lt;br /&gt;
accomplished using Bayes&#039;s theorem. The posterior probability distribution of trees is&lt;br /&gt;
impossible to calculate analytically; instead, MrBayes uses a simulation technique called&lt;br /&gt;
Markov chain Monte Carlo (or MCMC) to approximate the posterior probabilities of trees.&lt;br /&gt;
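The MCMC idea can be shown with a toy Metropolis sampler. This is illustrative only; MrBayes samples over tree space, which is vastly more complex than the one-dimensional target used here:&lt;br /&gt;

```python
# Toy Metropolis MCMC sampler (illustrative only, not MrBayes).  The target
# density is known only up to a constant: an unnormalized Normal(2, 1).
import math
import random

random.seed(0)

def unnorm_posterior(x):
    return math.exp(-0.5 * (x - 2.0) ** 2)   # normalizing constant unknown

samples, x = [], 0.0
for _ in range(50_000):
    proposal = x + random.uniform(-1.0, 1.0)             # symmetric proposal
    ratio = unnorm_posterior(proposal) / unnorm_posterior(x)
    if random.random() < ratio:                          # Metropolis rule
        x = proposal
    samples.append(x)

burned = samples[10_000:]                                # discard burn-in
print(sum(burned) / len(burned))                         # close to 2.0
```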
More information about our installation can be found here [[MRBAYES]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;msABC&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
msABC is a program for simulating various neutral evolutionary demographic scenarios&lt;br /&gt;
based on the software ms (Hudson 2002). msABC extends ms, calculating a multitude of&lt;br /&gt;
summary statistics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Therefore, msABC is suitable for performing the sampling step of an&lt;br /&gt;
Approximate Bayesian Computation analysis (ABC), under various neutral demographic&lt;br /&gt;
models. The main advantages of msABC are (i) the use of various prior distributions, such as&lt;br /&gt;
uniform, Gaussian, log-normal, and gamma; (ii) the implementation of a multitude of summary statistics&lt;br /&gt;
for one or more populations; (iii) an efficient implementation, which allows the analysis of&lt;br /&gt;
hundreds of loci and chromosomes even on a single computer; and (iv) extended flexibility, such&lt;br /&gt;
as the simulation of loci of variable size and the simulation of missing data.&lt;br /&gt;
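The sampling step of an ABC analysis can be sketched with a toy rejection sampler. Everything here is invented for illustration (a Gaussian stand-in simulator, not msABC or a real demographic model):&lt;br /&gt;

```python
# Toy ABC rejection sampler (illustrative only, not msABC).  Draw a parameter
# from the prior, simulate data, and keep the draw when a summary statistic of
# the simulated data falls within a tolerance of the observed one.
import random

random.seed(1)
observed_mean = 5.0                         # summary statistic of the "real" data

def simulate(theta, n=100):
    """Stand-in for a demographic simulator: n Gaussian draws around theta."""
    return [random.gauss(theta, 1.0) for _ in range(n)]

accepted = []
for _ in range(10_000):
    theta = random.uniform(0.0, 10.0)       # uniform prior on the parameter
    sim_mean = sum(simulate(theta)) / 100
    if abs(sim_mean - observed_mean) < 0.1: # tolerance epsilon
        accepted.append(theta)

posterior_mean = sum(accepted) / len(accepted)
print(len(accepted), posterior_mean)        # accepted draws; mean near 5.0
```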
More information about our installation can be found here [[msABC]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MSMS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MSMS is a tool to generate sequence samples under both neutral models and single locus selection models.&lt;br /&gt;
MSMS permits  the full range of demographic models provided by its relative MS (Hudson, 2002).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
In particular, it allows for multiple demes with arbitrary migration patterns, population growth and decay in each deme, and&lt;br /&gt;
for population splits and mergers. Selection (including dominance) can depend on the deme and also change&lt;br /&gt;
with time.&lt;br /&gt;
More information about our installation can be found here [[MSMS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;POPABC&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PopABC is a computer package for estimating the historical demographic parameters of closely related species/populations (e.g. population size, migration rate, mutation rate, recombination rate, splitting events) within an isolation-with-migration model.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The software performs coalescent simulation in the framework of approximate Bayesian computation (ABC, Beaumont et al, 2002). PopABC can also be used to perform Bayesian model choice to discriminate between different demographic scenarios. The program can be used either for research or for education and teaching purposes. Further details and a manual can be found at the POPABC website here [http://code.google.com/p/popabc].&lt;br /&gt;
More information about our installation can be found here [[POPABC]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;PHOENICS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PHOENICS is an integrated Computational Fluid Dynamics (CFD) package for the preparation, simulation, and visualization of&lt;br /&gt;
processes involving fluid flow, heat or mass transfer, chemical reaction, and/or combustion in engineering equipment, building&lt;br /&gt;
design, and the environment.  More detail is available at the CHAM website, here http://www.cham.co.uk. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Although we expect most users to pre- and post-process their jobs on office-local clients, the CUNY HPC Center has installed&lt;br /&gt;
the Unix version of the &#039;&#039;entire&#039;&#039; PHOENICS package on ANDY.   PHOENICS is installed in /share/apps/phoenics/default where all&lt;br /&gt;
the standard PHOENICS directories are located (d_allpro, d_earth, d_enviro, d_photo, d_priv1, d_satell, etc.).  Of particular interest&lt;br /&gt;
on ANDY is the MPI parallel version of the &#039;earth&#039; executable &#039;parexe&#039; which makes full use of the parallel processing power of the &lt;br /&gt;
ANDY cluster for larger individual jobs.  While the parallel scaling properties of PHOENICS jobs will vary depending on the job size,&lt;br /&gt;
processor type, and the cluster interconnect, larger workloads will generally scale and run efficiently on 8 to 32 processors,&lt;br /&gt;
while smaller problems will scale efficiently only up to about 4 processors.  More detail on parallel PHOENICS is available at&lt;br /&gt;
http://www.cham.co.uk/products/parallel.php.   Aside from the tightly coupled MPI parallelism of &#039;parexe&#039;, users can run multiple&lt;br /&gt;
instances of the non-parallel modules on ANDY (including the serial &#039;earexe&#039; module) when a parametric approach can be used&lt;br /&gt;
to solve their problems.&lt;br /&gt;
More information about our installation can be found here [[PHOENICS]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;PHRAP-PHRED&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PHRAP and PHRED are part of the DNA sequence analysis tool set that also includes the programs&lt;br /&gt;
CROSSMATCH and SWAT.  These tools are described in detail here [http://www.phrap.org/phredphrapconsed.html],&lt;br /&gt;
but a brief description of both, extracted from their manuals, follows.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
PHRED and PHRAP (along with CONSED) can be used for both small sequence assemblies and larger shotgun analyses. This makes the&lt;br /&gt;
tools a perhaps under-utilized set for smaller non-genomic groups.  Some variables may need to be adjusted,&lt;br /&gt;
particularly in CONSED, but researchers that have multiple sequences from a small locus can use the &lt;br /&gt;
suite, starting from their chromatogram files.  &lt;br /&gt;
More information about our installation can be found here [[PHRAP-PHRED]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;PyRAD&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Reduced-representation genomic sequence data (e.g., RADseq, GBS, ddRAD) are commonly used to study population-level research questions and consequently most software packages for assembling or analyzing such data are designed for sequences with little variation across samples.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Phylogenetic analyses typically include species with deeper divergence times (more variable loci across samples) and thus a different approach to clustering and identifying orthologs will perform better. pyRAD is intended for use with any type of restriction-site associated DNA. It currently supports RAD, ddRAD, PE-ddRAD, GBS, PE-GBS, EzRAD, PE-EzRAD, 2B-RAD, nextRAD, and can be extended to other types.&lt;br /&gt;
More information about our installation can be found here [[PyRAD]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;RAXML&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Randomized Axelerated Maximum Likelihood (RAxML) is a program for sequential and parallel&lt;br /&gt;
maximum likelihood based inference of large phylogenetic trees.  It is a descendant of fastDNAml,&lt;br /&gt;
which in turn was derived from Joe Felsenstein&#039;s DNAml, which is part of the PHYLIP package.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
RAxML is installed at the CUNY HPC Center on ANDY, and multiple versions are available in both serial and MPI-parallel builds.  The MPI-parallel version should be run on four or more cores; it is also installed on PENZIAS. &lt;br /&gt;
More information about our installation can be found here [[RAXML]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Structurama&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Structurama is a program for inferring population structure from genetic data. The program assumes that the sampled loci&lt;br /&gt;
are in linkage equilibrium and that the allele frequencies for each population are drawn from a Dirichlet probability distribution. Two different models for population structure are implemented.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
First, Structurama offers the method of Pritchard et al. (2000) in which the number of populations is considered fixed. The program also allows the number of populations to be a random variable following a Dirichlet process prior (Pella and Masuda, 2006; Huelsenbeck and Andolfatto, 2007).&lt;br /&gt;
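The Dirichlet process prior can be illustrated through its Chinese-restaurant-process form (a sketch only, not Structurama&#039;s code): each individual joins an existing population with probability proportional to that population&#039;s current size, or founds a new one with probability proportional to the concentration parameter, so the number of populations is itself random:&lt;br /&gt;

```python
# Chinese restaurant process sketch of a Dirichlet process prior
# (illustrative only, not Structurama's code).
import random

random.seed(42)

def crp_assignments(n_individuals, alpha):
    counts = []                              # individuals per population
    for _ in range(n_individuals):
        weights = counts + [alpha]           # existing populations + a new one
        k = random.choices(range(len(weights)), weights=weights)[0]
        if k == len(counts):
            counts.append(1)                 # found a new population
        else:
            counts[k] += 1
    return counts

print(crp_assignments(100, alpha=1.0))       # population sizes, e.g. a handful
```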
More information about our installation can be found here [[STRUCTURAMA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Structure&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The program Structure is a free software package for using multi-locus genotype data to investigate&lt;br /&gt;
population structure.  Its uses include inferring the presence of distinct populations, assigning individuals&lt;br /&gt;
to populations, studying hybrid zones, identifying migrants and admixed individuals, and estimating&lt;br /&gt;
population allele frequencies in situations where many individuals are migrants or admixed.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;It can be applied to most of the commonly-used genetic markers, including SNPs, microsatellites, RFLPs and AFLPs. More detailed information about Structure can be found at the web site here [http://pritch.bsd.uchicago.edu/structure.html]. Structure is installed on ANDY at the CUNY HPC Center and is a serial program. &lt;br /&gt;
More information about our installation can be found here [[STRUCTURE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;TOPHAT&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
TOPHAT is a fast splice junction mapper for RNA-Seq reads. It aligns RNA-Seq reads to mammalian-sized&lt;br /&gt;
genomes using the ultra high-throughput short read aligner Bowtie, and then analyzes the mapping results&lt;br /&gt;
to identify splice junctions between exons.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
TOPHAT is part of a sequence alignment and analysis tool chain developed at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics and Computational Biology.&lt;br /&gt;
More information about our installation can be found here [[TOPHAT]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Trinity&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Trinity, developed at the Broad Institute and the Hebrew University of Jerusalem, represents a novel method for the efficient and robust de novo reconstruction of transcriptomes from RNA-seq data.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Trinity combines three independent software modules: Inchworm, Chrysalis, and Butterfly, applied sequentially to process large volumes of RNA-seq reads. Trinity partitions the sequence data into many individual de Bruijn graphs, each representing the transcriptional complexity at a given gene or locus, and then processes each graph independently to extract full-length splicing isoforms and to tease apart transcripts derived from paralogous genes.&lt;br /&gt;
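The de Bruijn graph construction underlying this partitioning can be sketched in a few lines (illustrative of the idea only, not Trinity&#039;s implementation): nodes are (k-1)-mers, and each observed k-mer contributes a directed edge between its prefix and suffix (k-1)-mers:&lt;br /&gt;

```python
# Toy de Bruijn graph (illustrative only, not Trinity's implementation).
def de_bruijn(reads, k):
    graph = {}
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]               # each k-mer is one edge
            graph.setdefault(kmer[:-1], set()).add(kmer[1:])
    return graph

reads = ["ATGGCGT", "GGCGTGC"]                 # two overlapping toy "reads"
graph = de_bruijn(reads, 4)
for node in sorted(graph):
    print(node, "->", sorted(graph[node]))
```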
More information about our installation can be found here [[TRINITY]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;VELVET&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Velvet is a set of algorithms for &#039;&#039;de novo&#039;&#039; short read assembly using de Bruijn graphs. It was developed at the &lt;br /&gt;
European Bioinformatics Institute, Cambridge, UK.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
More information about our installation can be found here [[VELVET]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Computational Genomics, Proteomics, Microbiomics, Genetics==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;AUGUSTUS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
AUGUSTUS is a program that predicts genes in eukaryotic genomic sequences. AUGUSTUS is gene-finding software based on Hidden Markov Models (HMMs), &lt;br /&gt;
described in papers by Stanke and Waack (2003), Stanke et al. (2006), Stanke et al. (2006b), and Stanke et al. (2008). The local version of the program is installed on &lt;br /&gt;
PENZIAS. More information can be found here: [[AUGUSTUS]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;CONSED&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CONSED is a DNA sequence analysis and finishing tool that provides sequence viewing, editing, alignment, and&lt;br /&gt;
assembly capabilities from an X Window System graphical user interface (GUI).  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It makes extensive use of other non-graphical&lt;br /&gt;
and underlying sequence analysis tools, including PHRED, PHRAP, and CROSSMATCH, that may also be used separately&lt;br /&gt;
and are described elsewhere in this document.  It also includes a viewer called BAMVIEW.  The CONSED tool chain is&lt;br /&gt;
developed and maintained at the University of Washington and is described&lt;br /&gt;
more completely here [http://bozeman.mbt.washington.edu/consed/consed.html].&lt;br /&gt;
CONSED is provided at the CUNY HPC Center under an academic license that allows use, but not the copying or&lt;br /&gt;
outbound transfer, of any of the executables or files distributed under this academic license.  The license is not&lt;br /&gt;
transferable in any way, and users wishing to run the application at their own site must acquire a license directly&lt;br /&gt;
from the authors.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center supports CONSED version 23.0 for interactive use on KARLE.  CONSED 23.0 and the tool&lt;br /&gt;
chain described above are also installed on ANDY to allow for batch use of the underlying support tools mentioned above&lt;br /&gt;
and described in detail below.  In general, running GUI-based applications on ANDY&#039;s login node is discouraged.  There&lt;br /&gt;
should be little need to do this: KARLE is on the periphery of the CUNY HPC network, making login there direct, and&lt;br /&gt;
KARLE shares its HOME directory file system with ANDY, making files created on either system immediately available on&lt;br /&gt;
the other.&lt;br /&gt;
&lt;br /&gt;
Rather than rewrite portions of the CONSED manual here, users are directed to the manual&#039;s &amp;quot;Quick Tour&amp;quot; section&lt;br /&gt;
here [http://bozeman.mbt.washington.edu/consed/distributions/README.23.0.txt] and asked to walk through some&lt;br /&gt;
of the exercises after logging into KARLE.  If problems or questions come up, please post them to &amp;quot;hpchelp@csi.cuny.edu&amp;quot;.&lt;br /&gt;
The CONSED 23.0 distribution is installed on KARLE in the following directory:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/share/apps/consed/default&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
All the files in the distribution can be found there.&lt;br /&gt;
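To make the CONSED executables easy to invoke from the command line, the distribution directory can be added to the&lt;br /&gt;
shell search path.  The &amp;lt;code&amp;gt;bin&amp;lt;/code&amp;gt; subdirectory name below is an assumption based on the layout of other packages on our&lt;br /&gt;
systems; check the directory listing first.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Assumed layout: verify with &#039;ls /share/apps/consed/default&#039; before use&lt;br /&gt;
export PATH=/share/apps/consed/default/bin:$PATH&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;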
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;ExaML&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ExaML stands for Exascale Maximum Likelihood (ExaML) code for phylogenetic inference using MPI. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The code is installed only on Penzias and implements the popular RAxML search algorithm for maximum likelihood based inference of phylogenetic trees. &lt;br /&gt;
&lt;br /&gt;
It uses a radically new MPI parallelization approach that yields improved parallel efficiency, in particular on partitioned multi-gene or whole-genome datasets.&lt;br /&gt;
&lt;br /&gt;
When using ExaML please cite the following paper:&lt;br /&gt;
&lt;br /&gt;
Alexey M. Kozlov, Andre J. Aberer, Alexandros Stamatakis: &amp;quot;ExaML Version 3: A Tool for Phylogenomic Analyses on Supercomputers.&amp;quot; Bioinformatics (2015) 31 (15): 2577-2579.&lt;br /&gt;
&lt;br /&gt;
It is up to 4 times faster than RAxML-Light [1].&lt;br /&gt;
&lt;br /&gt;
Like RAxML-Light, ExaML also implements checkpointing, SSE3 and AVX vectorization, and memory-saving techniques.&lt;br /&gt;
&lt;br /&gt;
[1] A. Stamatakis, A.J. Aberer, C. Goll, S.A. Smith, S.A. Berger, F. Izquierdo-Carrasco: &amp;quot;RAxML-Light: A Tool for computing TeraByte Phylogenies&amp;quot;, Bioinformatics 2012; doi: 10.1093/bioinformatics/bts309.&lt;br /&gt;
&lt;br /&gt;
The run script for a parallel ExaML job is analogous to the one used for running RAxML on Penzias and Andy.&lt;br /&gt;
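A minimal SLURM sketch for such a run is shown below.  The module name, executable name, and input file names are&lt;br /&gt;
assumptions for illustration; consult the ExaML manual for the exact flags (note that the alignment must first be&lt;br /&gt;
converted to ExaML&#039;s binary format with the bundled parser).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name examl_run&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=8&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to the working directory in SLURM&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
# Module and file names are illustrative; adjust them to your data set&lt;br /&gt;
module load examl&lt;br /&gt;
mpirun -np 8 examl -s alignment.binary -t starting.tree -m GAMMA -n run1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;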
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;ExaBayes&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ExaBayes is a software package for Bayesian tree inference. It is particularly suitable for large-scale analyses on computer clusters. It is installed on the Penzias server at the HPC Center. &lt;br /&gt;
The installed package is the MPI-parallel version. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Availability:&#039;&#039;&#039; PENZIAS&lt;br /&gt;
&#039;&#039;&#039;Module file:&#039;&#039;&#039; exabayes&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Citation&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
Fredrik Ronquist, Maxim Teslenko, Paul van der Mark, Daniel L Ayres, Aaron Darling, Sebastian Höhna, Bret Larget, Liang Liu, Marc a Suchard, and John P Huelsenbeck. MrBayes 3.2: efficient Bayesian phylogenetic inference and model choice across a large model space. Systematic biology, 61(3):539--42, May 2012.&lt;br /&gt;
&lt;br /&gt;
Alexei J Drummond, Marc a Suchard, Dong Xie, and Andrew Rambaut. Bayesian phylogenetics with BEAUti and the BEAST 1.7. Molecular biology and evolution, 29(8):1969--73, August 2012. &lt;br /&gt;
&lt;br /&gt;
Clemens Lakner, Paul van der Mark, John P Huelsenbeck, Bret Larget, and Fredrik Ronquist. Efficiency of Markov chain Monte Carlo tree proposals in Bayesian phylogenetics. Systematic biology, 57(1):86--103, February 2008. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Use:&#039;&#039;&#039; An example SLURM script to run ExaBayes on PENZIAS is given below&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name=&amp;lt;name_of_job&amp;gt;&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=2&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to the working directory in SLURM&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
mpirun -np 2 exabayes &amp;lt;input_file&amp;gt; &amp;gt; output_file&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
More information about the application, along with sample workflows, is available on the ExaBayes web site:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://sco.h-its.org/exelixis/web/software/exabayes/manual/index.html#sec-11&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GENOMEPOP2&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GenomePop2 is a newer and specialized version of the older program GenomePop. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GenomePop2 is designed to manage SNPs under more flexible and useful settings that are controlled by the user.  &lt;br /&gt;
If you need models with more than 2 alleles you should use the older GenomePop version of the program.  &lt;br /&gt;
&lt;br /&gt;
GenomePop2 allows the forward simulation of sequences of biallelic positions. As in the previous version, a number of evolutionary&lt;br /&gt;
and demographic settings are allowed. Several populations under any migration model can be implemented. Each population consists&lt;br /&gt;
of a number N of individuals.  Each individual is represented by one (haploids) or two (diploids) chromosomes with constant or variable&lt;br /&gt;
(hotspots) recombination between binary sites. The fitness model is multiplicative, with each derived allele having a multiplicative effect&lt;br /&gt;
of (1-s * h-E) on the global fitness value. By default E=0 and h=0.5 in diploids, but 1 in homozygotes or in haploids. Selective nucleotide&lt;br /&gt;
sites undergoing directional selection (positive or negative) in different populations can be defined. In addition, bottleneck and/or&lt;br /&gt;
population expansion scenarios can be set by the user for a desired number of generations. Several runs can be executed, and&lt;br /&gt;
a sample of user-defined size is obtained for each run and population.  For more detail on how to use GenomePop2, please visit the&lt;br /&gt;
web site here [http://webs.uvigo.es/acraaj/GenomePop2.htm]. More information about our installation can be found here [[GENOMEPOP2]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;HUMAnN2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
HUMAnN is a pipeline for efficiently and accurately profiling the presence/absence and abundance of microbial pathways in a community from metagenomic or metatranscriptomic sequencing data (typically millions of short DNA/RNA reads). HUMAnN2 is the next generation of HUMAnN (HMP Unified Metabolic Analysis Network). Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/humann2&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;IMa2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The IMa2 application performs ‘Isolation with Migration’ calculations using Bayesian inference and Markov &lt;br /&gt;
chain Monte Carlo methods. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The major conceptual addition that distinguishes IMa2 from the&lt;br /&gt;
original IMa program is that it can handle data from multiple populations. This requires that the user &lt;br /&gt;
specify a phylogenetic tree. Importantly, the tree must be rooted, and the sequence in time of internal&lt;br /&gt;
nodes must be known and specified. More information on the IMa2 and IMa can be found in the user&lt;br /&gt;
manual here [http://lifesci.rutgers.edu/%7Eheylab/ProgramsandData/Programs/IMa2/Using_IMa2_8_24_2011.pdf].&lt;br /&gt;
Information about our installation can be found here [[IMA2]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;I-TASSER&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
I-TASSER is a platform for protein structure and function prediction. 3D models are built from multiple-threading alignments by LOMETS and iterative template fragment assembly simulations; function insights are derived by matching the 3D models against the BioLiP protein function database. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/itasser&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;LAMARC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
LAMARC is a program which estimates population-genetic parameters such as population size, population growth rate,&lt;br /&gt;
recombination rate, and migration rates.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It approximates a summation over all possible genealogies that could explain&lt;br /&gt;
the observed sample, which may be sequence, SNP, microsatellite, or electrophoretic data.  LAMARC and its sister program&lt;br /&gt;
MIGRATE are successor programs to the older programs Coalesce, Fluctuate, and Recombine, which are no longer being&lt;br /&gt;
supported.  These programs are memory-intensive, but can run effectively on workstations. They are supported on a variety&lt;br /&gt;
of operating systems.  For more detail on LAMARC please visit the website here [http://evolution.genetics.washington.edu/lamarc/index.html],&lt;br /&gt;
read this paper [http://evolution.genetics.washington.edu/lamarc/download/bioinformatics2006-lamarc2.0.pdf], and look&lt;br /&gt;
at the documentation here [http://evolution.genetics.washington.edu/lamarc/documentation/index.html]. More information&lt;br /&gt;
about our installation can be found here [[LAMARC]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;QIIME&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
QIIME (pronounced &amp;quot;chime&amp;quot;) stands for Quantitative Insights Into Microbial Ecology. QIIME is a pipeline application that uses numerous third-party applications.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
QIIME takes users from their raw sequencing output through initial analyses such as OTU picking, taxonomic assignment, and construction of phylogenetic trees from representative sequences of OTUs, and through downstream statistical analysis, visualization, and production of publication-quality graphics.&lt;br /&gt;
More information about our installation can be found here [[QIIME]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;USEARCH&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
USEARCH is a unique sequence analysis tool with thousands of users world-wide.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
USEARCH offers search and clustering algorithms that are often orders of magnitude faster than BLAST. &lt;br /&gt;
More information about our installation can be found here [[USEARCH]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;VELVET&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Velvet is a set of algorithms for &#039;&#039;de novo&#039;&#039; short read assembly using de Bruijn graphs. It was developed at the European Bioinformatics Institute, Cambridge, UK. More information about our installation can be found here [[VELVET]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;VSEARCH&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
VSEARCH is an open-source alternative to USEARCH.  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
VSEARCH stands for vectorized search, as the tool takes advantage of parallelism in the form of SIMD vectorization as well as multiple threads to perform accurate alignments at high speed. VSEARCH uses an optimal global aligner (full dynamic programming Needleman-Wunsch), in contrast to USEARCH which by default uses a heuristic seed and extend aligner. This usually results in more accurate alignments and overall improved sensitivity (recall) with VSEARCH, especially for alignments with gaps. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Additional details on VSEARCH can be found at: [https://github.com/torognes/vsearch this link]&lt;br /&gt;
&lt;br /&gt;
VSEARCH is installed on the Penzias HPC cluster. To start using VSEARCH, load the corresponding module first:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load vsearch  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
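Once the module is loaded, a typical invocation looks like the sketch below; the file names are placeholders, and the&lt;br /&gt;
clustering threshold should be chosen for your data set (see the VSEARCH documentation for the full option list).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Cluster reads at 97% identity; input/output names are illustrative&lt;br /&gt;
vsearch --cluster_fast reads.fasta --id 0.97 --centroids centroids.fasta&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;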
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Math, Engineering, Computer Science== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;ADCIRC&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ADCIRC is a system of programs for solving time-dependent, free-surface, circulation and transport problems in two and three dimensions.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;  These programs utilize the finite element method in space allowing the use of highly flexible, unstructured grids. The ADCIRC distribution includes and integrates the METIS tool for unstructured grid generation. In addition, ADCIRC includes a distribution of SWAN to which it can be coupled to add a shore wave simulation model. Typical ADCIRC applications have included: (i) modeling tides and wind driven circulation, (ii) analysis of hurricane storm surge and flooding, (iii) dredging feasibility and material disposal studies, (iv) larval transport studies, (v) near shore marine operations. For more detail on using ADCIRC, please visit the ADCIRC website here [http://adcirc.org/index.html] and read the ADCIRC manual [http://adcirc.org/documentv49/ADCIRC_title_page.html]. Details on using SWAN with ADCIRC can be found here [http://www.caseydietrich.com/swanadcirc] and at the SWAN web site [http://swanmodel.sourceforge.net]. More information about use and set-up can be found here [[ADCIRC]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;FDPPDIV&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
FDPPDiv is a program for estimating divergence times on a fixed, rooted tree topology. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
FDPPDiv offers two alternative approaches to divergence time estimation. &lt;br /&gt;
The DPPDiv part refers to the Dirichlet Process Prior (DPP) model for divergence &lt;br /&gt;
time estimation, and the F prefix (for Fossil) refers to the new Fossil Birth-Death approach. &lt;br /&gt;
More information about our installation can be found here [[FDPPDIV]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GAUSS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
An easy-to-use data analysis, mathematical and statistical environment based on the powerful, fast and efficient GAUSS Matrix Programming Language.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GAUSS is used to solve real world problems and data analysis problems of exceptionally large &lt;br /&gt;
scale. GAUSS is currently available on ANDY. At the CUNY HPC Center&lt;br /&gt;
GAUSS is typically run in serial mode. (Note:  GAUSS should not be confused with the&lt;br /&gt;
computational chemistry application Gaussian.) More information about our installation can &lt;br /&gt;
be found here [[GAUSS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Hapsembler&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Hapsembler is a haplotype-specific genome assembly toolkit that is designed for genomes that are rich in SNPs and other types of polymorphism. Hapsembler can be used to assemble reads from a variety of platforms including Illumina and Roche/454.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;  Hapsembler is currently installed on the Appel system. To access the Hapsembler binaries, load the hapsembler module with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load hapsembler&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;HOPSPACK&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
HOPSPACK stands for Hybrid Optimization Parallel Search Package, designed to help users solve a wide range of derivative-free optimization problems.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;The latter can be noisy, non-convex, or non-smooth.  The basic optimization problem addressed is to minimize an objective function f(x) of n unknowns subject to the constraints: A_I x ≥ b_I, A_E x = b_E, c_I(x) ≥ 0, c_E(x) = 0, and l ≤ x ≤ u.&lt;br /&gt;
The first two constraints specify linear inequalities and equalities with coefficient matrices A_I and A_E. The next two constraints describe nonlinear inequalities and equalities captured in the functions c_I(x) and c_E(x). The final constraints denote lower and upper bounds on the variables. HOPSPACK allows variables to be continuous or integer-valued and has provisions for multi-objective optimization problems. In general, the functions f(x), c_I(x), and c_E(x) can be noisy and nonsmooth, although most algorithms perform best on deterministic functions with continuous derivatives.&lt;br /&gt;
&lt;br /&gt;
Users may design and implement their own solver, either by writing their own code or by building on existing solvers already in the framework. Because all solvers (called citizens) are members of the same global class, they can share assigned resources.   &lt;br /&gt;
The main features of the package are:&lt;br /&gt;
&lt;br /&gt;
-	Only function values are required for the optimization.&lt;br /&gt;
-	The user must provide a separate program that can evaluate the objective and nonlinear constraint functions at a given point. &lt;br /&gt;
-	A robust implementation of the Generating Set Search (GSS) solver is supplied, including the capability to handle linear constraints. &lt;br /&gt;
-	Multiple solvers can run simultaneously and are easily configured to share information.&lt;br /&gt;
-	Solvers may share a cache of computed function and constraint evaluations to eliminate duplicate work.&lt;br /&gt;
-	Solvers can initiate and control sub-problems.&lt;br /&gt;
Continuation -&amp;gt; [[HOPSACK]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;LS-DYNA&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From its early development in the 1970s, LS-DYNA has evolved into a general purpose material&lt;br /&gt;
stress, collision, and crash analysis program with many built-in material and structural element&lt;br /&gt;
models. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In recent years, the code has also been adapted for both OpenMP and MPI parallel execution&lt;br /&gt;
on a variety of platforms.  The most recent version, LS-DYNA 7.1.2, is installed on &lt;br /&gt;
ANDY at the CUNY HPC Center under an academic license held by the City College of New York.&lt;br /&gt;
The use of this license to do work that is commercial in any way is prohibited.&lt;br /&gt;
&lt;br /&gt;
Details on LS-DYNA&#039;s use, input deck construction, and execution options can be found in the LS-DYNA&lt;br /&gt;
manual here [http://ftp.lstc.com/user/manuals/ls-dyna_971_manual_k_rev1.pdf]. All files related&lt;br /&gt;
to the HPC Center installation of version 971 (executables and example inputs) are located in:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
/share/apps/lsdyna/default/[bin,examples]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[LSDYNA]].&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Network Simulator-2 (NS2)&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NS2 is a discrete event simulator targeted at networking research. NS2 provides&lt;br /&gt;
substantial support for simulation of TCP, routing, and multicast protocols over&lt;br /&gt;
wired and wireless (local and satellite) networks.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It is installed on BOB at the CUNY HPC Center. For more detailed information look [http://www.isi.edu/nsnam/ns/ here].&lt;br /&gt;
More information about our installation can be found here [[NS2]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;OpenFOAM&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenFOAM is, first and foremost, a library that users may incorporate into their own code(s). OpenFOAM is installed on PENZIAS.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
More information about our installation can be found here [[OpenFOAM]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;OpenSEES&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenSEES, the Open System for Earthquake Engineering Simulation, is an object-oriented, open source software framework.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It allows users to create both serial and parallel finite element computer applications for simulating the response of structural and geotechnical systems subjected to earthquakes and other hazards. OpenSEES is primarily written in C++ and uses several Fortran and C numerical libraries for linear equation solving, and material and element routines. The software is installed on PENZIAS.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;ParGAP&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ParGAP is built on top of the GAP system. The latter is a system for computational discrete algebra, with particular emphasis on computational group theory. GAP provides a programming language, a library of thousands of functions implementing algebraic algorithms written in the GAP language, as well as large data libraries of algebraic objects.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The ParGAP (Parallel GAP) package itself provides a way of writing parallel programs using the GAP language. Former names of the package were ParGAP/MPI and GAP/MPI; the word MPI refers to Message Passing Interface, a well-known standard for parallelism. ParGAP is based on the MPI standard, and this distribution includes a subset implementation of MPI, to provide a portable layer with a high level interface to BSD sockets.&lt;br /&gt;
More information about our installation can be found here [[ParGAP]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;SAGE&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Sage can be used to study elementary and advanced, pure and applied mathematics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
This includes a huge range of mathematics, including basic algebra, calculus, elementary to very&lt;br /&gt;
advanced number theory, cryptography, numerical computation, commutative algebra, group&lt;br /&gt;
theory, combinatorics, graph theory, exact linear algebra and much more.&lt;br /&gt;
More information about our installation can be found here [[SAGE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;WRF&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The Weather Research and Forecasting (WRF) model is a specific computer program with dual use for both weather&lt;br /&gt;
forecasting and weather research.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It was created through a partnership that includes the National Oceanic and Atmospheric&lt;br /&gt;
Administration (NOAA), the National Center for Atmospheric Research (NCAR), and more than 150 other organizations&lt;br /&gt;
and universities in the United States and abroad. WRF is the latest numerical model and application to be adopted by NOAA&#039;s&lt;br /&gt;
National Weather Service as well as the U.S. military and private meteorological services. It is also being adopted by&lt;br /&gt;
government and private meteorological services worldwide. More information about our installation can be found here [[WRF]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Economics, Business, Statistics, Analytics==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;R&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
R is a free software environment for statistical computing and graphics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:15px;&amp;quot;&amp;gt;General Notes&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The R language has become a de facto standard among statisticians for the development of statistical software, and is widely used for statistical software development and data analysis. R is available on the following HPCC servers: Karle, Penzias, Appel, and Andy. Karle is the only machine where R can be used without submitting jobs to the SLURM manager. On all other systems users must submit their R jobs via the SLURM batch scheduler.&lt;br /&gt;
More information about our installation can be found here [[R]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;R-devel&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
R is a language and environment for statistical computing and graphics. R-devel provides both core R userspace and all R development components.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Stata/MP&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Stata is a complete, integrated statistical package that provides tools for data analysis, data management, and graphics. Stata/MP takes advantage of multiprocessor computers. The CUNY HPC Center is licensed to use Stata on up to 8 cores. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Currently Stata/MP is available for users on Karle (karle.csi.cuny.edu). &lt;br /&gt;
More information about our installation can be found here [[STATA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;SAS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
SAS (pronounced &amp;quot;sass&amp;quot;, originally Statistical Analysis System) is an integrated system of software products provided by SAS Institute Inc.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It enables the programmer to perform:&lt;br /&gt;
:*data entry, retrieval, management, and mining&lt;br /&gt;
:*report writing and graphics&lt;br /&gt;
:*statistical analysis&lt;br /&gt;
:*business planning, forecasting, and decision support&lt;br /&gt;
:*operations research and project management&lt;br /&gt;
:*quality improvement&lt;br /&gt;
:*applications development&lt;br /&gt;
:*data warehousing (extract, transform, load)&lt;br /&gt;
:*platform-independent and remote computing&lt;br /&gt;
More information about our installation can be found here [[SAS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==General Development Systems==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Coming soon.&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==Tools, Libraries, Compilers==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;CGAL&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The Computational Geometry Algorithms Library (CGAL) offers data structures and algorithms.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt; &lt;br /&gt;
Examples of these are triangulations (2D constrained triangulations, and Delaunay triangulations and periodic triangulations in &lt;br /&gt;
2D and 3D), Voronoi diagrams (for 2D and 3D points, 2D additively weighted Voronoi diagrams, and segment Voronoi diagrams), polygons &lt;br /&gt;
(Boolean operations, offsets, straight skeleton), polyhedra (Boolean operations), arrangements of curves and their applications &lt;br /&gt;
(2D and 3D envelopes, Minkowski sums), mesh generation (2D Delaunay mesh generation and 3D surface and volume mesh &lt;br /&gt;
generation, skin surfaces), geometry processing (surface mesh simplification, subdivision and parameterization, as well as &lt;br /&gt;
estimation of local differential properties, and approximation of ridges and umbilics), alpha shapes, convex hull &lt;br /&gt;
algorithms (in 2D, 3D and dD), search structures (kd trees for nearest neighbor search, and range and segment trees), &lt;br /&gt;
interpolation (natural neighbor interpolation and placement of streamlines), shape analysis, fitting, and distances &lt;br /&gt;
(smallest enclosing sphere of points or spheres, smallest enclosing ellipsoid of points, principal component analysis), and &lt;br /&gt;
kinetic data structures.&lt;br /&gt;
&lt;br /&gt;
The library is installed on PENZIAS.&lt;br /&gt;
&lt;br /&gt;
More information can be found here http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/CGAL. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
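To illustrate the kind of algorithm CGAL provides, here is a minimal 2D convex hull (Andrew&#039;s monotone chain) as a plain Python sketch; CGAL&#039;s own C++ API is far more general and robust, and nothing below is CGAL code.&lt;br /&gt;

```python
# Minimal 2D convex hull via Andrew's monotone chain -- a plain-Python
# illustration of one algorithm family CGAL implements (not CGAL's API).

def cross(o, a, b):
    """Cross product of vectors OA and OB; > 0 means a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Return hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                      # build the lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build the upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # drop duplicated endpoints

hull = convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)])
```

The interior point (1, 1) is discarded and the four corners come back in counter-clockwise order.&lt;br /&gt;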
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GMP&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
GMP is a library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and &lt;br /&gt;
floating-point numbers. There is no practical limit to the precision except the ones implied by the &lt;br /&gt;
available memory in the machine GMP runs on. GMP has a rich set of functions, and the functions have a &lt;br /&gt;
regular interface. The library is installed on PENZIAS.&lt;br /&gt;
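As a quick illustration of what arbitrary-precision arithmetic buys you, Python&#039;s built-in integers and fractions give the same guarantees that GMP&#039;s mpz_t/mpq_t types give C programs; this is an analogy, not GMP itself.&lt;br /&gt;

```python
# Arbitrary-precision arithmetic, illustrated with Python's built-in types.
# (GMP provides the same guarantees to C programs via mpz_t / mpq_t.)
import math
from fractions import Fraction

big = 2**256                              # never overflows, unlike fixed-width ints
fact = math.factorial(50)                 # exact 65-digit result
exact = Fraction(1, 3) + Fraction(1, 6)   # exact rational arithmetic
```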
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Gnuplot&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Gnuplot is a portable command-line driven graphing utility. It is installed on the following systems:&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
:*Karle under /usr/bin/gnuplot&lt;br /&gt;
:*Andy under /share/apps/gnuplot/default/bin/gnuplot&lt;br /&gt;
&lt;br /&gt;
Extensive documentation of gnuplot is available at the [http://www.gnuplot.info/ gnuplot&#039;s homepage].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;JULIA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Julia is a high-level, high-performance dynamic programming language for technical computing, with syntax that is familiar to users of other technical computing environments. Julia is installed on Penzias.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MAGMA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
MAGMA is a library similar to LAPACK but for hybrid architectures. MAGMA provides implementations for CUDA, Intel Xeon Phi, and OpenCL. &lt;br /&gt;
On CUNY HPCC systems, MAGMA is installed in its CUDA variant only on Penzias.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MATHEMATICA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
“Mathematica” is a fully integrated technical computing system that combines fast, high-precision numerical and symbolic computation with data visualization and programming capabilities. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Mathematica is currently installed on the CUNY HPC Center&#039;s ANDY cluster (andy.csi.cuny.edu) and the KARLE standalone server (karle.csi.cuny.edu). The basics of running Mathematica on CUNY HPC systems are presented here.  Additional information on how to use Mathematica can be found at http://www.wolfram.com/learningcenter/&lt;br /&gt;
&lt;br /&gt;
More information is available in this wiki, find it here [[MATHEMATICA]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MATLAB&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The MATLAB high-performance language for technical computing&lt;br /&gt;
integrates computation, visualization, and programming in an&lt;br /&gt;
easy-to-use environment where problems and solutions are expressed in&lt;br /&gt;
familiar mathematical notation.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Typical uses include:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Math and computation&lt;br /&gt;
&lt;br /&gt;
Algorithm development&lt;br /&gt;
&lt;br /&gt;
Data acquisition&lt;br /&gt;
&lt;br /&gt;
Modeling, simulation, and prototyping&lt;br /&gt;
&lt;br /&gt;
Data analysis, exploration, and visualization&lt;br /&gt;
&lt;br /&gt;
Scientific and engineering graphics&lt;br /&gt;
&lt;br /&gt;
Application development, including graphical user interface building&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[MATLAB]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MET (Model Evaluation Tools)&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MET was developed by the National Center for Atmospheric Research (NCAR) Developmental Testbed Center (DTC) through the generous support of the U.S. Air Force Weather Agency (AFWA) and the National Oceanic and Atmospheric Administration (NOAA).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;MET is designed to be a highly-configurable, state-of-the-art suite of verification tools. It was developed using output from the Weather Research and Forecasting (WRF) modeling system but may be applied to the output of other modeling systems as well.&lt;br /&gt;
&lt;br /&gt;
MET provides a variety of verification techniques, including:&lt;br /&gt;
&lt;br /&gt;
*Standard verification scores comparing gridded model data to point-based observations&lt;br /&gt;
*Standard verification scores comparing gridded model data to gridded observations&lt;br /&gt;
*Spatial verification methods comparing gridded model data to gridded observations using neighborhood, object-based, and intensity-scale decomposition approaches&lt;br /&gt;
*Probabilistic verification methods comparing gridded model data to point-based or gridded observations&lt;br /&gt;
&lt;br /&gt;
More information about use and set-up can be found here [[MET]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
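The simplest of these scores can be sketched in a few lines. The following Python snippet computes the mean error (bias) and RMSE for paired forecast/observation values; it only illustrates the statistics and is not MET&#039;s implementation or interface, and the sample numbers are invented.&lt;br /&gt;

```python
import math

def bias(forecasts, observations):
    """Mean error: average of (forecast - observation)."""
    return sum(f - o for f, o in zip(forecasts, observations)) / len(forecasts)

def rmse(forecasts, observations):
    """Root-mean-square error of forecasts against observations."""
    n = len(forecasts)
    return math.sqrt(sum((f - o) ** 2 for f, o in zip(forecasts, observations)) / n)

fcst = [21.0, 19.5, 23.0, 18.0]   # hypothetical model temperatures
obs = [20.0, 20.0, 22.0, 19.0]    # hypothetical station observations
b, r = bias(fcst, obs), rmse(fcst, obs)
```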
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Migrate&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Migrate estimates population parameters, effective population sizes and migration rates&lt;br /&gt;
of n populations, using genetic data.  It uses a coalescent theory approach taking into&lt;br /&gt;
account the history of mutations and the uncertainty of the genealogy.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;The estimates of the parameter values are achieved by either a Maximum likelihood (ML) approach or Bayesian&lt;br /&gt;
inference (BI).  Migrate&#039;s output is presented in a text file and in a PDF file. The PDF file&lt;br /&gt;
will eventually contain all possible analyses, including histograms of posterior distributions.&lt;br /&gt;
More information about our installation can be found here [[MIGRATE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Python&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Python is a programming language that lets you work more quickly and integrate your systems more effectively. You can learn to use Python and see almost immediate gains in productivity and lower maintenance costs. [http://www.python.org/]&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
There are two supported versions installed on the Andy system: &lt;br /&gt;
&lt;br /&gt;
* Python 3.1.3 located under /share/apps/python/3.1.3/bin&lt;br /&gt;
*Python 2.7.3 located under /share/apps/epd/7.3-2/bin&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[PYTHON]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;SAMTOOLS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
SAMTOOLS provide various utilities for manipulating alignments in the SAM format, including sorting,&lt;br /&gt;
merging, indexing and generating alignments in a per-position format.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
SAM (Sequence Alignment/Map) format is a generic format for storing large nucleotide sequence alignments.  SAM aims to be a compact&lt;br /&gt;
format that:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Is flexible enough to store all the alignment information generated by various alignment programs;&lt;br /&gt;
&lt;br /&gt;
Is simple enough to be easily generated by alignment programs or converted from existing formats;&lt;br /&gt;
&lt;br /&gt;
Allows most operations on the alignment to work without loading the whole alignment into memory;&lt;br /&gt;
&lt;br /&gt;
Allows the file to be indexed by genomic position to efficiently retrieve all reads aligning to a locus.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[SAMTOOLS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
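Since SAM is a plain tab-delimited text format with eleven mandatory columns (QNAME, FLAG, RNAME, POS, MAPQ, CIGAR, RNEXT, PNEXT, TLEN, SEQ, QUAL), a single record can be picked apart in a few lines of Python. This sketch is for illustration only and is not part of SAMTOOLS; the sample record is made up.&lt;br /&gt;

```python
# Parse one SAM alignment record (illustration; SAMTOOLS does this in C).
SAM_FIELDS = ["QNAME", "FLAG", "RNAME", "POS", "MAPQ", "CIGAR",
              "RNEXT", "PNEXT", "TLEN", "SEQ", "QUAL"]

def parse_sam_line(line):
    cols = line.rstrip("\n").split("\t")
    rec = dict(zip(SAM_FIELDS, cols[:11]))
    for field in ("FLAG", "POS", "MAPQ", "PNEXT", "TLEN"):
        rec[field] = int(rec[field])          # the five integer-valued columns
    rec["TAGS"] = cols[11:]                   # optional TAG:TYPE:VALUE fields
    return rec

rec = parse_sam_line("read1\t99\tchr1\t7\t60\t8M\t=\t37\t39\tTTAGATAA\tFFFFFFFF")
```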
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Thrust Library (CUDA)&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Thrust is a C++ template library for CUDA based on the Standard Template Library (STL). Thrust allows you&lt;br /&gt;
to implement high performance parallel applications with minimal programming effort through a high-level&lt;br /&gt;
interface that is fully interoperable with CUDA C.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Thrust is integrated into the default&lt;br /&gt;
CUDA distribution. The HPC Center currently runs CUDA as the default on PENZIAS, which includes the&lt;br /&gt;
Thrust library. More information about our installation can be found here [[THRUST]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Xmgrace&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Grace is a WYSIWYG 2D plotting tool for the X Window System and M*tif. Xmgrace is developed at the Plasma Laboratory, Weizmann Institute of Science. More information about its capabilities can be found at the web page http://plasma-gate.weizmann.ac.il/Grace/&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Grace is installed on Karle. To use it from the command line, log in to Karle as usual and start Grace by typing &amp;quot;xmgrace&amp;quot; followed by return, or alternatively use the full path: &amp;quot;/share/apps/xmgrace/default/grace/bin/xmgrace&amp;quot;&lt;br /&gt;
To use Grace in GUI mode, log in to Karle with the -X option (see [http://wiki.csi.cuny.edu/cunyhpc/index.php/Main_Page#X11_Forwarding_or_Tunneling this article] &lt;br /&gt;
for details) and start Xmgrace as described above.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Alphabetical List ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==A== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;ADCIRC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ADCIRC is a system of programs for solving time-dependent, free-surface, circulation and transport problems in two and three dimensions.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;  These programs utilize the finite element method in space allowing the use of highly flexible, unstructured grids. The ADCIRC distribution includes and integrates the METIS tool for unstructured grid generation. In addition, ADCIRC includes a distribution of SWAN to which it can be coupled to add a shore wave simulation model. Typical ADCIRC applications have included: (i) modeling tides and wind driven circulation, (ii) analysis of hurricane storm surge and flooding, (iii) dredging feasibility and material disposal studies, (iv) larval transport studies, (v) near shore marine operations. For more detail on using ADCIRC, please visit the ADCIRC website here [http://adcirc.org/index.html] and read the ADCIRC manual [http://adcirc.org/documentv49/ADCIRC_title_page.html]. Details on using SWAN with ADCIRC can be found here [http://www.caseydietrich.com/swanadcirc] and at the SWAN web site [http://swanmodel.sourceforge.net]. More information about use and set-up can be found here [[ADCIRC]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;AMBER (Assisted Model Building with Energy Refinement)&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Amber is the collective name for a suite of programs for classical bio-molecular simulations. &lt;br /&gt;
The name &amp;quot;Amber&amp;quot; also denotes the family of potentials (force fields) used with Amber &lt;br /&gt;
software. Here we discuss only simulation packages, but not the force fields or free tools&lt;br /&gt;
available via AmberTools package. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/amber&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;ANVIO&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Anvio is an analysis and visualization platform for genomics data. Anvio allows various types of workflows to be &lt;br /&gt;
established. [[ANVIO]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;AUGUSTUS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
AUGUSTUS is a program that predicts genes in eukaryotic genomic sequences. AUGUSTUS is gene-finding software based on Hidden Markov Models (HMMs), &lt;br /&gt;
described in papers by Stanke and Waack (2003), Stanke et al. (2006, 2006b), and Stanke et al. (2008). The local version of the program is installed on &lt;br /&gt;
Penzias. More information can be found here: [[AUGUSTUS]]&lt;br /&gt;
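The HMM machinery behind gene finders like AUGUSTUS boils down to decoding the most probable sequence of hidden states. Here is a toy Viterbi decoder with made-up &amp;quot;coding&amp;quot;/&amp;quot;noncoding&amp;quot; states and invented probabilities; nothing in it is taken from AUGUSTUS itself.&lt;br /&gt;

```python
# Toy Viterbi decoding for a two-state gene-finding HMM.
# States and all probabilities are invented for illustration only.

def viterbi(obs, states, start_p, trans_p, emit_p):
    # V[t][s]: probability of the best state path ending in s at position t
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            prev, p = max(((r, V[t - 1][r] * trans_p[r][s]) for r in states),
                          key=lambda x: x[1])
            V[t][s] = p * emit_p[s][obs[t]]
            back[t][s] = prev
    path = [max(states, key=lambda s: V[-1][s])]
    for t in range(len(obs) - 1, 0, -1):   # follow the back-pointers
        path.append(back[t][path[-1]])
    return list(reversed(path))

states = ("coding", "noncoding")
start_p = {"coding": 0.5, "noncoding": 0.5}
trans_p = {"coding": {"coding": 0.9, "noncoding": 0.1},
           "noncoding": {"coding": 0.1, "noncoding": 0.9}}
emit_p = {"coding": {"A": 0.1, "C": 0.4, "G": 0.4, "T": 0.1},      # GC-rich
          "noncoding": {"A": 0.4, "C": 0.1, "G": 0.1, "T": 0.4}}   # AT-rich
path = viterbi("GGCC", states, start_p, trans_p, emit_p)
```

With these emission tables a GC-rich stretch decodes as &amp;quot;coding&amp;quot; and an AT-rich one as &amp;quot;noncoding&amp;quot;.&lt;br /&gt;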
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;AUTODOCK&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
AutoDock is a suite of automated docking tools.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt; It is designed to predict how small molecules, such as substrates or drug candidates, bind to a receptor of known 3D structure.  AutoDock actually consists of two main programs: &#039;&#039;autodock&#039;&#039; itself performs the docking of the ligand to a set of grids describing the target protein; &#039;&#039;autogrid&#039;&#039; pre-calculates these grids. More information about the software may be found at the autodock web-page [http://autodock.scripps.edu/]. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/autodock&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==B== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BAMOVA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Bamova is a package used to do genetic analysis of a wide range of organisms on the basis of &lt;br /&gt;
next-generation sequence data. The software implements Bayesian Analysis of Molecular Variance and &lt;br /&gt;
different likelihood models for three different types of molecular data &lt;br /&gt;
(including two models for high throughput sequence data). For more detail on BAMOVA please visit the BAMOVA web site [http://www.uwyo.edu/buerkle/software/bamova] and manual &lt;br /&gt;
here [http://www.uwyo.edu/buerkle/software/bamova/bamova_manual_1.0.pdf]. Further information can also be found here [[BAMOVA]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BAYESCAN&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BAYESCAN is a population genomics software package.  It identifies outlier loci and is applicable &lt;br /&gt;
to both dominant and codominant data. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;BayeScan aims to identify candidate loci under natural selection from genetic data, using differences in allele frequencies between populations.  BayeScan is based on the multinomial-Dirichlet model.  One of the scenarios covered consists of an island model in which subpopulation allele frequencies are correlated through a common migrant gene pool from which they differ in varying degrees.  The difference in allele frequency between this common gene pool and each subpopulation is measured by a subpopulation-specific FST coefficient.  This formulation can therefore consider realistic ecological scenarios where the effective size and the immigration rate may differ among subpopulations.&lt;br /&gt;
More detailed information on Bayescan can be found at the web site here [http://cmpg.unibe.ch/software/bayescan/index.html] and in the manual here [http://cmpg.unibe.ch/software/bayescan/files/BayeScan2.1_manual.pdf]. More information about our installation can be found here [[BAYESCAN]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BEAST&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BEAST is a powerful and flexible evolutionary analysis package for molecular sequence variation. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The package implements a family of Markov chain Monte Carlo (MCMC) algorithms for Bayesian phylogenetic inference, divergence time dating, coalescent analysis, phylogeography and related molecular evolutionary analyses. It is a cross-platform Java program for Bayesian MCMC analysis of molecular sequences. It is entirely orientated towards rooted, time-measured phylogenies inferred using strict or relaxed molecular clock models. It can be used as a method of reconstructing phylogenies, but is also a framework for testing evolutionary hypotheses without conditioning on a single tree topology.  BEAST uses MCMC to average over tree space, so that each tree is weighted proportional to its posterior probability. The distribution includes a simple to use user-interface program called &#039;BEAUti&#039; for setting up standard analyses and a suite of programs for analysing the results. For more detail on BEAST (and BEAUTi) please visit the BEAST web site [http://beast.bio.ed.ac.uk/Main_Page]. More information about our installation can be found here [http://wiki.csi.cuny.edu/cunyhpc/index.php/Template:BEAST BEAST].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
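The MCMC idea at the heart of BEAST can be shown with a minimal random-walk Metropolis sampler. This toy targets a standard normal distribution rather than tree space, and nothing about it comes from BEAST&#039;s code.&lt;br /&gt;

```python
import math
import random

def metropolis(log_target, x0, steps, step_size=0.5, seed=0):
    """Random-walk Metropolis: draw samples with density proportional to exp(log_target)."""
    rng = random.Random(seed)
    x, lp = x0, log_target(x0)
    samples = []
    for _ in range(steps):
        prop = x + rng.gauss(0.0, step_size)
        lp_prop = log_target(prop)
        # accept with probability min(1, target(prop) / target(x))
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# Toy target: standard normal, log density -x^2/2 (up to a constant).
samples = metropolis(lambda x: -0.5 * x * x, 0.0, 20000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

After many steps the sample mean sits near 0 and the variance near 1, i.e. each value is visited in proportion to its posterior probability, which is exactly how BEAST weights trees.&lt;br /&gt;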
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BEST&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BEST is an application aimed to estimate gene trees and the species tree from multilocus sequences.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The program uses information from multiple gene trees and performs a Bayesian analysis to estimate the &lt;br /&gt;
topology of the species tree, divergence times and population sizes.  &lt;br /&gt;
&lt;br /&gt;
It provides a new approach for estimating the mutation-rate-based&lt;br /&gt;
phylogenetic relationships among species.  Its method accounts for deep coalescence,&lt;br /&gt;
but not for other complicating issues such as horizontal transfer or gene duplication. The&lt;br /&gt;
program works in conjunction with the popular Bayesian phylogenetics package MrBayes&lt;br /&gt;
(Ronquist and Huelsenbeck, Bioinformatics, 2003).  BEST&#039;s parameters are defined using&lt;br /&gt;
the &#039;prset&#039; command from MrBayes.  Details on BEST&#039;s capabilities and options are available&lt;br /&gt;
at the BEST web site here [http://www.stat.osu.edu/~dkp/BEST/introduction]. More information&lt;br /&gt;
about our installation is available here [[BEST]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BOWTIE2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BOWTIE2 is an ultrafast and memory-efficient tool for aligning sequencing reads to long reference sequences. It is particularly good at aligning reads of about 50 up to 100s or 1,000s of characters, and particularly good at aligning to relatively long (e.g. mammalian) genomes.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
BOWTIE2 indexes the genome with an FM Index to keep its memory&lt;br /&gt;
footprint small: for the human genome, its memory footprint is typically around 3.2 GB. BOWTIE2 supports gapped,&lt;br /&gt;
local, and paired-end alignment modes. BOWTIE2 is part of a sequence alignment and analysis tool chain developed&lt;br /&gt;
at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics&lt;br /&gt;
and Computational Biology.  The other tools in this collection, CUFFLINKS, SAMTOOLS, and TOPHAT are also installed at&lt;br /&gt;
the CUNY HPC Center. Additional information can be found at the BOWTIE2 home page here [http://bowtie-bio.sourceforge.net/bowtie2/index.shtml].&lt;br /&gt;
Information about our installation can be found here [[BOWTIE2]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
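The FM-index idea behind BOWTIE2 can be sketched compactly: build the Burrows-Wheeler transform of the reference, then count pattern occurrences with backward search. This naive Python version is quadratic and only shows the principle, not BOWTIE2&#039;s engineered implementation.&lt;br /&gt;

```python
# Naive BWT + backward search -- the principle behind an FM index.
# Illustration only; BOWTIE2's real index is compressed, with O(1) rank queries.

def bwt(text):
    text += "$"                      # unique terminator, lexicographically smallest
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(r[-1] for r in rotations)

def fm_count(bwt_str, pattern):
    """Count occurrences of pattern in the original text via backward search."""
    counts = {}
    for ch in bwt_str:
        counts[ch] = counts.get(ch, 0) + 1
    C, total = {}, 0                 # C[c]: number of characters smaller than c
    for ch in sorted(counts):
        C[ch], total = total, total + counts[ch]
    occ = lambda ch, i: bwt_str[:i].count(ch)   # rank query (naive scan)
    lo, hi = 0, len(bwt_str)
    for ch in reversed(pattern):     # extend the match right to left
        if ch not in C:
            return 0
        lo, hi = C[ch] + occ(ch, lo), C[ch] + occ(ch, hi)
        if lo >= hi:
            return 0
    return hi - lo

index = bwt("GATTACA")
```

Each backward-search step narrows a range of the suffix array, so the match is extended one character at a time without ever scanning the reference.&lt;br /&gt;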
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BPP2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BPP2 uses a Bayesian modeling approach to generate the posterior probabilities of species assignments taking into account uncertainties due to unknown gene trees and the ancestral coalescent process. For tractability, it relies on a user-specified guide tree to avoid integrating over all possible species delimitations.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Additional information can be found at the download site here [http://abacus.gene.ucl.ac.uk/software.html]. More information about our installation can be found here [[BPP2]].&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BROWNIE&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
BROWNIE is a program for analyzing rates of continuous character evolution and looking for substantial rate differences in different parts of a tree using likelihood&lt;br /&gt;
ratio tests and Akaike Information Criterion (AIC) statistics. It now also implements many other methods for examining trait evolution and methods for doing species&lt;br /&gt;
delimitation. More information about our installation can be found here [[BROWNIE]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==C== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;CGAL&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The Computational Geometry Algorithms Library (CGAL) offers data structures and algorithms.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt; &lt;br /&gt;
Examples of these are triangulations (2D constrained triangulations, and Delaunay triangulations and periodic triangulations in &lt;br /&gt;
2D and 3D), Voronoi diagrams (for 2D and 3D points, 2D additively weighted Voronoi diagrams, and segment Voronoi diagrams), polygons &lt;br /&gt;
(Boolean operations, offsets, straight skeleton), polyhedra (Boolean operations), arrangements of curves and their applications &lt;br /&gt;
(2D and 3D envelopes, Minkowski sums), mesh generation (2D Delaunay mesh generation and 3D surface and volume mesh &lt;br /&gt;
generation, skin surfaces), geometry processing (surface mesh simplification, subdivision and parameterization, as well as &lt;br /&gt;
estimation of local differential properties, and approximation of ridges and umbilics), alpha shapes, convex hull &lt;br /&gt;
algorithms (in 2D, 3D and dD), search structures (kd trees for nearest neighbor search, and range and segment trees), &lt;br /&gt;
interpolation (natural neighbor interpolation and placement of streamlines), shape analysis, fitting, and distances &lt;br /&gt;
(smallest enclosing sphere of points or spheres, smallest enclosing ellipsoid of points, principal component analysis), and &lt;br /&gt;
kinetic data structures.&lt;br /&gt;
&lt;br /&gt;
The library is installed on PENZIAS.&lt;br /&gt;
&lt;br /&gt;
More information can be found here http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/CGAL. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;CONSED&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CONSED is a DNA sequence analysis and finishing tool that provides sequence viewing, editing, alignment, and&lt;br /&gt;
assembly capabilities from an X Windows graphical user interface (GUI).  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It makes extensive use of other non-graphical&lt;br /&gt;
and underlying sequence analysis tools, including PHRED, PHRAP, and CROSSMATCH, that may also be used separately&lt;br /&gt;
and are described elsewhere in this document.  It also includes a viewer called BAMVIEW.  The CONSED tool chain is&lt;br /&gt;
developed and maintained at the University of Washington and is described&lt;br /&gt;
more completely here [http://bozeman.mbt.washington.edu/consed/consed.html].&lt;br /&gt;
CONSED is provided at the CUNY HPC Center under an academic license that allows use, but not the copying or outbound&lt;br /&gt;
transfer of any of the executables or files distributed under this academic license.  The license is not &lt;br /&gt;
transferable in any way, and users wishing to run the application at their own site must acquire a license directly&lt;br /&gt;
from the authors.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center supports CONSED version 23.0 for interactive use on KARLE.  CONSED 23.0 and the tool&lt;br /&gt;
chain described above are also installed on ANDY to allow for batch use of the underlying support tools mentioned above&lt;br /&gt;
and described in detail below.  In general, running GUI-based applications on ANDY&#039;s login node is discouraged.  There&lt;br /&gt;
should be little need to do this, as KARLE is on the periphery of the CUNY HPC network, making login there direct, and&lt;br /&gt;
KARLE shares its HOME directory file system with ANDY, making files created on either system immediately available on&lt;br /&gt;
the other.&lt;br /&gt;
&lt;br /&gt;
Rather than rewrite portions of the CONSED manual here, users are directed to the manual&#039;s &amp;quot;Quick Tour&amp;quot; section&lt;br /&gt;
here [http://bozeman.mbt.washington.edu/consed/distributions/README.23.0.txt] and asked to walk through some&lt;br /&gt;
of the exercises after logging into KARLE.  If problems or questions come up, please post them to &amp;quot;hpchelp@csi.cuny.edu&amp;quot;.&lt;br /&gt;
The CONSED 23.0 distribution is installed on KARLE in the following directory:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/share/apps/consed/default&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
All the files in the distribution can be found there.&lt;br /&gt;
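To use the tools interactively on KARLE, the distribution directory can be prepended to the search path. This is a minimal sketch only; the exact layout of the distribution (for example, whether the executables live in a bin subdirectory) should be confirmed on the system:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# add the CONSED distribution to the search path (bin subdirectory assumed)&lt;br /&gt;
export PATH=/share/apps/consed/default/bin:$PATH&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;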
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;CP2K&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CP2K is a program to perform atomistic and molecular simulations of solid state, liquid, molecular, and biological&lt;br /&gt;
systems.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It provides a general framework for different methods, such as density functional theory (DFT) using&lt;br /&gt;
a mixed Gaussian and plane waves approach (GPW), and classical pair and many-body potentials. CP2K provides&lt;br /&gt;
state-of-the-art methods for efficient and accurate atomistic simulations. More information about our installation &lt;br /&gt;
can be found here [[CP2K]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;CUFFLINKS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CUFFLINKS assembles transcripts, estimates their abundances, and tests for differential expression and regulation in&lt;br /&gt;
RNA-Seq samples.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It accepts aligned RNA-Seq reads and assembles the alignments into a parsimonious set of transcripts.&lt;br /&gt;
CUFFLINKS then estimates the relative abundances of these transcripts based on how many reads support each one, taking&lt;br /&gt;
into account biases in library preparation protocols.  CUFFLINKS is part of a sequence alignment and analysis tool chain developed&lt;br /&gt;
at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics&lt;br /&gt;
and Computational Biology.  The other tools in this collection, BOWTIE, SAMTOOLS, and TOPHAT, are also installed at&lt;br /&gt;
the CUNY HPC Center.  Additional information can be found at the CUFFLINKS home page here [http://abacus.gene.ucl.ac.uk/software.html].&lt;br /&gt;
More information about our installation can be found here [[CUFFLINKS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==D== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;DL_POLY&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
DL_POLY is a general purpose molecular dynamics simulation package developed at Daresbury Laboratory by W. Smith, T.R. Forester and I.T. Todorov. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Both serial and parallel versions are available. The original package was developed by the Molecular Simulation Group (now part of the Computational Chemistry Group, MSG) at Daresbury Laboratory under the auspices of the Engineering and Physical Sciences Research Council (EPSRC) for the EPSRC&#039;s Collaborative Computational Project for the Computer Simulation of Condensed Phases ( CCP5). Later developments were also supported by the Natural Environment Research Council through the eMinerals project. The package is the property of the Central Laboratory of the Research Councils, UK. More information about our installation and use can be found here [[DL_POLY]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==E== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;ExaML&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ExaML (Exascale Maximum Likelihood) is an MPI code for phylogenetic inference. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The code is installed only on Penzias and implements the popular RAxML search algorithm for maximum likelihood based inference of phylogenetic trees. &lt;br /&gt;
&lt;br /&gt;
It uses a radically new MPI parallelization approach that yields improved parallel efficiency, in particular on partitioned multi-gene or whole-genome datasets.&lt;br /&gt;
&lt;br /&gt;
When using ExaML please cite the following paper:&lt;br /&gt;
&lt;br /&gt;
Alexey M. Kozlov, Andre J. Aberer, Alexandros Stamatakis: &amp;quot;ExaML Version 3: A Tool for Phylogenomic Analyses on Supercomputers.&amp;quot; Bioinformatics (2015) 31 (15): 2577-2579.&lt;br /&gt;
&lt;br /&gt;
It is up to 4 times faster than RAxML-Light [1].&lt;br /&gt;
&lt;br /&gt;
Like RAxML-Light, ExaML also implements checkpointing, SSE3 and AVX vectorization, and memory-saving techniques.&lt;br /&gt;
&lt;br /&gt;
[1] A. Stamatakis, A.J. Aberer, C. Goll, S.A. Smith, S.A. Berger, F. Izquierdo-Carrasco: &amp;quot;RAxML-Light: A Tool for computing TeraByte Phylogenies&amp;quot;, Bioinformatics 2012; doi: 10.1093/bioinformatics/bts309.&lt;br /&gt;
&lt;br /&gt;
The run script for a parallel job is analogous to the one used for running RAxML on Penzias and Andy.&lt;br /&gt;
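For illustration, a minimal SLURM batch script for a 4-task ExaML run might look like the sketch below. The module name and the input files (a binary alignment and a starting tree) are assumptions; consult the RAxML pages for the exact settings used on Penzias and Andy:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition=production&lt;br /&gt;
#SBATCH --job-name=examl_run&lt;br /&gt;
#SBATCH --ntasks=4&lt;br /&gt;
&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
module load examl          # module name assumed&lt;br /&gt;
&lt;br /&gt;
# -s binary alignment, -t starting tree, -m rate heterogeneity model, -n run name&lt;br /&gt;
mpirun -np 4 examl -s alignment.binary -t starting.tree -m GAMMA -n run1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;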
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;ExaBayes&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ExaBayes is a software package for Bayesian tree inference. It is particularly suitable for large-scale analyses on computer clusters. It is installed on the PENZIAS server at the CUNY HPC Center. &lt;br /&gt;
The installed package is the MPI-parallel version. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Availability:&#039;&#039;&#039; PENZIAS&lt;br /&gt;
&#039;&#039;&#039;Module file:&#039;&#039;&#039; exabayes&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Citation&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
Fredrik Ronquist, Maxim Teslenko, Paul van der Mark, Daniel L Ayres, Aaron Darling, Sebastian Höhna, Bret Larget, Liang Liu, Marc a Suchard, and John P Huelsenbeck. MrBayes 3.2: efficient Bayesian phylogenetic inference and model choice across a large model space. Systematic biology, 61(3):539--42, May 2012.&lt;br /&gt;
&lt;br /&gt;
Alexei J Drummond, Marc a Suchard, Dong Xie, and Andrew Rambaut. Bayesian phylogenetics with BEAUti and the BEAST 1.7. Molecular biology and evolution, 29(8):1969--73, August 2012. &lt;br /&gt;
&lt;br /&gt;
Clemens Lakner, Paul van der Mark, John P Huelsenbeck, Bret Larget, and Fredrik Ronquist. Efficiency of Markov chain Monte Carlo tree proposals in Bayesian phylogenetics. Systematic biology, 57(1):86--103, February 2008. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Use:&#039;&#039;&#039; An example SLURM script to run ExaBayes on PENZIAS is given below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition=production&lt;br /&gt;
#SBATCH --job-name=&amp;lt;name_of_job&amp;gt;&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=2&lt;br /&gt;
#SBATCH --export=ALL&lt;br /&gt;
&lt;br /&gt;
# Change to the directory from which the job was submitted&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
mpirun -np 2 exabayes &amp;lt;input_file&amp;gt; &amp;gt; output_file&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
More information about the application, along with sample workflows, is available on the ExaBayes web site:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://sco.h-its.org/exelixis/web/software/exabayes/manual/index.html#sec-11&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==F== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;FDPPDIV&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
FDPPDiv is a program for estimating divergence times on a fixed, rooted tree topology. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
FDPPDiv offers two alternative approaches to divergence time estimation. &lt;br /&gt;
The DPPDiv part refers to the Dirichlet Process Prior (DPP) model for divergence &lt;br /&gt;
time estimation, and the F prefix (for Fossil) refers to the new Fossil Birth-Death approach. &lt;br /&gt;
More information about our installation can be found here [[FDPPDIV]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==G== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GAMESS-US&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GAMESS is a program for ab initio molecular quantum chemistry.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Briefly, GAMESS can compute SCF wavefunctions ranging from RHF, ROHF, UHF, and GVB to MCSCF. Correlation corrections to these SCF wavefunctions include Configuration Interaction, second-order perturbation theory, and Coupled-Cluster approaches, as well as the Density Functional Theory approximation. Excited states can be computed by CI, EOM, or TD-DFT procedures. Nuclear gradients are available for automatic geometry optimization, transition state searches, or reaction path following. Computation of the energy hessian permits prediction of vibrational frequencies, with IR or Raman intensities. Solvent effects may be modeled by discrete Effective Fragment Potentials, or by continuum models such as the Polarizable Continuum Model. Numerous relativistic computations are available, including infinite-order two-component scalar corrections, with various spin-orbit coupling options. The Fragment Molecular Orbital method permits many of these sophisticated treatments to be applied to very large systems by dividing the computation into small fragments. Nuclear wavefunctions can also be computed, in VSCF, or with explicit treatment of nuclear orbitals by the NEO code. More information, including code, can be found here [[GAMESS-US]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GARLI&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GARLI is a program that performs phylogenetic inference using the maximum-likelihood criterion.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Several sequence types are supported, including nucleotide, amino acid and codon. Version 2.0 adds support for&lt;br /&gt;
partitioned models and morphology-like data types. It is usable on all operating systems, and is written and&lt;br /&gt;
maintained by Derrick Zwickl at the University of Texas at Austin.  Additional information can be found&lt;br /&gt;
on the GARLI Wiki here [https://www.nescent.org/wg_garli/Main_Page]. More information about our installation &lt;br /&gt;
can be found here [[GARLI]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GAUSS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GAUSS is an easy-to-use data analysis, mathematical, and statistical environment based on the powerful, fast, and efficient GAUSS Matrix Programming Language.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GAUSS is used to solve real world problems and data analysis problems of exceptionally large &lt;br /&gt;
scale. GAUSS is currently available on ANDY. At the CUNY HPC Center&lt;br /&gt;
GAUSS is typically run in serial mode. (Note:  GAUSS should not be confused with the&lt;br /&gt;
computational chemistry application Gaussian.) More information about our installation can &lt;br /&gt;
be found here [[GAUSS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Gaussian09&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Gaussian09 is third-party, commercially licensed software from Gaussian, Inc. It is a set of programs for calculating electronic structure.  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Gaussian09 is available for general use only on ANDY. The Gaussian User Guide can be found at [http://www.gaussian.com]. More information about our installation can be found here [[GAUSSIAN09]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GMP&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
GMP is a library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and &lt;br /&gt;
floating-point numbers. There is no practical limit to the precision except those implied by the &lt;br /&gt;
available memory in the machine GMP runs on. GMP has a rich set of functions, and the functions have a &lt;br /&gt;
regular interface. The library is installed on PENZIAS.&lt;br /&gt;
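Since GMP is a library rather than a stand-alone application, it is used by linking it into compiled code. A minimal sketch, assuming the library and headers are visible to the compiler (a module may need to be loaded first; the module name is an assumption):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load gmp            # module name assumed&lt;br /&gt;
gcc -o myprog myprog.c -lgmp&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;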
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Gnuplot&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Gnuplot is a portable command-line driven graphing utility. It is installed on the following systems:&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
:*Karle under /usr/bin/gnuplot&lt;br /&gt;
:*Andy under /share/apps/gnuplot/default/bin/gnuplot&lt;br /&gt;
&lt;br /&gt;
Extensive documentation of gnuplot is available at the [http://www.gnuplot.info/ gnuplot&#039;s homepage].&lt;br /&gt;
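As a quick check of the installation, gnuplot can also be driven entirely from the command line. The sketch below writes a PNG plot of sin(x) to the current directory:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnuplot -e &amp;quot;set terminal png; set output &#039;sine.png&#039;; plot sin(x)&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;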
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GenomePop2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GenomePop2 is a newer and specialized version of the older program GenomePop. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GenomePop2 is designed to manage SNPs under more flexible and useful settings that are controlled by the user.  &lt;br /&gt;
If you need models with more than 2 alleles, you should use the older GenomePop version of the program.  &lt;br /&gt;
&lt;br /&gt;
GenomePop2 allows the forward simulation of sequences of biallelic positions. As in the previous version, a number of evolutionary&lt;br /&gt;
and demographic settings are allowed. Several populations under any migration model can be implemented. Each population consists&lt;br /&gt;
of a number N of individuals.  Each individual is represented by one (haploid) or two (diploid) chromosomes with constant or variable&lt;br /&gt;
(hotspot) recombination between binary sites. The fitness model is multiplicative, with each derived allele having a multiplicative effect&lt;br /&gt;
of (1-s * h-E) on the global fitness value. By default E=0 and h=0.5 in diploids, but 1 in homozygotes or in haploids. Nucleotide&lt;br /&gt;
sites undergoing directional selection (positive or negative) in different populations can be defined. In addition, bottleneck and/or&lt;br /&gt;
population expansion scenarios can be set by the user for a desired number of generations. Several runs can be executed, and&lt;br /&gt;
a sample of user-defined size is obtained for each run and population.  For more detail on how to use GenomePop2, please visit the&lt;br /&gt;
web site here [http://webs.uvigo.es/acraaj/GenomePop2.htm]. More information about our installation can be found here [[GENOMEPOP2]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GROMACS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GROMACS (Groningen Machine for Chemical Simulations)&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GROMACS is a full-featured suite of free software, licensed under the GNU&lt;br /&gt;
General Public License to perform molecular dynamics simulations -- in other words, to simulate the behavior of molecular&lt;br /&gt;
systems with hundreds to millions of particles using Newton&#039;s equations of motion.  It is primarily used for research on&lt;br /&gt;
proteins, lipids, and polymers, but can be applied to a wide variety of chemical and biological research questions.&lt;br /&gt;
&lt;br /&gt;
Details and submission scripts for production runs can be found at:&lt;br /&gt;
http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/gromacs&lt;br /&gt;
Please note that preparing a molecular system for simulation with the GROMACS tools cannot be done on the login node. Instead, users must either use their own workstation or the interactive or development queues.&lt;br /&gt;
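For example, system preparation can be done in an interactive session on a compute node. This is a sketch only; the partition name and module name are assumptions, while the gmx command shown is a standard GROMACS preparation tool:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# request an interactive session (partition name assumed)&lt;br /&gt;
srun --partition=interactive --ntasks=1 --pty bash&lt;br /&gt;
module load gromacs&lt;br /&gt;
&lt;br /&gt;
# standard GROMACS preparation step: convert a PDB file to GROMACS coordinates and topology&lt;br /&gt;
gmx pdb2gmx -f protein.pdb -o processed.gro -water spce&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;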
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GPAW&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It uses real-space uniform grids and multigrid methods, atom-centered basis-functions or&lt;br /&gt;
plane-waves. GPAW calculations are controlled through scripts written in the programming language &lt;br /&gt;
Python. GPAW relies on the Atomic Simulation Environment (ASE), which is a Python package&lt;br /&gt;
that helps to describe atoms. The ASE package also handles molecular dynamics, analysis, &lt;br /&gt;
visualization, geometry optimization and more. More information about our installation can &lt;br /&gt;
be found here [[GPAW]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==H==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Hapsembler&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Hapsembler is a haplotype-specific genome assembly toolkit that is designed for genomes that are rich in SNPs and other types of polymorphism. Hapsembler can be used to assemble reads from a variety of platforms including Illumina and Roche/454.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;  Hapsembler is currently installed on the Appel system. To access the Hapsembler binaries, load the hapsembler module with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load hapsembler&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;HOOMD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Performs general purpose particle dynamics simulations, taking advantage of NVIDIA GPUs to attain a level of performance&lt;br /&gt;
equivalent to many processor cores on a fast cluster.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Unlike some other applications in the particle and molecular dynamics space, HOOMD developers have worked to implement &lt;br /&gt;
all of the code&#039;s computationally intensive kernels on the GPU, although currently only single-node, single-GPU or &lt;br /&gt;
OpenMP-GPU runs are possible. There is no MPI-GPU or distributed parallel GPU version available at this time.&lt;br /&gt;
&lt;br /&gt;
HOOMD&#039;s object-oriented design patterns make it both versatile and expandable. Various types of potentials, integration methods&lt;br /&gt;
and file formats are currently supported, and more are added with each release. The code is available and open source, so anyone&lt;br /&gt;
can write a plugin or change the source to add additional functionality.  Simulations are configured and run using simple python&lt;br /&gt;
scripts, allowing complete control over the force field choice, integrator, all parameters, how many time steps are run, etc.&lt;br /&gt;
The scripting system is designed to be as simple as possible to the non-programmer.&lt;br /&gt;
&lt;br /&gt;
The HOOMD development effort is led by the Glotzer group at the University of Michigan, but many groups from different universities&lt;br /&gt;
have contributed code that is now part of the HOOMD main package, see the credits page for the full list. The HOOMD website and&lt;br /&gt;
documentation are available here [http://codeblue.umich.edu/hoomd-blue/about.html]. More information about our installation can be&lt;br /&gt;
found here [[HOOMD]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;HOPSPACK&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
HOPSPACK stands for Hybrid Optimization Parallel Search Package, a framework designed to help users solve a wide range of derivative-free optimization problems.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;The latter can be noisy, non-convex, or non-smooth.  The basic optimization problem addressed is to minimize an objective function f(x) of n unknowns subject to the constraints:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
A_I x ≥ b_I&lt;br /&gt;
A_E x = b_E&lt;br /&gt;
c_I(x) ≥ 0&lt;br /&gt;
c_E(x) = 0&lt;br /&gt;
l ≤ x ≤ u&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The first two constraints specify linear inequalities and equalities with coefficient matrices A_I and A_E. The next two constraints describe nonlinear inequalities and equalities captured in the functions c_I(x) and c_E(x). The final constraints denote lower and upper bounds on the variables. HOPSPACK allows variables to be continuous or integer-valued and has provisions for multi-objective optimization problems. In general, the functions f(x), c_I(x), and c_E(x) can be noisy and nonsmooth, although most algorithms perform best on deterministic functions with continuous derivatives.&lt;br /&gt;
&lt;br /&gt;
Users may design and implement their own solver, either by writing their own code or by building on existing solvers already in the framework. Because all solvers (called citizens) are members of the same global class, they can share assigned resources.&lt;br /&gt;
The main features of the package are:&lt;br /&gt;
&lt;br /&gt;
:*Only function values are required for the optimization.&lt;br /&gt;
:*The user must provide a separate program that can evaluate the objective and nonlinear constraint functions at a given point.&lt;br /&gt;
:*A robust implementation of the Generating Set Search (GSS) solver is supplied, including the capability to handle linear constraints.&lt;br /&gt;
:*Multiple solvers can run simultaneously and are easily configured to share information.&lt;br /&gt;
:*Solvers may share a cache of computed function and constraint evaluations to eliminate duplicate work.&lt;br /&gt;
:*Solvers can initiate and control sub-problems.&lt;br /&gt;
More information about our installation can be found here [[HOPSACK]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;HONDO PLUS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Hondo Plus is a versatile electronic structure code that combines work from&lt;br /&gt;
the original Hondo application, developed by Harry King in the lab of Michel Dupuis&lt;br /&gt;
and John Rys, with that of numerous subsequent contributors. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It is currently distributed from the research lab of Dr. Donald Truhlar at the University &lt;br /&gt;
of Minnesota.  Part of the advantage of Hondo Plus is the availability of source&lt;br /&gt;
implementations of a wide variety of model chemistries developed over its lifetime&lt;br /&gt;
that researchers can adapt to their particular needs.  The license to use the code requires&lt;br /&gt;
a literature citation which is documented in the Hondo Plus 5.1 manual found&lt;br /&gt;
at:&lt;br /&gt;
&lt;br /&gt;
http://comp.chem.umn.edu/hondoplus/HONDOPLUS_Manual_v5.1.2007.2.17.pdf &lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[HONDO PLUS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;HUMAnN2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
HUMAnN is a pipeline for efficiently and accurately profiling the presence/absence and abundance of microbial pathways in a community from metagenomic or metatranscriptomic sequencing data (typically millions of short DNA/RNA reads). HUMAnN2 is the next generation of HUMAnN (HMP Unified Metabolic Analysis Network). Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/humann2&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==I==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;IMa2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The IMa2 application performs basic ‘Isolation with Migration’ calculations using Bayesian inference and Markov &lt;br /&gt;
chain Monte Carlo methods. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The only major conceptual addition to IMa2 that makes it different from the&lt;br /&gt;
original IMa  program is that it can handle data from multiple populations. This requires that the user &lt;br /&gt;
specify a phylogenetic tree. Importantly, the tree must be rooted, and the sequence in time of internal&lt;br /&gt;
nodes must be known and specified. More information on the IMa2 and IMa can be found in the user&lt;br /&gt;
manual here [http://lifesci.rutgers.edu/%7Eheylab/ProgramsandData/Programs/IMa2/Using_IMa2_8_24_2011.pdf].&lt;br /&gt;
Information about our installation can be found here [[IMA2]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;I-TASSER&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
I-TASSER is a platform for protein structure and function predictions. 3D models are built based on multiple-threading alignments by LOMETS and iterative template fragment assembly simulations; function insights are derived by matching the 3D models with the BioLiP protein function database. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/itasser&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==J==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;JULIA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Julia is a high-level, high-performance dynamic programming language for technical computing, with syntax that is familiar to users of other technical computing environments. Julia is installed on Penzias.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==L==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;LAMARC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
LAMARC is a program which estimates population-genetic parameters such as population size, population growth rate,&lt;br /&gt;
recombination rate, and migration rates.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It approximates a summation over all possible genealogies that could explain&lt;br /&gt;
the observed sample, which may be sequence, SNP, microsatellite, or electrophoretic data.  LAMARC and its sister program&lt;br /&gt;
MIGRATE are successor programs to the older programs Coalesce, Fluctuate, and Recombine, which are no longer being&lt;br /&gt;
supported.  These programs are memory-intensive, but can run effectively on workstations. They are supported on a variety&lt;br /&gt;
of operating systems.  For more detail on LAMARC please visit the website here [http://evolution.genetics.washington.edu/lamarc/index.html],&lt;br /&gt;
read this paper [http://evolution.genetics.washington.edu/lamarc/download/bioinformatics2006-lamarc2.0.pdf], and look&lt;br /&gt;
at the documentation here [http://evolution.genetics.washington.edu/lamarc/documentation/index.html]. More information&lt;br /&gt;
about our installation can be found here [[LAMARC]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;LAMMPS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
LAMMPS is a classical molecular dynamics code that models an ensemble of particles in a liquid, solid, or gaseous state.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It can model atomic, polymeric, biological, metallic, granular, and coarse-grained systems using a variety of force fields and boundary conditions.  &lt;br /&gt;
LAMMPS runs efficiently on single-processor desktop or laptop machines, but is also designed for parallel computers, including clusters with and without GPUs. &lt;br /&gt;
It will run on any parallel machine that compiles C++ and supports the MPI message-passing library. This includes distributed- or shared-memory parallel &lt;br /&gt;
machines and Beowulf-style clusters. LAMMPS can model systems with only a few particles up to millions or billions. LAMMPS is a freely-available open-source &lt;br /&gt;
code, distributed under the terms of the GNU General Public License, which means you can use or modify the code however you wish.  LAMMPS is designed to be easy to &lt;br /&gt;
modify or extend with new capabilities, such as new force fields, atom types, boundary conditions, or diagnostics. A complete description of LAMMPS can be found &lt;br /&gt;
in its on-line manual here [http://lammps.sandia.gov/doc/Manual.html] or from the full PDF manual here [http://lammps.sandia.gov/doc/Manual.pdf]. Information&lt;br /&gt;
about our installation can be found here [[LAMMPS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;LS-DYNA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From its early development in the 1970s, LS-DYNA has evolved into a general-purpose material&lt;br /&gt;
stress, collision, and crash analysis program with many built-in material and structural element&lt;br /&gt;
models. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In recent years, the code has also been adapted for both OpenMP and MPI parallel execution&lt;br /&gt;
on a variety of platforms.  The most recent version, LS-DYNA 7.1.2, is installed on &lt;br /&gt;
ANDY at the CUNY HPC Center under an academic license held by the City College of New York.&lt;br /&gt;
The use of this license to do work that is commercial in any way is prohibited.&lt;br /&gt;
&lt;br /&gt;
Details on LS-DYNA&#039;s use, input deck construction, and execution options can be found in the LS-DYNA&lt;br /&gt;
manual here [http://ftp.lstc.com/user/manuals/ls-dyna_971_manual_k_rev1.pdf]. All files related&lt;br /&gt;
to the HPC Center installation of version 971 (executables and example inputs) are located in:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
/share/apps/lsdyna/default/[bin,examples]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[LSDYNA]].&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==M==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MAGMA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
MAGMA is a library similar to LAPACK but for hybrid architectures. MAGMA provides implementations for CUDA, Intel Xeon Phi, and OpenCL. &lt;br /&gt;
On CUNY HPCC systems, MAGMA is installed in its CUDA variant only on Penzias.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MATHEMATICA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Mathematica is a fully integrated technical computing system that combines fast, high-precision numerical and symbolic computation with data visualization and programming capabilities. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Mathematica is currently installed on the CUNY HPC Center&#039;s ANDY cluster (andy.csi.cuny.edu) and the KARLE standalone server (karle.csi.cuny.edu). The basics of running Mathematica on CUNY HPC systems are presented here.  Additional information on how to use Mathematica can be found at http://www.wolfram.com/learningcenter/&lt;br /&gt;
&lt;br /&gt;
More information is available in this wiki, find it here [[MATHEMATICA]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MATLAB&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The MATLAB high-performance language for technical computing&lt;br /&gt;
integrates computation, visualization, and programming in an&lt;br /&gt;
easy-to-use environment where problems and solutions are expressed in&lt;br /&gt;
familiar mathematical notation.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Typical uses include:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Math and computation&lt;br /&gt;
&lt;br /&gt;
Algorithm development&lt;br /&gt;
&lt;br /&gt;
Data acquisition&lt;br /&gt;
&lt;br /&gt;
Modeling, simulation, and prototyping&lt;br /&gt;
&lt;br /&gt;
Data analysis, exploration, and visualization&lt;br /&gt;
&lt;br /&gt;
Scientific and engineering graphics&lt;br /&gt;
&lt;br /&gt;
Application development, including graphical user interface building&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[MATLAB]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MET (Model Evaluation Tools)&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MET was developed by the National Center for Atmospheric Research (NCAR) Developmental Testbed Center (DTC) through the generous support of the U.S. Air Force Weather Agency (AFWA) and the National Oceanic and Atmospheric Administration (NOAA).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;MET is designed to be a highly-configurable, state-of-the-art suite of verification tools. It was developed using output from the Weather Research and Forecasting (WRF) modeling system but may be applied to the output of other modeling systems as well.&lt;br /&gt;
&lt;br /&gt;
MET provides a variety of verification techniques, including:&lt;br /&gt;
&lt;br /&gt;
* Standard verification scores comparing gridded model data to point-based observations&lt;br /&gt;
* Standard verification scores comparing gridded model data to gridded observations&lt;br /&gt;
* Spatial verification methods comparing gridded model data to gridded observations using neighborhood, object-based, and intensity-scale decomposition approaches&lt;br /&gt;
* Probabilistic verification methods comparing gridded model data to point-based or gridded observations&lt;br /&gt;
&lt;br /&gt;
More information about use and set-up can be found here [[MET]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Migrate&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Migrate estimates population parameters, effective population sizes and migration rates&lt;br /&gt;
of n populations, using genetic data.  It uses a coalescent theory approach taking into&lt;br /&gt;
account the history of mutations and the uncertainty of the genealogy.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;The estimates of the parameter values are achieved by either a maximum likelihood (ML) approach or Bayesian&lt;br /&gt;
inference (BI).  Migrate&#039;s output is presented in a text file and in a PDF file. The PDF file&lt;br /&gt;
will eventually contain all possible analyses, including histograms of posterior distributions.&lt;br /&gt;
More information about our installation can be found here [[MIGRATE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MPFR&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The MPFR library is a C library for multiple-precision floating-point computations with correct rounding. MPFR has continuously been supported by &lt;br /&gt;
INRIA, and the current main authors come from the Caramel and AriC project-teams at Loria (Nancy, France) and LIP (Lyon, France), respectively; see &lt;br /&gt;
more on the credit page.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
MPFR is based on the GMP multiple-precision library. The main goal of MPFR is to provide a library for multiple-precision &lt;br /&gt;
floating-point computation which is both efficient and has a well-defined semantics. It copies the good ideas from the ANSI/IEEE-754 standard for &lt;br /&gt;
double-precision floating-point arithmetic (53-bit significand). The library is installed on PENZIAS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MRBAYES&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MrBayes is a program for the Bayesian estimation of phylogeny.  Bayesian inference of&lt;br /&gt;
phylogeny is based upon a quantity called the posterior probability distribution of trees,&lt;br /&gt;
which is the probability of a tree conditioned on certain observations.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The conditioning is&lt;br /&gt;
accomplished using Bayes&#039;s theorem. The posterior probability distribution of trees is&lt;br /&gt;
impossible to calculate analytically; instead, MrBayes uses a simulation technique called&lt;br /&gt;
Markov chain Monte Carlo (or MCMC) to approximate the posterior probabilities of trees.&lt;br /&gt;
More information about our installation can be found here [[MRBAYES]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;msABC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
msABC is a program for simulating various neutral evolutionary demographic scenarios&lt;br /&gt;
based on the software ms (Hudson 2002). msABC extends ms, calculating a multitude of&lt;br /&gt;
summary statistics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Therefore, msABC is suitable for performing the sampling step of an&lt;br /&gt;
Approximate Bayesian Computation analysis (ABC), under various neutral demographic&lt;br /&gt;
models. The main advantages of msABC are (i) use of various prior distributions, such as&lt;br /&gt;
uniform, Gaussian, log-normal, and gamma, (ii) implementation of a multitude of summary statistics&lt;br /&gt;
for one or more populations, (iii) efficient implementation, which allows the analysis of&lt;br /&gt;
hundreds of loci and chromosomes even on a single computer, and (iv) extended flexibility, such&lt;br /&gt;
as simulation of loci of variable size and simulation of missing data.&lt;br /&gt;
More information about our installation can be found here [[msABC]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MSMS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MSMS is a tool to generate sequence samples under both neutral models and single locus selection models.&lt;br /&gt;
MSMS permits  the full range of demographic models provided by its relative MS (Hudson, 2002).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
In particular, it allows for multiple demes with arbitrary migration patterns, population growth and decay in each deme, and&lt;br /&gt;
for population splits and mergers. Selection (including dominance) can depend on the deme and also change&lt;br /&gt;
with time.&lt;br /&gt;
More information about our installation can be found here [[MSMS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==N==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;NAMD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NAMD is a parallel molecular dynamics code designed for high-performance simulation&lt;br /&gt;
of large biomolecular systems. [http://www.ks.uiuc.edu/Research/namd].&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The main server for molecular dynamics calculations is PENZIAS, which supports both GPU and non-GPU versions of NAMD.&lt;br /&gt;
However, the MPI-only (no GPU support) parallel versions of NAMD are also installed on SALK and ANDY. &lt;br /&gt;
More information about our installation can be found here [[NAMD]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Network Simulator-2 (NS2)&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NS2 is a discrete event simulator targeted at networking research. NS2 provides&lt;br /&gt;
substantial support for simulation of TCP, routing, and multicast protocols over&lt;br /&gt;
wired and wireless (local and satellite) networks.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It is installed on BOB at the CUNY HPC Center. For more detailed information, look [http://www.isi.edu/nsnam/ns/ here].&lt;br /&gt;
More information about our installation can be found here [[NS2]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;NWChem&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NWChem is an ab initio computational chemistry software package which also includes molecular dynamics (MM, MD) and coupled quantum mechanical/molecular dynamics functionality (QM-MD).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
NWChem has been developed by the Molecular Sciences Software group at the Department of Energy&#039;s EMSL. The software is available on PENZIAS and ANDY.&lt;br /&gt;
More information about our installation can be found here [[NWChem]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==O== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Octopus&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Octopus is a pseudopotential real-space package aimed at the simulation of the electron-ion dynamics of one-, two-, and three-dimensional finite systems subject to time-dependent electromagnetic fields.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The program is based on time-dependent density-functional theory (TDDFT) in the Kohn-Sham scheme. All quantities are expanded in a regular mesh in real space, and the simulations are performed in real time. The program has been successfully used to calculate linear and non-linear absorption spectra, harmonic spectra, laser induced fragmentation, etc. of a variety of systems.&lt;br /&gt;
More information about our installation can be found here [[OCTOPUS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;OpenMM&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenMM is both a library and a stand-alone application which provides tools for modern molecular modeling simulation. As a library it can be hooked into any code, allowing that code to do molecular modeling with minimal extra coding.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Moreover, OpenMM places a strong emphasis on hardware acceleration via GPUs, providing not just a consistent API but also much greater performance than just about any other code available. OpenMM was developed as part of the Physics-Based Simulation project led by Prof. Pande.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;OpenFOAM&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenFOAM is first and foremost a library that users may incorporate into their own code(s). OpenFOAM is installed on PENZIAS.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
More information about our installation can be found here [[OpenFOAM]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;OpenSees&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenSees, the Open System for Earthquake Engineering Simulation, is an object-oriented, open source software framework.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It allows users to create both serial and parallel finite element computer applications for simulating the response of structural and geotechnical systems subjected to earthquakes and other hazards. OpenSees is primarily written in C++ and uses several Fortran and C numerical libraries for linear equation solving, and material and element routines. The software is installed on PENZIAS.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;ORCA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ORCA is an electronic structure program capable of carrying out geometry optimizations and of predicting a large number of spectroscopic parameters at different levels of theory.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Besides Hartree-Fock theory, density functional theory (DFT), and semiempirical methods, high-level ab initio quantum chemical methods, based on configuration interaction and coupled cluster methods, are included in ORCA to an increasing degree.&lt;br /&gt;
More information about our installation can be found here [[ORCA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==P== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;ParGAP&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ParGAP is built on top of the GAP system. The latter is a system for computational discrete algebra, with particular emphasis on computational group theory. GAP provides a programming language, a library of thousands of functions implementing algebraic algorithms written in the GAP language, as well as large data libraries of algebraic objects.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The ParGAP (Parallel GAP) package itself provides a way of writing parallel programs using the GAP language. Former names of the package were ParGAP/MPI and GAP/MPI; the word MPI refers to Message Passing Interface, a well-known standard for parallelism. ParGAP is based on the MPI standard, and this distribution includes a subset implementation of MPI, to provide a portable layer with a high level interface to BSD sockets.&lt;br /&gt;
More information about our installation can be found here [[ParGAP]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;POPABC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PopABC is a computer package for estimating historical demographic parameters of closely related species/populations (e.g. population size, migration rate, mutation rate, recombination rate, splitting events) within an Isolation with Migration model.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The software performs coalescent simulation in the framework of approximate Bayesian computation (ABC, Beaumont et al, 2002). PopABC can also be used to perform Bayesian model choice to discriminate between different demographic scenarios. The program can be used either for research or for education and teaching purposes. Further details and a manual can be found at the POPABC website here [http://code.google.com/p/popabc]&lt;br /&gt;
More information about our installation can be found here [[POPABC]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;PHOENICS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PHOENICS is an integrated Computational Fluid Dynamics (CFD) package for the preparation, simulation, and visualization of&lt;br /&gt;
processes involving fluid flow, heat or mass transfer, chemical reaction, and/or combustion in engineering equipment, building&lt;br /&gt;
design, and the environment.  More detail is available at the CHAM website, here http://www.cham.co.uk. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Although we expect most users to pre- and post-process their jobs on office-local clients, the CUNY HPC Center has installed&lt;br /&gt;
the Unix version of the &#039;&#039;entire&#039;&#039; PHOENICS package on ANDY.   PHOENICS is installed in /share/apps/phoenics/default where all&lt;br /&gt;
the standard PHOENICS directories are located (d_allpro, d_earth, d_enviro, d_photo, d_priv1, d_satell, etc.).  Of particular interest&lt;br /&gt;
on ANDY is the MPI parallel version of the &#039;earth&#039; executable &#039;parexe&#039; which makes full use of the parallel processing power of the &lt;br /&gt;
ANDY cluster for larger individual jobs.  While the parallel scaling properties of PHOENICS jobs will vary depending on the job size,&lt;br /&gt;
processor type, and the cluster interconnect, larger work loads will generally scale and run efficiently on from 8 to 32 processors,&lt;br /&gt;
while smaller problems will scale efficiently only up to about 4 processors.  More detail on parallel PHOENICS is available at&lt;br /&gt;
http://www.cham.co.uk/products/parallel.php.   Aside from the tightly coupled MPI parallelism of &#039;parexe&#039;, users can run multiple&lt;br /&gt;
instances of the non-parallel modules on ANDY (including the serial &#039;earexe&#039; module) when a parametric approach can be used&lt;br /&gt;
to solve their problems.&lt;br /&gt;
More information about our installation can be found here [[PHOENICS]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;PHRAP-PHRED&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PHRAP and PHRED are part of the DNA sequence analysis tool set that also includes the programs&lt;br /&gt;
CROSSMATCH and SWAT.  These tools are described in detail here [http://www.phrap.org/phredphrapconsed.html],&lt;br /&gt;
but a brief description of both, extracted from their manuals, follows.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
PHRED and PHRAP (along with CONSED) can be used for both small sequence assemblies and larger shotgun analyses. This makes the&lt;br /&gt;
tools a perhaps under-utilized set for smaller non-genomic groups.  Some variables may need to be adjusted,&lt;br /&gt;
particularly in CONSED, but researchers who have multiple sequences from a small locus can use the &lt;br /&gt;
suite, starting from their chromatogram files.  &lt;br /&gt;
More information about our installation can be found here [[PHRAP-PHRED]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;PyRAD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Reduced-representation genomic sequence data (e.g., RADseq, GBS, ddRAD) are commonly used to study population-level research questions and consequently most software packages for assembling or analyzing such data are designed for sequences with little variation across samples.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Phylogenetic analyses typically include species with deeper divergence times (more variable loci across samples) and thus a different approach to clustering and identifying orthologs will perform better. pyRAD is intended for use with any type of restriction-site associated DNA. It currently supports RAD, ddRAD, PE-ddRAD, GBS, PE-GBS, EzRAD, PE-EzRAD, 2B-RAD, nextRAD, and can be extended to other types.&lt;br /&gt;
More information about our installation can be found here [[PyRAD]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Python&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Python is a programming language that lets you work more quickly and integrate your systems more effectively. You can learn to use Python and see almost immediate gains in productivity and lower maintenance costs. [http://www.python.org/]&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
There are two supported versions installed on the Andy system: &lt;br /&gt;
&lt;br /&gt;
* Python 3.1.3, located under /share/apps/python/3.1.3/bin&lt;br /&gt;
* Python 2.7.3, located under /share/apps/epd/7.3-2/bin&lt;br /&gt;
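&lt;br /&gt;
To confirm which interpreter you are running, each installation can be invoked by its full path. The commands below are an illustrative sketch; the exact binary names may differ slightly between builds:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/share/apps/python/3.1.3/bin/python3 --version&lt;br /&gt;
/share/apps/epd/7.3-2/bin/python --version&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;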
&lt;br /&gt;
More information about our installation can be found here [[PYTHON]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Installing Python packages&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Users may install Python packages/modules in their own space.  Many packages available in Python repositories can be installed easily with the pip package manager, which is available in all Anaconda and Miniconda builds.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Users must remember that running pip without first loading a Python module will install packages against the system Python available on the login node only. The Python interpreter available on all nodes (after loading the module) is installed under /share/usr/compilers/python. It is therefore very important to follow the procedure outlined below when installing packages in user space. The following example demonstrates how users can install the package &amp;quot;guppy&amp;quot; in their own space:&lt;br /&gt;
&lt;br /&gt;
For Python 2.7.13 in Anaconda build:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load python/2.7.13_anaconda&lt;br /&gt;
pip install guppy --user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For Python 3.6.0 in Anaconda build:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load python/3.6.0_anaconda&lt;br /&gt;
pip install guppy --user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For Python 2.7.13 in Miniconda 2:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load python/miniconda2&lt;br /&gt;
pip install guppy --user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For Python 3.6.0 in Miniconda 3:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load python/miniconda3&lt;br /&gt;
pip install guppy --user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check that the package was installed properly, type:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
pip list | grep guppy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
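&lt;br /&gt;
To see where the package was placed (it should be under your home directory rather than the system tree), you can also query pip directly:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
pip show guppy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;Location&amp;quot; field in the output should point to a directory under your home space, confirming a user-space install.&lt;br /&gt;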
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==Q== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;QIIME&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
QIIME (pronounced &amp;quot;chime&amp;quot;) stands for Quantitative Insights Into Microbial Ecology. QIIME is a pipeline application that uses numerous third-party applications.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
QIIME takes users from their raw sequencing output through initial analyses such as OTU picking, taxonomic assignment, and construction of phylogenetic trees from representative sequences of OTUs, and through downstream statistical analysis, visualization, and production of publication-quality graphics.&lt;br /&gt;
More information about our installation can be found here [[QIIME]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==R== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;R&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
R is a free software environment for statistical computing and graphics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:15px;&amp;quot;&amp;gt;General Notes&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The R language has become a de facto standard among statisticians for the development of statistical software and is widely used for data analysis. R is available on the following HPCC servers: Karle, Penzias, Appel, and Andy. Karle is the only machine where R can be used without submitting jobs to the SLURM manager; on all other systems users must submit their R jobs via the SLURM batch scheduler.&lt;br /&gt;
More information about our installation can be found here [[R]]&lt;br /&gt;
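&lt;br /&gt;
As a minimal sketch of submitting R work through SLURM (the module name, script name, and SBATCH options below are placeholders; adjust them to your system):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=r_job&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
module load r&lt;br /&gt;
Rscript myscript.R&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Save this as a file and submit it with &amp;quot;sbatch&amp;quot;.&lt;br /&gt;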
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;RAXML&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Randomized Axelerated Maximum Likelihood (RAxML) is a program for sequential and parallel&lt;br /&gt;
maximum likelihood based inference of large phylogenetic trees.  It is a descendent of fastDNAml&lt;br /&gt;
which in turn was derived from Joe Felsenstein’s DNAml, which is part of the PHYLIP package.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
RAxML is installed at the CUNY HPC Center on ANDY, where multiple versions are available in both serial and MPI-parallel builds. The MPI-parallel version, which is also installed on PENZIAS, should be run on four or more cores. &lt;br /&gt;
More information about our installation can be found here [[RAXML]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==S== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;SAGE&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Sage can be used to study elementary and advanced, pure and applied mathematics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
This covers a huge range of mathematics, including basic algebra, calculus, elementary to very&lt;br /&gt;
advanced number theory, cryptography, numerical computation, commutative algebra, group&lt;br /&gt;
theory, combinatorics, graph theory, exact linear algebra and much more.&lt;br /&gt;
More information about our installation can be found here [[SAGE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;SAMTOOLS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
SAMTOOLS provide various utilities for manipulating alignments in the SAM format, including sorting,&lt;br /&gt;
merging, indexing and generating alignments in a per-position format.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
SAM (Sequence Alignment/Map) format is a generic format for storing large nucleotide sequence alignments.  SAM is a compact format&lt;br /&gt;
that:&lt;br /&gt;
&lt;br /&gt;
:* is flexible enough to store all the alignment information generated by various alignment programs;&lt;br /&gt;
:* is simple enough to be easily generated by alignment programs or converted from existing formats;&lt;br /&gt;
:* allows most operations on the alignment to work without loading the whole alignment into memory;&lt;br /&gt;
:* allows the file to be indexed by genomic position to efficiently retrieve all reads aligning to a locus.&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[SAMTOOLS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;SAS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
SAS (pronounced &amp;quot;sass&amp;quot;, originally Statistical Analysis System) is an integrated system of software products provided by SAS Institute Inc.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It enables the programmer to perform:&lt;br /&gt;
:* data entry, retrieval, management, and mining&lt;br /&gt;
:* report writing and graphics&lt;br /&gt;
:* statistical analysis&lt;br /&gt;
:* business planning, forecasting, and decision support&lt;br /&gt;
:* operations research and project management&lt;br /&gt;
:* quality improvement&lt;br /&gt;
:* applications development&lt;br /&gt;
:* data warehousing (extract, transform, load)&lt;br /&gt;
:* platform-independent and remote computing&lt;br /&gt;
More information about our installation can be found here [[SAS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Stata/MP&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Stata is a complete, integrated statistical package that provides tools for data analysis, data management, and graphics. Stata/MP takes advantage of multiprocessor computers; the CUNY HPC Center is licensed to use Stata on up to 8 cores. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Currently Stata/MP is available for users on Karle (karle.csi.cuny.edu). &lt;br /&gt;
More information about our installation can be found here [[STATA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Structurama&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Structurama is a program for inferring population structure from genetic data. The program assumes that the sampled loci&lt;br /&gt;
are in linkage equilibrium and that the allele frequencies for each population are drawn from a Dirichlet probability distribution. Two different models for population structure are implemented.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
First, Structurama offers the method of Pritchard et al. (2000), in which the number of populations is considered fixed. The program also allows the number of populations to be a random variable following a Dirichlet process prior (Pella and Masuda, 2006; Huelsenbeck and Andolfatto, 2007).&lt;br /&gt;
More information about our installation can be found here [[STRUCTURAMA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Structure&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The program Structure is a free software package for using multi-locus genotype data to investigate&lt;br /&gt;
population structure.  Its uses include inferring the presence of distinct populations, assigning individuals&lt;br /&gt;
to populations, studying hybrid zones, identifying migrants and admixed individuals, and estimating&lt;br /&gt;
population allele frequencies in situations where many individuals are migrants or admixed.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;It can be applied to most of the commonly used genetic markers, including SNPs, microsatellites, RFLPs, and AFLPs. More detailed information about Structure can be found at the web site here [http://pritch.bsd.uchicago.edu/structure.html]. Structure is installed on ANDY at the CUNY HPC Center.  Structure is a serial program. &lt;br /&gt;
More information about our installation can be found here [[STRUCTURE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==T== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Thrust Library (CUDA)&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Thrust is a C++ template library for CUDA based on the Standard Template Library (STL). Thrust allows you&lt;br /&gt;
to implement high performance parallel applications with minimal programming effort through a high-level&lt;br /&gt;
interface that is fully interoperable with CUDA C.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Thrust has been integrated into the default&lt;br /&gt;
CUDA distribution. The HPC Center is currently running CUDA as the default on PENZIAS, which includes the&lt;br /&gt;
Thrust library. More information about our installation can be found here [[THRUST]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;TOPHAT&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
TOPHAT is a fast splice junction mapper for RNA-Seq reads. It aligns RNA-Seq reads to mammalian-sized&lt;br /&gt;
genomes using the ultra high-throughput short read aligner Bowtie, and then analyzes the mapping results&lt;br /&gt;
to identify splice junctions between exons.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
TOPHAT is part of a sequence alignment and analysis tool chain developed at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics and Computational Biology.&lt;br /&gt;
More information about our installation can be found here [[TOPHAT]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Trinity&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Trinity, developed at the Broad Institute and the Hebrew University of Jerusalem, represents a novel method for the efficient and robust de novo reconstruction of transcriptomes from RNA-seq data.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Trinity combines three independent software modules (Inchworm, Chrysalis, and Butterfly), applied sequentially to process large volumes of RNA-seq reads. Trinity partitions the sequence data into many individual de Bruijn graphs, each representing the transcriptional complexity at a given gene or locus, and then processes each graph independently to extract full-length splicing isoforms and to tease apart transcripts derived from paralogous genes.&lt;br /&gt;
More information about our installation can be found here [[TRINITY]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==U== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;USEARCH&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
USEARCH is a unique sequence analysis tool with thousands of users world-wide.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
USEARCH offers search and clustering algorithms that are often orders of magnitude faster than BLAST. &lt;br /&gt;
More information about our installation can be found here [[USEARCH]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==V== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;VELVET&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Velvet is a set of algorithms for &#039;&#039;de novo&#039;&#039; short read assembly using de Bruijn graphs. It was developed at the European Bioinformatics Institute, Cambridge, UK. More information about our installation can be found here [[VELVET]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;VSEARCH&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
VSEARCH is an open-source alternative to USEARCH.  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
VSEARCH stands for vectorized search, as the tool takes advantage of parallelism in the form of SIMD vectorization as well as multiple threads to perform accurate alignments at high speed. VSEARCH uses an optimal global aligner (full dynamic programming Needleman-Wunsch), in contrast to USEARCH which by default uses a heuristic seed and extend aligner. This usually results in more accurate alignments and overall improved sensitivity (recall) with VSEARCH, especially for alignments with gaps. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Additional details on VSEARCH can be found at: [https://github.com/torognes/vsearch this link]&lt;br /&gt;
&lt;br /&gt;
VSEARCH is installed on the Penzias HPC cluster. To start using VSEARCH, load the corresponding module first:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load vsearch  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
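Once the module is loaded, a typical global-search invocation looks like the following (file names are placeholders; see the VSEARCH documentation for the full option list):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vsearch --usearch_global reads.fasta --db reference.fasta --id 0.97 --alnout results.aln&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;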
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;VMD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
VMD is a molecular visualization program for displaying, animating, and analyzing large biomolecular systems using 3-D graphics and built-in scripting.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It was developed by The Theoretical and Computational Biophysics Group at the University of Illinois. It is documented on the [http://www.ks.uiuc.edu/Research/vmd/ TCB&#039;s homepage].&lt;br /&gt;
&lt;br /&gt;
VMD is installed on Karle. To use it from the command-line interface, log in to Karle as usual and start VMD by typing &amp;quot;vmd&amp;quot; followed by return, or alternatively use the full path: &lt;br /&gt;
&amp;quot;/share/apps/vmd/default/bin/vmd&amp;quot;&lt;br /&gt;
&lt;br /&gt;
In order to use VMD in GUI mode, log in to Karle with the -X option (see [http://wiki.csi.cuny.edu/cunyhpc/index.php/Main_Page#X11_Forwarding_or_Tunneling this article] &lt;br /&gt;
for details) and start VMD as described above.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==W== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;WRF&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The Weather Research and Forecasting (WRF) model is a specific computer program with dual use for both weather&lt;br /&gt;
forecasting and weather research.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It was created through a partnership that includes the National Oceanic and Atmospheric&lt;br /&gt;
Administration (NOAA), the National Center for Atmospheric Research (NCAR), and more than 150 other organizations&lt;br /&gt;
and universities in the United States and abroad. WRF is the latest numerical model and application to be adopted by NOAA&#039;s&lt;br /&gt;
National Weather Service as well as the U.S. military and private meteorological services. It is also being adopted by&lt;br /&gt;
government and private meteorological services worldwide. More information about our installation can be found here [[WRF]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==X== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Xmgrace&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Grace is a WYSIWYG 2D plotting tool for the X Window System and M*tif. Xmgrace is developed at the Plasma Laboratory, Weizmann Institute of Science. More information about its capabilities can be found at the web page http://plasma-gate.weizmann.ac.il/Grace/&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Grace is installed on Karle. To use it from the command-line interface, log in to Karle as usual and start Grace by typing &amp;quot;xmgrace&amp;quot; followed by return, or alternatively use the full path: &amp;quot;/share/apps/xmgrace/default/grace/bin/xmgrace&amp;quot;&lt;br /&gt;
In order to use Grace in GUI mode, log in to Karle with the -X option (see [http://wiki.csi.cuny.edu/cunyhpc/index.php/Main_Page#X11_Forwarding_or_Tunneling this article] &lt;br /&gt;
for details) and start Xmgrace as described above.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Applications&amp;diff=169</id>
		<title>Applications</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Applications&amp;diff=169"/>
		<updated>2022-11-18T15:49:59Z</updated>

		<summary type="html">&lt;p&gt;James: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div class=&amp;quot;noautonum&amp;quot;&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Application&lt;br /&gt;
!Installed Version&lt;br /&gt;
!Current Version&lt;br /&gt;
!Dependencies&lt;br /&gt;
|-&lt;br /&gt;
|ABINIT&lt;br /&gt;
|8.2.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ASE&lt;br /&gt;
|3.18.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|G-PhoCS&lt;br /&gt;
|1.3&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|GMP&lt;br /&gt;
|6.1.2-GCCcore-6.4.0/ 7.3.0/ 8.2.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|GPAW&lt;br /&gt;
|19.8.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|Gerris&lt;br /&gt;
|20131206&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|HDF5&lt;br /&gt;
|1.8.17/1.10.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|LAME&lt;br /&gt;
|3.100&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|XML-Parser&lt;br /&gt;
|2.44_01&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|abyss&lt;br /&gt;
|1.3.7 / 1.5.7&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|adcirc&lt;br /&gt;
|50_99_07&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|adda&lt;br /&gt;
|1.2.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|anvio&lt;br /&gt;
|2.0.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|armadillo&lt;br /&gt;
|9.2.7 / 9.200.7&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|arpack&lt;br /&gt;
|3.1.5&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|augustus&lt;br /&gt;
|3.2.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|autodock&lt;br /&gt;
|4.2.6&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|autodock_vina&lt;br /&gt;
|1.1.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bamm&lt;br /&gt;
|2.3.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bamova&lt;br /&gt;
|1.02&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bamtools&lt;br /&gt;
|2.30 / 2.5.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|basilisk&lt;br /&gt;
|v2019&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bayescan&lt;br /&gt;
|2.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|beast&lt;br /&gt;
|1.8.4 / 2.4.6&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|beast2&lt;br /&gt;
|2.6.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bedops&lt;br /&gt;
|2.4.40&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bedtools&lt;br /&gt;
|2.30.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bigwig&lt;br /&gt;
|011921&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|biobwa&lt;br /&gt;
|0.7.17&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bioperl&lt;br /&gt;
|1.6.923&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|blast&lt;br /&gt;
|2.3.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bowtie2&lt;br /&gt;
|2.2.6&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bpp&lt;br /&gt;
|4.4.0 / 4.4.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cblas&lt;br /&gt;
|1.20.11&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cmaq&lt;br /&gt;
|5.3.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cmdstan&lt;br /&gt;
|2.21.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cp2k&lt;br /&gt;
|2.5.1 / 3.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cryoSPARC&lt;br /&gt;
|2.11&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|diamond&lt;br /&gt;
|0.7.9&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|doxygen&lt;br /&gt;
|2014&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|dualSP&lt;br /&gt;
|4.2 / 4.3_beta&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|eautils&lt;br /&gt;
|02072017&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|eclipse_ptp&lt;br /&gt;
|8.1.2 / 9.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|eigen&lt;br /&gt;
|3.2.8&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|emacs&lt;br /&gt;
|25.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|exabayes&lt;br /&gt;
|1.5&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|examl&lt;br /&gt;
|3.0.17&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|fdppdiv&lt;br /&gt;
|20140728&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|fds_smv&lt;br /&gt;
|6.1.11&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ferret&lt;br /&gt;
|6.96&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|freetype&lt;br /&gt;
|2.5.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|fsplit&lt;br /&gt;
|092214&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ga&lt;br /&gt;
|5.3&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|gamess-us&lt;br /&gt;
|4.14.14&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|gamma&lt;br /&gt;
|20111212&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|gap&lt;br /&gt;
|4.6.5 / 4.7.5&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|gatk&lt;br /&gt;
|3.6&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|gdc&lt;br /&gt;
|1.0.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|gerris&lt;br /&gt;
|09.30.16_EPYC / 12.06.13_BM / 12.06.13_EPYC / 12.06.13_PNZ&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ghc&lt;br /&gt;
|7.8.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|glog&lt;br /&gt;
|0.3.3&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|gnuplot&lt;br /&gt;
|4.6.3&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|gpaw&lt;br /&gt;
|1.2.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|gperf&lt;br /&gt;
|3.0.4&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|gpu-blast&lt;br /&gt;
|2.2.28&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|gromacs&lt;br /&gt;
|5.1.4 / 5.1.5 / 2020.1_mpi&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|help2man&lt;br /&gt;
|1.47.3&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|hoomd&lt;br /&gt;
|1.3.3&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|hpl&lt;br /&gt;
|2.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|humann&lt;br /&gt;
|0.7.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ima2p&lt;br /&gt;
|071717&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ipyrad&lt;br /&gt;
|0.7.13&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|itasser&lt;br /&gt;
|4.2 / 5.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|kokkos&lt;br /&gt;
|2.9.00&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|lammps&lt;br /&gt;
|03.03.20&lt;br /&gt;
|06.02.22&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|lmdb&lt;br /&gt;
|20160810&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ls-dyna&lt;br /&gt;
|6.0.0 / 7.1.2 / 8.0.0 / 8.1.0 / 9.1.0 / 10.0.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|maker&lt;br /&gt;
|2.31.8&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mathematica&lt;br /&gt;
|10.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|matlab&lt;br /&gt;
|R2022a&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|meme&lt;br /&gt;
|4.11.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mercurial&lt;br /&gt;
|2.8&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|metaphlan2&lt;br /&gt;
|2.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|migrate&lt;br /&gt;
|4.2.14&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|minpath&lt;br /&gt;
|1.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mpest&lt;br /&gt;
|1.4&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mpfr&lt;br /&gt;
|3.1.2 / 3.1.4&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mrbayes&lt;br /&gt;
|3.2.7a&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|msbayes&lt;br /&gt;
|20140305&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mumps&lt;br /&gt;
|5.1.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|muscle&lt;br /&gt;
|3.8.31&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mutect&lt;br /&gt;
|1.1.7&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|namd&lt;br /&gt;
|2.9 / 2.12 / 2.13 / 2.14&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|nciplot&lt;br /&gt;
|4.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ncview&lt;br /&gt;
|2.1.7&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|nek5000&lt;br /&gt;
|19.0_epyc&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|netCDF&lt;br /&gt;
|4.2 / 4.3.2 / 4.4.1 / 4.4.3 / 4.5.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|nmrpipe&lt;br /&gt;
|20170909&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|nrlmol&lt;br /&gt;
|10.0.4&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|nvtop&lt;br /&gt;
|1.0.0_git&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|nwchem&lt;br /&gt;
|6.6 / 6.8.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|octopus&lt;br /&gt;
|4.1.1 / 4.1.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|openblas&lt;br /&gt;
|2.8&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|opencv&lt;br /&gt;
|3.1.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|openeye&lt;br /&gt;
|latest&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|openfoam&lt;br /&gt;
|2.3.0 / 5.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|openmm&lt;br /&gt;
|5.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|opensees&lt;br /&gt;
|5921 / 5955&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|openshmem&lt;br /&gt;
|1.0h&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|openwatcom&lt;br /&gt;
|1.9&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|orca&lt;br /&gt;
|5.0.3&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|pargap&lt;br /&gt;
|1.3.5&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|perl&lt;br /&gt;
|5.24.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|pfft&lt;br /&gt;
|040415&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|phase&lt;br /&gt;
|2.1.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|plasma&lt;br /&gt;
|2.5 / 2.8&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|plumed&lt;br /&gt;
|2.4&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|pyYAML&lt;br /&gt;
|3.10&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|pyrad&lt;br /&gt;
|3.0.66&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|qespresso&lt;br /&gt;
|6.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|qiime&lt;br /&gt;
|1.9.1_FULL_P2.7 / 1.9.1_FULL_P3.6 / 1.9.1 / 2.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|qiime2&lt;br /&gt;
|2020.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|quantumEspresso&lt;br /&gt;
|6.6_cpu&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|qvina&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|raxml&lt;br /&gt;
|0.1.0_ng / 8.2.4&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|relion&lt;br /&gt;
|1.3_gcc / 3.0 / 3.6_apollo_gnu / 3.6_apollo_intel_i7&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|repast&lt;br /&gt;
|2.1.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|rtfbsDB&lt;br /&gt;
|012021&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|sage&lt;br /&gt;
|6.2 / 7.5.1 / 8.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|samtools&lt;br /&gt;
|1.3.1 / 1.14&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|scons&lt;br /&gt;
|2.3.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|singularity&lt;br /&gt;
|2.4.6 / 3.1.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|snap&lt;br /&gt;
|11.29.13 / 1.18_beta&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|sparskit&lt;br /&gt;
|030515&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|splatche&lt;br /&gt;
|2.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|sprint&lt;br /&gt;
|1.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|sqlite&lt;br /&gt;
|3.12.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|sra&lt;br /&gt;
|2.5.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|stacks&lt;br /&gt;
|1.23&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|stan&lt;br /&gt;
|2.22.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|stata&lt;br /&gt;
|12&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|stem&lt;br /&gt;
|2.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|structurama&lt;br /&gt;
|2.2.14&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|swig&lt;br /&gt;
|4.0.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|tau&lt;br /&gt;
|2.30&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|tcl&lt;br /&gt;
|8.6.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|tensorflow&lt;br /&gt;
|r0.11&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|texinfo&lt;br /&gt;
|6.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|tinker&lt;br /&gt;
|6.2 / 8.8.3&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|tmux&lt;br /&gt;
|2.5&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|trinity&lt;br /&gt;
|2.1.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|usearch&lt;br /&gt;
|5.2.236 / 6.0.307 / 6.1.544 / 7.0.1090 / 8.0.1517 / 8.0.1623 / 9.0.2124 &lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|util-linux&lt;br /&gt;
|2.29&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|valgrind&lt;br /&gt;
|3.10.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|vmd&lt;br /&gt;
|1.9.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|vsearch&lt;br /&gt;
|1.0.5 / 1.10.2 / 1.11.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|wrf&lt;br /&gt;
|3.5.0 / 4.3.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|xmgrace&lt;br /&gt;
|5.1.23 / 5.1.25&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|xplor&lt;br /&gt;
|2.38 / 2.39&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
__NOTOC__&amp;lt;/div&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;This is an index of available applications, organized both by academic discipline and alphabetically.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
For information about using modules to run your applications, go to [[Using Modules To Run Your Applications]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==Computational Physics and Computational Chemistry== &lt;br /&gt;
Applications in this section use classical mechanics, quantum mechanics, and thermodynamics in simulation studies of the fundamental properties of atoms, molecules, and chemical reactions.  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;AMBER (Assisted Model Building with Energy Refinement)&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Amber is the collective name for a suite of programs for classical bio-molecular simulations. &lt;br /&gt;
The name &amp;quot;Amber&amp;quot; also denotes the family of potentials (force fields) used with Amber &lt;br /&gt;
software. Here we discuss only the simulation packages, not the force fields or the free tools&lt;br /&gt;
available via the AmberTools package. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/amber&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;AUTODOCK&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
AutoDock is a suite of automated docking tools.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt; It is designed to predict how small molecules, such as substrates or drug candidates, bind to a receptor of known 3D structure.  AutoDock actually consists of two main programs: &#039;&#039;autodock&#039;&#039; itself performs the docking of the ligand to a set of grids describing the target protein; &#039;&#039;autogrid&#039;&#039; pre-calculates these grids. More information about the software may be found at the autodock web-page [http://autodock.scripps.edu/]. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/autodock&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;CP2K&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CP2K is a program to perform atomistic and molecular simulations of solid state, liquid, molecular, and biological&lt;br /&gt;
systems.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It provides a general framework for different methods, such as density functional theory (DFT) using&lt;br /&gt;
a mixed Gaussian and plane waves approach (GPW), and classical pair and many-body potentials. CP2K provides&lt;br /&gt;
state-of-the-art methods for efficient and accurate atomistic simulations. More information about our installation &lt;br /&gt;
can be found here [[CP2K]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;DL_POLY&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
DL_POLY is a general purpose molecular dynamics simulation package developed at Daresbury Laboratory by W. Smith, T.R. Forester and I.T. Todorov. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Both serial and parallel versions are available. The original package was developed by the Molecular Simulation Group (now part of the Computational Chemistry Group, MSG) at Daresbury Laboratory under the auspices of the Engineering and Physical Sciences Research Council (EPSRC) for the EPSRC&#039;s Collaborative Computational Project for the Computer Simulation of Condensed Phases ( CCP5). Later developments were also supported by the Natural Environment Research Council through the eMinerals project. The package is the property of the Central Laboratory of the Research Councils, UK. More information about our installation and use can be found here [[DL_POLY]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GAMESS-US&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GAMESS is a program for ab initio molecular quantum chemistry.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Briefly, GAMESS can compute SCF wavefunctions including RHF, ROHF, UHF, GVB, and MCSCF. Correlation corrections to these SCF wavefunctions include Configuration Interaction, second-order perturbation theory, and Coupled-Cluster approaches, as well as the Density Functional Theory approximation. Excited states can be computed by CI, EOM, or TD-DFT procedures. Nuclear gradients are available for automatic geometry optimization, transition state searches, or reaction path following. Computation of the energy hessian permits prediction of vibrational frequencies, with IR or Raman intensities. Solvent effects may be modeled by the discrete Effective Fragment potentials or continuum models such as the Polarizable Continuum Model. Numerous relativistic computations are available, including infinite-order two-component scalar corrections, with various spin-orbit coupling options. The Fragment Molecular Orbital method permits many of these sophisticated treatments to be used on very large systems by dividing the computation into small fragments. Nuclear wavefunctions can also be computed, in VSCF, or with explicit treatment of nuclear orbitals by the NEO code. More information, including code, can be found here [[GAMESS-US]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Gaussian09&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Gaussian09 is third-party, commercially licensed software from Gaussian, Inc. It is a set of programs for calculating electronic structure.  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Gaussian09 is available for general use only on ANDY. The Gaussian User Guide can be found at [http://www.gaussian.com]. More information about our installation can be found here [[GAUSSIAN09]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GPAW&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It uses real-space uniform grids and multigrid methods, atom-centered basis-functions or&lt;br /&gt;
plane-waves. GPAW calculations are controlled through scripts written in the programming language &lt;br /&gt;
Python. GPAW relies on the Atomic Simulation Environment (ASE), which is a Python package&lt;br /&gt;
that helps to describe atoms. The ASE package also handles molecular dynamics, analysis, &lt;br /&gt;
visualization, geometry optimization and more. More information about our installation can &lt;br /&gt;
be found here [[GPAW]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GROMACS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GROMACS (Groningen Machine for Chemical Simulations)&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GROMACS is a full-featured suite of free software, licensed under the GNU&lt;br /&gt;
General Public License to perform molecular dynamics simulations -- in other words, to simulate the behavior of molecular&lt;br /&gt;
systems with hundreds to millions of particles using Newton&#039;s equations of motion.  It is primarily used for research on&lt;br /&gt;
proteins, lipids, and polymers, but can be applied to a wide variety of chemical and biological research questions.&lt;br /&gt;
&lt;br /&gt;
Details and submission scripts for production runs can be found at:&lt;br /&gt;
http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/gromacs&lt;br /&gt;
Please note that preparing a molecular system for simulation with the GROMACS tools cannot be done on the login node. Instead, users must either use their own workstations or use the interactive or development queues.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;HONDO PLUS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Hondo Plus is a versatile electronic structure code that combines work from&lt;br /&gt;
the original Hondo application developed by Harry King in the lab of Michel Dupuis&lt;br /&gt;
and John Rys, and that of numerous subsequent contributors. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It is currently distributed from the research lab of Dr. Donald Truhlar at the University &lt;br /&gt;
of Minnesota.  Part of the advantage of Hondo Plus is the availability of source&lt;br /&gt;
implementations of a wide variety of model chemistries developed over its lifetime&lt;br /&gt;
that researchers can adapt to their particular needs.  The license to use the code requires&lt;br /&gt;
a literature citation which is documented in the Hondo Plus 5.1 manual found&lt;br /&gt;
at:&lt;br /&gt;
&lt;br /&gt;
http://comp.chem.umn.edu/hondoplus/HONDOPLUS_Manual_v5.1.2007.2.17.pdf &lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[HONDO PLUS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;HOOMD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
HOOMD performs general-purpose particle dynamics simulations, taking advantage of NVIDIA GPUs to attain a level of performance&lt;br /&gt;
equivalent to many processor cores on a fast cluster.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Unlike some other applications in the particle and molecular dynamics space, HOOMD developers have worked to implement &lt;br /&gt;
all of the code&#039;s computationally intensive kernels on the GPU, although currently only single node, single-GPU or &lt;br /&gt;
OpenMP-GPU runs are possible. There is no MPI-GPU or distributed parallel GPU version available at this time.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;LAMMPS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
LAMMPS is a classical molecular dynamics code that models an ensemble of particles in a liquid, solid, or gaseous state.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It can model atomic, polymeric, biological, metallic, granular, and coarse-grained systems using a variety of force fields and boundary conditions.  &lt;br /&gt;
LAMMPS runs efficiently on single-processor desktop or laptop machines, but is also designed for parallel computers, including clusters with and without GPUs. &lt;br /&gt;
It will run on any parallel machine that compiles C++ and supports the MPI message-passing library. This includes distributed- or shared-memory parallel &lt;br /&gt;
machines and Beowulf-style clusters. LAMMPS can model systems with only a few particles up to millions or billions. LAMMPS is a freely-available open-source &lt;br /&gt;
code, distributed under the terms of the GNU Public License, which means you can use or modify the code however you wish.  LAMMPS is designed to be easy to &lt;br /&gt;
modify or extend with new capabilities, such as new force fields, atom types, boundary conditions, or diagnostics. A complete description of LAMMPS can be found &lt;br /&gt;
in its on-line manual here [http://lammps.sandia.gov/doc/Manual.html] or from the full PDF manual here [http://lammps.sandia.gov/doc/Manual.pdf]. Information&lt;br /&gt;
about our installation can be found here [[LAMMPS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;NAMD&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NAMD is a parallel molecular dynamics code designed for high-performance simulation&lt;br /&gt;
of large biomolecular systems. [http://www.ks.uiuc.edu/Research/namd].&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The main server for molecular dynamics calculations is PENZIAS, which supports both GPU and non-GPU versions of NAMD.&lt;br /&gt;
However, MPI-only (no GPU support) parallel versions of NAMD are also installed on SALK and ANDY. &lt;br /&gt;
More information about our installation can be found here [[NAMD]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;NWChem&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NWChem is an ab initio computational chemistry software package which also includes molecular dynamics (MM, MD) and coupled, quantum mechanical and molecular dynamics functionality (QM-MD).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
NWChem has been developed by the Molecular Sciences Software group at the Department of Energy&#039;s EMSL. The software is available on PENZIAS and ANDY.&lt;br /&gt;
More information about our installation can be found here [[NWChem]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Octopus&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Octopus is a pseudopotential real-space package aimed at the simulation of the electron-ion dynamics of one-, two-, and three-dimensional finite systems subject to time-dependent electromagnetic fields.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The program is based on time-dependent density-functional theory (TDDFT) in the Kohn-Sham scheme. All quantities are expanded in a regular mesh in real space, and the simulations are performed in real time. The program has been successfully used to calculate linear and non-linear absorption spectra, harmonic spectra, laser induced fragmentation, etc. of a variety of systems.&lt;br /&gt;
More information about our installation can be found here [[OCTOPUS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;OpenMM&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenMM is both a library and a stand-alone application which provides tools for modern molecular modeling simulation. As a library it can be hooked into any code, allowing that code to do molecular modeling with minimal extra coding.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Moreover, OpenMM has a strong emphasis on hardware acceleration via GPU, thus providing not just a consistent API but also much greater performance than nearly any other available code. OpenMM was developed as part of the Physics-Based Simulation project led by Prof. Pande.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;ORCA&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ORCA is an electronic structure program capable of carrying out geometry optimizations and predicting a large number of spectroscopic parameters at different levels of theory.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Besides Hartree-Fock theory, density functional theory (DFT), and semiempirical methods, high-level ab initio quantum chemical methods based on configuration interaction and coupled cluster approaches are included in ORCA to an increasing degree.&lt;br /&gt;
More information about our installation can be found here [[ORCA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;VMD&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
VMD is a molecular visualization program for displaying, animating, and analyzing large biomolecular systems using 3-D graphics and built-in scripting.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It was developed by The Theoretical and Computational Biophysics Group at the University of Illinois. It is documented on the [http://www.ks.uiuc.edu/Research/vmd/ TCB&#039;s homepage].&lt;br /&gt;
&lt;br /&gt;
VMD is installed on Karle. To use it from the command-line interface, log in to Karle as usual and start VMD by typing &amp;quot;vmd&amp;quot; followed by return, or alternatively use the full path: &lt;br /&gt;
&amp;quot;/share/apps/vmd/default/bin/vmd&amp;quot;&lt;br /&gt;
&lt;br /&gt;
In order to use VMD in GUI mode, log in to Karle with the -X option (see [http://wiki.csi.cuny.edu/cunyhpc/index.php/Main_Page#X11_Forwarding_or_Tunneling this article] &lt;br /&gt;
for details) and start VMD as described above.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Computational Biology== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;ANVIO&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Anvio is an analysis and visualization platform for ‘omics data. Anvio allows various types of workflows to be &lt;br /&gt;
established. [[ANVIO]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BAMOVA&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Bamova is a package used for genetic analysis of a wide range of organisms on the basis of &lt;br /&gt;
next-generation sequence data. The software implements Bayesian Analysis of Molecular Variance and &lt;br /&gt;
different likelihood models for three different types of molecular data &lt;br /&gt;
(including two models for high-throughput sequence data). For more detail on BAMOVA, please visit the BAMOVA web site [http://www.uwyo.edu/buerkle/software/bamova] and the manual &lt;br /&gt;
here [http://www.uwyo.edu/buerkle/software/bamova/bamova_manual_1.0.pdf]. Further information can also be found here [[BAMOVA]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BAYESCAN&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BAYESCAN is a population genomics software package. It identifies outlier loci and is applicable &lt;br /&gt;
to both dominant and codominant data. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;BayeScan aims at identifying candidate loci under natural selection from &lt;br /&gt;
genetic data, using differences in allele frequencies between populations.  BayeScan is &lt;br /&gt;
based on the multinomial-Dirichlet model.  One of the scenarios covered consists of an&lt;br /&gt;
island model in which subpopulation allele frequencies are correlated through a common &lt;br /&gt;
migrant gene pool from which they differ in varying degrees.  The difference in allele frequency &lt;br /&gt;
between this common gene pool and each subpopulation is measured by a subpopulation-specific&lt;br /&gt;
FST coefficient. Therefore, this formulation can consider realistic ecological scenarios &lt;br /&gt;
where the effective size and the immigration rate may differ among subpopulations.&lt;br /&gt;
&lt;br /&gt;
More detailed information on Bayescan can be found at the web site here [http://cmpg.unibe.ch/software/bayescan/index.html]&lt;br /&gt;
and in the manual here [http://cmpg.unibe.ch/software/bayescan/files/BayeScan2.1_manual.pdf]. More information about our &lt;br /&gt;
installation can be found here [[BAYESCAN]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
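As a toy illustration of the FST idea described above (not BayeScan&#039;s full Bayesian multinomial-Dirichlet model), Wright&#039;s FST for a single biallelic locus can be computed directly from subpopulation allele frequencies; the function name and the equal-subpopulation-size assumption here are ours:&lt;br /&gt;

```python
import statistics

def wright_fst(subpop_freqs):
    """Wright's F_ST for one biallelic locus, from per-subpopulation
    allele frequencies (toy sketch; equal subpopulation sizes assumed)."""
    p_bar = statistics.mean(subpop_freqs)          # allele frequency in the common pool
    h_t = 2 * p_bar * (1 - p_bar)                  # total expected heterozygosity H_T
    h_s = statistics.mean(2 * p * (1 - p) for p in subpop_freqs)  # mean within-subpop H_S
    return (h_t - h_s) / h_t if h_t > 0 else 0.0   # F_ST = (H_T - H_S) / H_T

# Strongly differentiated subpopulations give a high F_ST (about 0.64 here):
print(wright_fst([0.9, 0.1]))
```

A locus whose FST is an outlier relative to the rest of the genome is the kind of candidate BayeScan flags, though its subpopulation-specific coefficients are estimated in a Bayesian framework rather than computed directly as above.&lt;br /&gt;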
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BEST&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BEST is an application aimed at estimating gene trees and the species tree from multilocus sequences.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The program uses information from multiple gene trees and performs a Bayesian analysis to estimate the &lt;br /&gt;
topology of the species tree, divergence times and population sizes.  &lt;br /&gt;
&lt;br /&gt;
It provides a new approach for estimating the mutation-rate-based&lt;br /&gt;
phylogenetic relationships among species. Its method accounts for deep coalescence,&lt;br /&gt;
but not for other complicating issues such as horizontal transfer or gene duplication. The&lt;br /&gt;
program works in conjunction with the popular Bayesian phylogenetics package, MrBayes&lt;br /&gt;
(Ronquist and Huelsenbeck, Bioinformatics, 2003).  BEST&#039;s parameters are defined using&lt;br /&gt;
the &#039;prset&#039; command from MrBayes.  Details on BEST&#039;s capabilities and options are available&lt;br /&gt;
at the BEST web site here [http://www.stat.osu.edu/~dkp/BEST/introduction]. More information&lt;br /&gt;
about our installation is available here [[BEST]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BEAST&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BEAST is a powerful and flexible evolutionary analysis package for molecular sequence variation. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The package implements a family of Markov chain Monte Carlo (MCMC) algorithms for Bayesian phylogenetic inference, divergence time dating, coalescent analysis, phylogeography and related molecular evolutionary analyses. It is a cross-platform Java program for Bayesian MCMC analysis of molecular sequences. It is entirely orientated towards rooted, time-measured phylogenies inferred using strict or relaxed molecular clock models. It can be used as a method of reconstructing phylogenies, but is also a framework for testing evolutionary hypotheses without conditioning on a single tree topology.  BEAST uses MCMC to average over tree space, so that each tree is weighted proportional to its posterior probability. The distribution includes a simple to use user-interface program called &#039;BEAUti&#039; for setting up standard analyses and a suite of programs for analysing the results. For more detail on BEAST (and BEAUTi) please visit the BEAST web site [http://beast.bio.ed.ac.uk/Main_Page]. More information about our installation can be found here [http://wiki.csi.cuny.edu/cunyhpc/index.php/Template:BEAST BEAST].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BOWTIE2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BOWTIE2 is an ultrafast and memory-efficient tool for aligning sequencing reads to long reference sequences. It is particularly good at aligning reads of about 50 up to 100s or 1,000s of characters, and particularly good at aligning to relatively long (e.g. mammalian) genomes.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
BOWTIE2 indexes the genome with an FM Index to keep its memory&lt;br /&gt;
footprint small: for the human genome, its memory footprint is typically around 3.2 GB. BOWTIE2 supports gapped,&lt;br /&gt;
local, and paired-end alignment modes. BOWTIE2 is part of a sequence alignment and analysis tool chain developed&lt;br /&gt;
at Johns Hopkins, University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics&lt;br /&gt;
and Computational Biology.  The other tools in this collection, CUFFLINKS, SAMTOOLS, and TOPHAT, are also installed at&lt;br /&gt;
the CUNY HPC Center. Additional information can be found at the BOWTIE2 home page here [http://bowtie-bio.sourceforge.net/bowtie2/index.shtml].&lt;br /&gt;
Information about our installation can be found here [[BOWTIE2]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BPP2&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BPP2 uses a Bayesian modeling approach to generate the posterior probabilities of species assignments taking into account uncertainties due to unknown gene trees and the ancestral coalescent process. For tractability, it relies on a user-specified guide tree to avoid integrating over all possible species delimitations.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Additional information can be found at the download site here [http://abacus.gene.ucl.ac.uk/software.html]. More information about our installation can be found here [[BPP2]].&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BROWNIE&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
BROWNIE is a program for analyzing rates of continuous character evolution and looking for substantial rate differences in different parts of a tree using likelihood&lt;br /&gt;
ratio tests and Akaike Information Criterion (AIC) statistics. It now also implements many other methods for examining trait evolution and methods for doing species&lt;br /&gt;
delimitation. More information about our installation can be found here [[BROWNIE]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;CUFFLINKS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CUFFLINKS assembles transcripts, estimates their abundances, and tests for differential expression and regulation in&lt;br /&gt;
RNA-Seq samples.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It accepts aligned RNA-Seq reads and assembles the alignments into a parsimonious set of transcripts.&lt;br /&gt;
CUFFLINKS then estimates the relative abundances of these transcripts based on how many reads support each one, taking&lt;br /&gt;
into account biases in library preparation protocols.  CUFFLINKS is part of a sequence alignment and analysis tool chain developed&lt;br /&gt;
at Johns Hopkins, University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics&lt;br /&gt;
and Computational Biology.  The other tools in this collection, BOWTIE, SAMTOOLS, and TOPHAT, are also installed at&lt;br /&gt;
the CUNY HPC Center.  Additional information can be found at the CUFFLINKS home page here [http://abacus.gene.ucl.ac.uk/software.html].&lt;br /&gt;
More information about our installation can be found here [[CUFFLINKS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GARLI&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GARLI is a program that performs phylogenetic inference using the maximum-likelihood criterion.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Several sequence types are supported, including nucleotide, amino acid and codon. Version 2.0 adds support for&lt;br /&gt;
partitioned models and morphology-like data types. It is usable on all operating systems, and is written and&lt;br /&gt;
maintained by Derrick Zwickl at the University of Texas at Austin.  Additional information can be found&lt;br /&gt;
on the GARLI Wiki here [https://www.nescent.org/wg_garli/Main_Page]. More information about our installation &lt;br /&gt;
can be found here [[GARLI]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MPFR&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The MPFR library is a C library for multiple-precision floating-point computations with correct rounding. MPFR has been continuously supported by &lt;br /&gt;
INRIA, and its current main authors come from the Caramel and AriC project-teams at Loria (Nancy, France) and LIP (Lyon, France), respectively; see &lt;br /&gt;
more on the credit page.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
MPFR is based on the GMP multiple-precision library. The main goal of MPFR is to provide a library for multiple-precision &lt;br /&gt;
floating-point computation which is both efficient and has a well-defined semantics. It copies the good ideas from the ANSI/IEEE-754 standard for &lt;br /&gt;
double-precision floating-point arithmetic (53-bit significand). The library is installed on PENZIAS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MRBAYES&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MrBayes is a program for the Bayesian estimation of phylogeny.  Bayesian inference of&lt;br /&gt;
phylogeny is based upon a quantity called the posterior probability distribution of trees,&lt;br /&gt;
which is the probability of a tree conditioned on certain observations.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The conditioning is&lt;br /&gt;
accomplished using Bayes&#039;s theorem. The posterior probability distribution of trees is&lt;br /&gt;
impossible to calculate analytically; instead, MrBayes uses a simulation technique called&lt;br /&gt;
Markov chain Monte Carlo (or MCMC) to approximate the posterior probabilities of trees.&lt;br /&gt;
More information about our installation can be found here [[MRBAYES]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;msABC&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
msABC is a program for simulating various neutral evolutionary demographic scenarios&lt;br /&gt;
based on the software ms (Hudson 2002). msABC extends ms, calculating a multitude of&lt;br /&gt;
summary statistics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Therefore, msABC is suitable for performing the sampling step of an&lt;br /&gt;
Approximate Bayesian Computation analysis (ABC), under various neutral demographic&lt;br /&gt;
models. The main advantages of msABC are (i) use of various prior distributions, such as&lt;br /&gt;
uniform, Gaussian, log-normal, gamma, (ii) implementation of a multitude of summary statistics&lt;br /&gt;
for one or more populations, (iii) efficient implementation, which allows the analysis of&lt;br /&gt;
hundreds of loci and chromosomes even on a single computer, (iv) extended flexibility, such&lt;br /&gt;
as simulation of loci of variable size and simulation of missing data.&lt;br /&gt;
More information about our installation can be found here [[msABC]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MSMS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MSMS is a tool to generate sequence samples under both neutral models and single locus selection models.&lt;br /&gt;
MSMS permits the full range of demographic models provided by its relative MS (Hudson, 2002).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
In particular, it allows for multiple demes with arbitrary migration patterns, population growth and decay in each deme, and&lt;br /&gt;
for population splits and mergers. Selection (including dominance) can depend on the deme and also change&lt;br /&gt;
with time.&lt;br /&gt;
More information about our installation can be found here [[MSMS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;POPABC&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PopABC is a computer package to estimate historical demographic parameters of closely related species/populations (e.g. population size, migration rate, mutation rate, recombination rate, splitting events) within an Isolation-with-Migration model.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The software performs coalescent simulation in the framework of approximate Bayesian computation (ABC, Beaumont et al., 2002). PopABC can also be used to perform Bayesian model choice to discriminate between different demographic scenarios. The program can be used either for research or for education and teaching purposes. Further details and a manual can be found at the POPABC website here [http://code.google.com/p/popabc].&lt;br /&gt;
More information about our installation can be found here [[POPABC]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;PHOENICS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PHOENICS is an integrated Computational Fluid Dynamics (CFD) package for the preparation, simulation, and visualization of&lt;br /&gt;
processes involving fluid flow, heat or mass transfer, chemical reaction, and/or combustion in engineering equipment, building&lt;br /&gt;
design, and the environment.  More detail is available at the CHAM website, here http://www.cham.co.uk. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Although we expect most users to pre- and post-process their jobs on office-local clients, the CUNY HPC Center has installed&lt;br /&gt;
the Unix version of the &#039;&#039;entire&#039;&#039; PHOENICS package on ANDY.   PHOENICS is installed in /share/apps/phoenics/default where all&lt;br /&gt;
the standard PHOENICS directories are located (d_allpro, d_earth, d_enviro, d_photo, d_priv1, d_satell, etc.).  Of particular interest&lt;br /&gt;
on ANDY is the MPI parallel version of the &#039;earth&#039; executable &#039;parexe&#039; which makes full use of the parallel processing power of the &lt;br /&gt;
ANDY cluster for larger individual jobs.  While the parallel scaling properties of PHOENICS jobs will vary depending on the job size,&lt;br /&gt;
processor type, and the cluster interconnect, larger workloads will generally scale and run efficiently on 8 to 32 processors,&lt;br /&gt;
while smaller problems will scale efficiently only up to about 4 processors.  More detail on parallel PHOENICS is available at&lt;br /&gt;
http://www.cham.co.uk/products/parallel.php.   Aside from the tightly coupled MPI parallelism of &#039;parexe&#039;, users can run multiple&lt;br /&gt;
instances of the non-parallel modules on ANDY (including the serial &#039;earexe&#039; module) when a parametric approach can be used&lt;br /&gt;
to solve their problems.&lt;br /&gt;
More information about our installation can be found here [[PHOENICS]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;PHRAP-PHRED&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PHRAP and PHRED are part of the DNA sequence analysis tool set that also includes the programs&lt;br /&gt;
CROSSMATCH and SWAT.  These tools are described in detail here [http://www.phrap.org/phredphrapconsed.html],&lt;br /&gt;
but a brief description of both, extracted from their manuals, follows.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
PHRED and PHRAP (along with CONSED) can be used for both small sequence assemblies and larger shotgun analyses. This makes the&lt;br /&gt;
tools a perhaps under-utilized set for smaller non-genomic groups.  Some variables may need to be adjusted,&lt;br /&gt;
particularly in CONSED, but researchers who have multiple sequences from a small locus can use the &lt;br /&gt;
suite, starting from their chromatogram files.  &lt;br /&gt;
More information about our installation can be found here [[PHRAP-PHRED]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;PyRAD&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Reduced-representation genomic sequence data (e.g., RADseq, GBS, ddRAD) are commonly used to study population-level research questions and consequently most software packages for assembling or analyzing such data are designed for sequences with little variation across samples.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Phylogenetic analyses typically include species with deeper divergence times (more variable loci across samples) and thus a different approach to clustering and identifying orthologs will perform better. pyRAD is intended for use with any type of restriction-site associated DNA. It currently supports RAD, ddRAD, PE-ddRAD, GBS, PE-GBS, EzRAD, PE-EzRAD, 2B-RAD, nextRAD, and can be extended to other types.&lt;br /&gt;
More information about our installation can be found here [[PyRAD]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;RAXML&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Randomized Axelerated Maximum Likelihood (RAxML) is a program for sequential and parallel&lt;br /&gt;
maximum likelihood based inference of large phylogenetic trees.  It is a descendant of fastDNAml,&lt;br /&gt;
which in turn was derived from Joe Felsenstein’s DNAml, part of the PHYLIP package.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
RAxML is installed at the CUNY HPC Center on ANDY, where multiple versions are available in both serial and MPI-parallel builds.  The MPI-parallel version should be run on four or more cores and is also installed on PENZIAS. &lt;br /&gt;
More information about our installation can be found here [[RAXML]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Structurama&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Structurama is a program for inferring population structure from genetic data. The program assumes that the sampled loci&lt;br /&gt;
are in linkage equilibrium and that the allele frequencies for each population are drawn from a Dirichlet probability distribution. Two different models for population structure are implemented.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
First, Structurama offers the method of Pritchard et al. (2000) in which the number of populations is considered fixed. The program also allows the number of populations to be a random variable following a Dirichlet process prior (Pella and Masuda, 2006; Huelsenbeck and Andolfatto, 2007).&lt;br /&gt;
More information about our installation can be found here [[STRUCTURAMA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Structure&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The program Structure is a free software package for using multi-locus genotype data to investigate&lt;br /&gt;
population structure.  Its uses include inferring the presence of distinct populations, assigning individuals&lt;br /&gt;
to populations, studying hybrid zones, identifying migrants and admixed individuals, and estimating&lt;br /&gt;
population allele frequencies in situations where many individuals are migrants or admixed.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;It can be applied to most of the commonly-used genetic markers, including SNPs, microsatellites, RFLPs and AFLPs. More detailed information about Structure can be found at the web site here [http://pritch.bsd.uchicago.edu/structure.html]. Structure is installed on ANDY at the CUNY HPC Center.  Structure is a serial program. &lt;br /&gt;
More information about our installation can be found here [[STRUCTURE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;TOPHAT&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
TOPHAT is a fast splice junction mapper for RNA-Seq reads. It aligns RNA-Seq reads to mammalian-sized&lt;br /&gt;
genomes using the ultra high-throughput short read aligner Bowtie, and then analyzes the mapping results&lt;br /&gt;
to identify splice junctions between exons.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
TOPHAT is part of a sequence alignment and analysis tool chain developed at Johns Hopkins, University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics and Computational Biology.&lt;br /&gt;
More information about our installation can be found here [[TOPHAT]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Trinity&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Trinity, developed at the Broad Institute and the Hebrew University of Jerusalem, represents a novel method for the efficient and robust de novo reconstruction of transcriptomes from RNA-seq data.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Trinity combines three independent software modules: Inchworm, Chrysalis, and Butterfly, applied sequentially to process large volumes of RNA-seq reads. Trinity partitions the sequence data into many individual de Bruijn graphs, each representing the transcriptional complexity at a given gene or locus, and then processes each graph independently to extract full-length splicing isoforms and to tease apart transcripts derived from paralogous genes.&lt;br /&gt;
More information about our installation can be found here [[TRINITY]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;VELVET&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Velvet is a set of algorithms for &#039;&#039;de novo&#039;&#039; short read assembly using de Bruijn graphs. It was developed at the &lt;br /&gt;
European Bioinformatics Institute, Cambridge, UK.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
More information about our installation can be found here [[VELVET]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Computational Genomics, Proteomics, Microbiomics, Genetics ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;AUGUSTUS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
AUGUSTUS is a program that predicts genes in eukaryotic genomic sequences. It is gene-finding software based on Hidden Markov Models (HMMs), &lt;br /&gt;
described in papers by Stanke and Waack (2003), Stanke et al. (2006), Stanke et al. (2006b), and Stanke et al. (2008).  The local version of the program is installed on &lt;br /&gt;
Penzias. More information can be found here: [[AUGUSTUS]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;CONSED&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CONSED is a DNA sequence analysis finishing tool that provides sequence viewing, editing, alignment, and&lt;br /&gt;
assembly capabilities from an X Windows graphical user interface (GUI).  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It makes extensive use of other non-graphical&lt;br /&gt;
and underlying sequence analysis tools including PHRED, PHRAP, and CROSSMATCH that may also be used separately&lt;br /&gt;
and are described elsewhere in this document.  It also includes a viewer called BAMVIEW.  The CONSED tool chain is&lt;br /&gt;
developed and maintained at the University of Washington and is described&lt;br /&gt;
more completely here [http://bozeman.mbt.washington.edu/consed/consed.html]&lt;br /&gt;
CONSED is provided at the CUNY HPC Center under an academic license that allows use, but not the copying or outbound&lt;br /&gt;
transfer of any of the executables or files distributed under this academic license.  The license is not &lt;br /&gt;
transferable in any way and users wishing to run the application at their own site must acquire a license directly&lt;br /&gt;
from the authors.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center supports CONSED version 23.0 for interactive use on KARLE.  CONSED 23.0 and the tool&lt;br /&gt;
chain described above are also installed on ANDY to allow for the batch use of the underlying support tools mentioned above&lt;br /&gt;
and described in detail below.  In general, running GUI-based applications on ANDY&#039;s login node is discouraged.  There&lt;br /&gt;
should be little need to do this, as KARLE is on the periphery of the CUNY HPC network, making login there direct, and&lt;br /&gt;
KARLE shares its HOME directory file system with ANDY, making files created on either system immediately available on&lt;br /&gt;
the other.&lt;br /&gt;
&lt;br /&gt;
Rather than rewrite portions of the CONSED manual here, users are directed to the manual&#039;s &amp;quot;Quick Tour&amp;quot; section&lt;br /&gt;
here [http://bozeman.mbt.washington.edu/consed/distributions/README.23.0.txt] and asked to walk through some&lt;br /&gt;
of the exercises after logging into KARLE.  If problems or questions come up, please post them to &amp;quot;hpchelp@csi.cuny.edu&amp;quot;.&lt;br /&gt;
The CONSED 23.0 distribution is installed on KARLE in the following directory:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/share/apps/consed/default&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
All the files in the distribution can be found there.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;ExaML&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ExaML (Exascale Maximum Likelihood) is a code for phylogenetic inference using MPI. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The code is installed only on Penzias and implements the popular RAxML search algorithm for maximum likelihood based inference of phylogenetic trees. &lt;br /&gt;
&lt;br /&gt;
It uses a radically new MPI parallelization approach that yields improved parallel efficiency, in particular on partitioned multi-gene or whole-genome datasets.&lt;br /&gt;
&lt;br /&gt;
When using ExaML please cite the following paper:&lt;br /&gt;
&lt;br /&gt;
Alexey M. Kozlov, Andre J. Aberer, Alexandros Stamatakis: &amp;quot;ExaML Version 3: A Tool for Phylogenomic Analyses on Supercomputers.&amp;quot; Bioinformatics (2015) 31 (15): 2577-2579.&lt;br /&gt;
&lt;br /&gt;
It is up to 4 times faster than RAxML-Light [1].&lt;br /&gt;
&lt;br /&gt;
Like RAxML-Light, ExaML also implements checkpointing, SSE3 and AVX vectorization, and memory-saving techniques.&lt;br /&gt;
&lt;br /&gt;
[1] A. Stamatakis, A.J. Aberer, C. Goll, S.A. Smith, S.A. Berger, F. Izquierdo-Carrasco: &amp;quot;RAxML-Light: A Tool for computing TeraByte Phylogenies&amp;quot;, Bioinformatics 2012; doi: 10.1093/bioinformatics/bts309.&lt;br /&gt;
&lt;br /&gt;
The run script for a parallel job is analogous to the one used for running RAxML on PENZIAS and ANDY.&lt;br /&gt;
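&lt;br /&gt;
A minimal sketch of such a run script, modeled on the SLURM examples elsewhere on this page; the executable name &#039;examl&#039;, the core count, and the input-file names are assumptions that should be checked against the actual installation on PENZIAS:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name=&amp;lt;name_of_job&amp;gt;&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=4&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to the working directory in SLURM&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
# Run the MPI-parallel ExaML binary on 4 cores (file names are placeholders)&lt;br /&gt;
mpirun -np 4 examl -s &amp;lt;binary_alignment_file&amp;gt; -t &amp;lt;starting_tree&amp;gt; -m GAMMA -n &amp;lt;run_name&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;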
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;ExaBayes&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ExaBayes is a software package for Bayesian tree inference. It is particularly suitable for large-scale analyses on computer clusters. It is installed on the PENZIAS server at the CUNY HPC Center. &lt;br /&gt;
The installed package is the MPI-parallel version. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Availability:&#039;&#039;&#039; PENZIAS&lt;br /&gt;
&#039;&#039;&#039;Module file:&#039;&#039;&#039; exabayes&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Citation&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
Fredrik Ronquist, Maxim Teslenko, Paul van der Mark, Daniel L Ayres, Aaron Darling, Sebastian Höhna, Bret Larget, Liang Liu, Marc a Suchard, and John P Huelsenbeck. MrBayes 3.2: efficient Bayesian phylogenetic inference and model choice across a large model space. Systematic biology, 61(3):539--42, May 2012.&lt;br /&gt;
&lt;br /&gt;
Alexei J Drummond, Marc a Suchard, Dong Xie, and Andrew Rambaut. Bayesian phylogenetics with BEAUti and the BEAST 1.7. Molecular biology and evolution, 29(8):1969--73, August 2012. &lt;br /&gt;
&lt;br /&gt;
Clemens Lakner, Paul van der Mark, John P Huelsenbeck, Bret Larget, and Fredrik Ronquist. Efficiency of Markov chain Monte Carlo tree proposals in Bayesian phylogenetics. Systematic biology, 57(1):86--103, February 2008. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Use:&#039;&#039;&#039; An example SLURM script to run ExaBayes on PENZIAS is given below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name=&amp;lt;name_of_job&amp;gt;&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=2&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to the working directory in SLURM&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
mpirun -np 2 exabayes &amp;lt;input_file&amp;gt; &amp;gt; output_file&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
More information about the application, along with sample workflows, is available on the ExaBayes web site:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://sco.h-its.org/exelixis/web/software/exabayes/manual/index.html#sec-11&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GENOMEPOP2&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GenomePop2 is a newer and specialized version of the older program GenomePop. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GenomePop2 is designed to manage SNPs under more flexible and useful settings that are controlled by the user.  &lt;br /&gt;
If you need models with more than 2 alleles, you should use the older GenomePop version of the program.  &lt;br /&gt;
&lt;br /&gt;
GenomePop2 allows the forward simulation of sequences of biallelic positions. As in the previous version, a number of evolutionary&lt;br /&gt;
and demographic settings are allowed. Several populations under any migration model can be implemented. Each population consists&lt;br /&gt;
of a number N of individuals.  Each individual is represented by one (haploids) or two (diploids) chromosomes with constant or variable&lt;br /&gt;
(hotspots) recombination between binary sites. The fitness model is multiplicative, with each derived allele having a multiplicative effect&lt;br /&gt;
of (1-s * h-E) on the global fitness value. By default E=0 and h=0.5 in diploids, but 1 in homozygotes or in haploids. Selective nucleotide&lt;br /&gt;
sites undergoing directional selection (positive or negative) in different populations can be defined. In addition, bottlenecks and/or&lt;br /&gt;
population expansion scenarios can be set by the user for a desired number of generations. Several runs can be executed and&lt;br /&gt;
a sample of user-defined size is obtained for each run and population.  For more detail on how to use GenomePop2, please visit the&lt;br /&gt;
web site here [http://webs.uvigo.es/acraaj/GenomePop2.htm]. More information about our installation can be found here [[GENOMEPOP2]].&lt;br /&gt;
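The multiplicative fitness model above can be sketched in a few lines (a minimal illustration with made-up parameters, ignoring the E term, and not taken from the GenomePop2 code):&lt;br /&gt;

```python
# Minimal sketch of a multiplicative fitness model in the spirit of the one
# described above (illustration only; s and h values are made up, and the
# E term from the model above is omitted). Each selected site multiplies
# the fitness by (1 - s*h) for a heterozygous derived allele and by
# (1 - s) for a homozygous one (h effectively 1 in that case).

def fitness(genotypes, s=0.01, h=0.5):
    """genotypes: per-site derived-allele counts (0, 1, or 2 copies)."""
    w = 1.0
    for copies in genotypes:
        if copies == 1:        # heterozygote: dominance-scaled effect
            w *= 1.0 - s * h
        elif copies == 2:      # homozygote for the derived allele
            w *= 1.0 - s
    return w
```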
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;HUMAnN2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
HUMAnN is a pipeline for efficiently and accurately profiling the presence/absence and abundance of microbial pathways in a community from metagenomic or metatranscriptomic sequencing data (typically millions of short DNA/RNA reads). HUMAnN2 is the next generation of HUMAnN (HMP Unified Metabolic Analysis Network). Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/humann2&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;IMa2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The IMa2 application performs basic ‘Isolation with Migration’ calculations using Bayesian inference and Markov &lt;br /&gt;
chain Monte Carlo methods. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The only major conceptual addition to IMa2 that makes it different from the&lt;br /&gt;
original IMa  program is that it can handle data from multiple populations. This requires that the user &lt;br /&gt;
specify a phylogenetic tree. Importantly, the tree must be rooted, and the sequence in time of internal&lt;br /&gt;
nodes must be known and specified. More information on the IMa2 and IMa can be found in the user&lt;br /&gt;
manual here [http://lifesci.rutgers.edu/%7Eheylab/ProgramsandData/Programs/IMa2/Using_IMa2_8_24_2011.pdf].&lt;br /&gt;
Information about our installation can be found here [[IMA2]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;I-TASSER&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
I-TASSER is a platform for protein structure and function predictions. 3D models are built based on multiple-threading alignments by LOMETS and iterative template fragment assembly simulations; function insights are derived by matching the 3D models with the BioLiP protein function database. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/itasser&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;LAMARC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
LAMARC is a program which estimates population-genetic parameters such as population size, population growth rate,&lt;br /&gt;
recombination rate, and migration rates.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It approximates a summation over all possible genealogies that could explain&lt;br /&gt;
the observed sample, which may be sequence, SNP, microsatellite, or electrophoretic data.  LAMARC and its sister program&lt;br /&gt;
MIGRATE are successor programs to the older programs Coalesce, Fluctuate, and Recombine, which are no longer being&lt;br /&gt;
supported.  These programs are memory-intensive, but can run effectively on workstations. They are supported on a variety&lt;br /&gt;
of operating systems.  For more detail on LAMARC please visit the website here [http://evolution.genetics.washington.edu/lamarc/index.html],&lt;br /&gt;
read this paper [http://evolution.genetics.washington.edu/lamarc/download/bioinformatics2006-lamarc2.0.pdf], and look&lt;br /&gt;
at the documentation here [http://evolution.genetics.washington.edu/lamarc/documentation/index.html]. More information&lt;br /&gt;
about our installation can be found here [[LAMARC]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;QIIME&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
QIIME (pronounced &amp;quot;chime&amp;quot;) stands for Quantitative Insights Into Microbial Ecology. QIIME is a pipeline application that uses numerous third-party applications.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
QIIME takes users from their raw sequencing output through initial analyses such as OTU picking, taxonomic assignment, and construction of phylogenetic trees from representative sequences of OTUs, and through downstream statistical analysis, visualization, and production of publication-quality graphics.&lt;br /&gt;
More information about our installation can be found here [[QIIME]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;USEARCH&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
USEARCH is a unique sequence analysis tool with thousands of users world-wide.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
USEARCH offers search and clustering algorithms that are often orders of magnitude faster than BLAST. &lt;br /&gt;
More information about our installation can be found here [[USEARCH]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;VELVET&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Velvet is a set of algorithms for &#039;&#039;de novo&#039;&#039; short read assembly using de Bruijn graphs. It was developed at the European Bioinformatics Institute, Cambridge, UK. More information about our installation can be found here [[VELVET]].&lt;br /&gt;
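The de Bruijn graph idea behind such assemblers can be sketched in a few lines (an illustration of the general technique only, not the Velvet implementation):&lt;br /&gt;

```python
# Toy de Bruijn graph built from short reads (a sketch of the general
# technique only; the real Velvet data structures are far more involved).
from collections import defaultdict

def de_bruijn(reads, k=3):
    """Map each (k-1)-mer to the set of (k-1)-mers that follow it."""
    graph = defaultdict(set)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].add(kmer[1:])  # prefix -> suffix edge
    return graph

# walking AC -> CG -> GT -> TA reconstructs the sequence "ACGTA"
g = de_bruijn(["ACGT", "CGTA"], k=3)
```

Real assemblers then find paths through this graph to recover contigs.&lt;br /&gt;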
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;VSEARCH&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
VSEARCH is an open-source alternative to USEARCH.  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
VSEARCH stands for vectorized search, as the tool takes advantage of parallelism in the form of SIMD vectorization as well as multiple threads to perform accurate alignments at high speed. VSEARCH uses an optimal global aligner (full dynamic programming Needleman-Wunsch), in contrast to USEARCH which by default uses a heuristic seed and extend aligner. This usually results in more accurate alignments and overall improved sensitivity (recall) with VSEARCH, especially for alignments with gaps. &lt;br /&gt;
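The full dynamic-programming global alignment mentioned above can be illustrated with a minimal Needleman-Wunsch scorer (a sketch with made-up match, mismatch, and gap scores; the VSEARCH implementation is vectorized and far more elaborate):&lt;br /&gt;

```python
def nw_score(a, b, match=2, mismatch=-1, gap=-2):
    """Optimal global alignment score via full dynamic programming."""
    # dp[i][j] = best score aligning a[:i] against b[:j]
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        dp[i][0] = i * gap                       # leading gaps in b
    for j in range(1, len(b) + 1):
        dp[0][j] = j * gap                       # leading gaps in a
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + sub,   # (mis)match
                           dp[i - 1][j] + gap,       # gap in b
                           dp[i][j - 1] + gap)       # gap in a
    return dp[len(a)][len(b)]
```

With these scores, two identical four-letter sequences score 8, and dropping one letter costs a single gap penalty.&lt;br /&gt;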
&lt;br /&gt;
&lt;br /&gt;
Additional details on VSEARCH can be found at: [https://github.com/torognes/vsearch this link]&lt;br /&gt;
&lt;br /&gt;
VSEARCH is installed on the Penzias HPC cluster. To start using VSEARCH, load the corresponding module first:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load vsearch  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Math, Engineering, Computer Science== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;ADCIRC&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ADCIRC is a system of programs for solving time-dependent, free-surface, circulation and transport problems in two and three dimensions.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;  These programs utilize the finite element method in space allowing the use of highly flexible, unstructured grids. The ADCIRC distribution includes and integrates the METIS tool for unstructured grid generation. In addition, ADCIRC includes a distribution of SWAN to which it can be coupled to add a shore wave simulation model. Typical ADCIRC applications have included: (i) modeling tides and wind driven circulation, (ii) analysis of hurricane storm surge and flooding, (iii) dredging feasibility and material disposal studies, (iv) larval transport studies, (v) near shore marine operations. For more detail on using ADCIRC, please visit the ADCIRC website here [http://adcirc.org/index.html] and read the ADCIRC manual [http://adcirc.org/documentv49/ADCIRC_title_page.html]. Details on using SWAN with ADCIRC can be found here [http://www.caseydietrich.com/swanadcirc] and at the SWAN web site [http://swanmodel.sourceforge.net]. More information about use and set-up can be found here [[ADCIRC]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;FDPPDIV&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
FDPPDiv is a program for estimating divergence times on a fixed, rooted tree topology. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
FDPPDiv offers two alternative approaches to divergence time estimation. &lt;br /&gt;
The DPPDiv part refers to the Dirichlet Process Prior (DPP) model for divergence &lt;br /&gt;
time estimation, and the F prefix (for Fossil) refers to the new Fossil Birth-Death approach. &lt;br /&gt;
More information about our installation can be found here [[FDPPDIV]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GAUSS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
An easy-to-use data analysis, mathematical and statistical environment based on the powerful, fast and efficient GAUSS Matrix Programming Language.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GAUSS is used to solve real world problems and data analysis problems of exceptionally large &lt;br /&gt;
scale. GAUSS is currently available on ANDY. At the CUNY HPC Center&lt;br /&gt;
GAUSS is typically run in serial mode. (Note:  GAUSS should not be confused with the&lt;br /&gt;
computational chemistry application Gaussian.) More information about our installation can &lt;br /&gt;
be found here [[GAUSS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Hapsembler&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Hapsembler is a haplotype-specific genome assembly toolkit that is designed for genomes that are rich in SNPs and other types of polymorphism. Hapsembler can be used to assemble reads from a variety of platforms including Illumina and Roche/454.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;  Hapsembler is currently installed on the Appel system. In order to access the Hapsembler binaries, load the hapsembler module with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load hapsembler&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;HOPSPACK&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
HOPSPACK stands for Hybrid Optimization Parallel Search Package, a framework designed to help users solve a wide range of derivative-free optimization problems.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;The objective functions can be noisy, non-convex, or non-smooth. The basic optimization problem addressed is to minimize an objective function f(x) of n unknowns subject to the constraints: AIx ≥ bI, AEx = bE, cI(x) ≥ 0, cE(x) = 0, and l ≤ x ≤ u.&lt;br /&gt;
The first two constraints specify linear inequalities and equalities with coefficient matrices AI and AE. The next two constraints describe nonlinear inequalities and equalities captured in the functions cI(x) and cE(x). The final constraints denote lower and upper bounds on the variables. HOPSPACK allows variables to be continuous or integer-valued and has provisions for multi-objective optimization problems. In general, the functions f(x), cI(x), and cE(x) can be noisy and nonsmooth, although most algorithms perform best on deterministic functions with continuous derivatives.&lt;br /&gt;
&lt;br /&gt;
Users may design and implement their own solver, either by writing their own code or by building on existing solvers already in the framework. Because all solvers (called citizens) are members of the same global class, they can share assigned resources.   &lt;br /&gt;
The main features of the package are:&lt;br /&gt;
&lt;br /&gt;
*Only function values are required for the optimization.&lt;br /&gt;
*The user must provide a separate program that can evaluate the objective and nonlinear constraint functions at a given point. &lt;br /&gt;
*A robust implementation of the Generating Set Search (GSS) solver is supplied, including the capability to handle linear constraints. &lt;br /&gt;
*Multiple solvers can run simultaneously and are easily configured to share information.&lt;br /&gt;
*Solvers may share a cache of computed function and constraint evaluations to eliminate duplicate work.&lt;br /&gt;
*Solvers can initiate and control sub-problems.&lt;br /&gt;
More information about our installation can be found here [[HOPSACK]].&lt;br /&gt;
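The Generating Set Search idea behind the GSS solver can be sketched as a toy, unconstrained pattern search (an illustration of the general technique only; the production GSS in HOPSPACK also handles linear constraints and evaluates points in parallel):&lt;br /&gt;

```python
def gss_minimize(f, x, step=1.0, tol=1e-6, shrink=0.5):
    """Toy generating set search: poll +/- each coordinate direction,
    move to any improving point, otherwise shrink the step size."""
    n = len(x)
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(n):
            for sign in (+1.0, -1.0):
                y = list(x)
                y[i] += sign * step
                fy = f(y)
                if fy < fx:               # accept an improving poll point
                    x, fx, improved = y, fy, True
        if not improved:
            step *= shrink                # no improvement: refine the mesh
    return x, fx

# minimize a simple quadratic with its minimum at (1, -2)
sol, val = gss_minimize(lambda p: (p[0] - 1)**2 + (p[1] + 2)**2, [0.0, 0.0])
```

Note that only function values are used, matching the derivative-free setting described above.&lt;br /&gt;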
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;LS-DYNA&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From its early development in the 1970s, LS-DYNA has evolved into a general purpose material&lt;br /&gt;
stress, collision, and crash analysis program with many built-in material and structural element&lt;br /&gt;
models. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In recent years, the code has also been adapted for both OpenMP and MPI parallel execution&lt;br /&gt;
on a variety of platforms.  The most recent version, LS-DYNA 7.1.2, is installed on &lt;br /&gt;
ANDY at the CUNY HPC Center under an academic license held by the City College of New York.&lt;br /&gt;
The use of this license to do work that is commercial in any way is prohibited.&lt;br /&gt;
&lt;br /&gt;
Details on LS-DYNA&#039;s use, input deck construction, and execution options can be found in the LS-DYNA&lt;br /&gt;
manual here [http://ftp.lstc.com/user/manuals/ls-dyna_971_manual_k_rev1.pdf]. All files related&lt;br /&gt;
to the HPC Center installation of version 971 (executables and example inputs) are located in:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
/share/apps/lsdyna/default/[bin,examples]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[LSDYNA]].&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Network Simulator-2 (NS2)&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NS2 is a discrete event simulator targeted at networking research. NS2 provides&lt;br /&gt;
substantial support for simulation of TCP, routing, and multicast protocols over&lt;br /&gt;
wired and wireless (local and satellite) networks.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It is installed on BOB at the CUNY HPC Center. For more detailed information look [http://www.isi.edu/nsnam/ns/ here].&lt;br /&gt;
More information about our installation can be found here [[NS2]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;OpenFOAM&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenFOAM is, first and foremost, a library that users may incorporate into their own code(s). OpenFOAM is installed on PENZIAS.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
More information about our installation can be found here [[OpenFOAM]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;OpenSEES&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenSEES, the Open System for Earthquake Engineering Simulation, is an object-oriented, open source software framework.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It allows users to create both serial and parallel finite element computer applications for simulating the response of structural and geotechnical systems subjected to earthquakes and other hazards. OpenSEES is primarily written in C++ and uses several Fortran and C numerical libraries for linear equation solving, and material and element routines. The software is installed on PENZIAS.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;ParGAP&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ParGAP is built on top of the GAP system. The latter is a system for computational discrete algebra, with particular emphasis on Computational Group Theory. GAP provides a programming language, a library of thousands of functions implementing algebraic algorithms written in the GAP language, as well as large data libraries of algebraic objects.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The ParGAP (Parallel GAP) package itself provides a way of writing parallel programs using the GAP language. Former names of the package were ParGAP/MPI and GAP/MPI; the word MPI refers to Message Passing Interface, a well-known standard for parallelism. ParGAP is based on the MPI standard, and this distribution includes a subset implementation of MPI, to provide a portable layer with a high level interface to BSD sockets.&lt;br /&gt;
More information about our installation can be found here [[ParGAP]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;SAGE&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Sage can be used to study elementary and advanced, pure and applied mathematics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
This includes a huge range of mathematics, including basic algebra, calculus, elementary to very&lt;br /&gt;
advanced number theory, cryptography, numerical computation, commutative algebra, group&lt;br /&gt;
theory, combinatorics, graph theory, exact linear algebra and much more.&lt;br /&gt;
More information about our installation can be found here [[SAGE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;WRF&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The Weather Research and Forecasting (WRF) model is a specific computer program with dual use for both weather&lt;br /&gt;
forecasting and weather research.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It was created through a partnership that includes the National Oceanic and Atmospheric&lt;br /&gt;
Administration (NOAA), the National Center for Atmospheric Research (NCAR), and more than 150 other organizations&lt;br /&gt;
and universities in the United States and abroad. WRF is the latest numerical model and application to be adopted by NOAA&#039;s&lt;br /&gt;
National Weather Service as well as the U.S. military and private meteorological services. It is also being adopted by&lt;br /&gt;
government and private meteorological services worldwide. More information about our installation can be found here [[WRF]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Economics, Business, Statistics, Analytics==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;R&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
R is a free software environment for statistical computing and graphics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:15px;&amp;quot;&amp;gt;General Notes&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The R language has become a de facto standard among statisticians and is widely used for statistical software development and data analysis. R is available on the following HPCC servers: Karle, Penzias, Appel, and Andy. Karle is the only machine where R can be used without submitting jobs to the SLURM manager. On all other systems users must submit their R jobs via the SLURM batch scheduler.&lt;br /&gt;
More information about our installation can be found here [[R]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;R-devel&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
R is a language and environment for statistical computing and graphics. R-devel provides both core R userspace and all R development components.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Stata/MP&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Stata is a complete, integrated statistical package that provides tools for data analysis, data management, and graphics. Stata/MP takes advantage of multiprocessor computers. The CUNY HPC Center is licensed to use Stata on up to 8 cores. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Currently Stata/MP is available for users on Karle (karle.csi.cuny.edu). &lt;br /&gt;
More information about our installation can be found here [[STATA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;SAS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
SAS (pronounced &amp;quot;sass&amp;quot;, originally Statistical Analysis System) is an integrated system of software products provided by SAS Institute Inc.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It enables the programmer to perform:&lt;br /&gt;
:*data entry, retrieval, management, and mining&lt;br /&gt;
:*report writing and graphics&lt;br /&gt;
:*statistical analysis&lt;br /&gt;
:*business planning, forecasting, and decision support&lt;br /&gt;
:*operations research and project management&lt;br /&gt;
:*quality improvement&lt;br /&gt;
:*applications development&lt;br /&gt;
:* data warehousing (extract, transform, load)&lt;br /&gt;
:*platform independent and remote computing&lt;br /&gt;
More information about our installation can be found here [[SAS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==General Development Systems==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Coming soon.&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==Tools, Libraries, Compilers==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;CGAL&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The Computational Geometry Algorithms Library (CGAL) offers data structures and algorithms.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt; &lt;br /&gt;
Examples of these are triangulations (2D constrained triangulations, and Delaunay triangulations and periodic triangulations in &lt;br /&gt;
2D and 3D), Voronoi diagrams (for 2D and 3D points, 2D additively weighted Voronoi diagrams, and segment Voronoi diagrams), polygons &lt;br /&gt;
(Boolean operations, offsets, straight skeleton), polyhedra (Boolean operations), arrangements of curves and their applications &lt;br /&gt;
(2D and 3D envelopes, Minkowski sums), mesh generation (2D Delaunay mesh generation and 3D surface and volume mesh &lt;br /&gt;
generation, skin surfaces), geometry processing (surface mesh simplification, subdivision and parameterization, as well as &lt;br /&gt;
estimation of local differential properties, and approximation of ridges and umbilics), alpha shapes, convex hull &lt;br /&gt;
algorithms (in 2D, 3D and dD), search structures (kd trees for nearest neighbor search, and range and segment trees), &lt;br /&gt;
interpolation (natural neighbor interpolation and placement of streamlines), shape analysis, fitting, and distances &lt;br /&gt;
(smallest enclosing sphere of points or spheres, smallest enclosing ellipsoid of points, principal component analysis), and &lt;br /&gt;
kinetic data structures.&lt;br /&gt;
&lt;br /&gt;
The library is installed on PENZIAS.&lt;br /&gt;
&lt;br /&gt;
More information can be found here http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/CGAL. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GMP&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
GMP is a library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and &lt;br /&gt;
floating-point numbers. There is no practical limit to the precision except the ones implied by the &lt;br /&gt;
available memory in the machine GMP runs on. GMP has a rich set of functions, and the functions have a &lt;br /&gt;
regular interface. The library is installed on PENZIAS.&lt;br /&gt;
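The kind of arbitrary-precision arithmetic GMP provides can be illustrated with the built-in big integers of Python (plain Python here, not the GMP C API):&lt;br /&gt;

```python
# Arbitrary-precision integer arithmetic of the kind GMP provides,
# illustrated with the built-in big integers of Python (not the GMP C API).
import math

f = math.factorial(100)     # exact 158-digit integer, no overflow
digits = len(str(f))
```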
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Gnuplot&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Gnuplot is a portable command-line driven graphing utility. It is installed on the following systems:&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
:*Karle under /usr/bin/gnuplot&lt;br /&gt;
:*Andy under /share/apps/gnuplot/default/bin/gnuplot&lt;br /&gt;
&lt;br /&gt;
Extensive documentation of gnuplot is available at the [http://www.gnuplot.info/ gnuplot&#039;s homepage].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;JULIA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Julia is a high-level, high-performance dynamic programming language for technical computing, with syntax that is familiar to users of other technical computing environments. Julia is installed on Penzias.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MAGMA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
MAGMA is a library similar to LAPACK but for hybrid architectures. MAGMA provides implementations for CUDA, Intel Xeon Phi, and OpenCL. &lt;br /&gt;
On CUNY HPCC systems, MAGMA is installed in its CUDA variant only on Penzias.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MATHEMATICA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
“Mathematica” is a fully integrated technical computing system that combines fast, high-precision numerical and symbolic computation with data visualization and programming capabilities. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Mathematica is currently installed on the CUNY HPC Center&#039;s ANDY cluster (andy.csi.cuny.edu) and the KARLE standalone server (karle.csi.cuny.edu). The basics of running Mathematica on CUNY HPC systems are presented here. Additional information on how to use Mathematica can be found at http://www.wolfram.com/learningcenter/&lt;br /&gt;
&lt;br /&gt;
More information is available in this wiki, find it here [[MATHEMATICA]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MATLAB&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The MATLAB high-performance language for technical computing&lt;br /&gt;
integrates computation, visualization, and programming in an&lt;br /&gt;
easy-to-use environment where problems and solutions are expressed in&lt;br /&gt;
familiar mathematical notation.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Typical uses include:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Math and computation&lt;br /&gt;
&lt;br /&gt;
Algorithm development&lt;br /&gt;
&lt;br /&gt;
Data acquisition&lt;br /&gt;
&lt;br /&gt;
Modeling, simulation, and prototyping&lt;br /&gt;
&lt;br /&gt;
Data analysis, exploration, and visualization&lt;br /&gt;
&lt;br /&gt;
Scientific and engineering graphics&lt;br /&gt;
&lt;br /&gt;
Application development, including graphical user interface building&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[MATLAB]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MET (Model Evaluation Tools)&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MET was developed by the National Center for Atmospheric Research (NCAR) Developmental Testbed Center (DTC) through the generous support of the U.S. Air Force Weather Agency (AFWA) and the National Oceanic and Atmospheric Administration (NOAA).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;MET is designed to be a highly-configurable, state-of-the-art suite of verification tools. It was developed using output from the Weather Research and Forecasting (WRF) modeling system but may be applied to the output of other modeling systems as well.&lt;br /&gt;
&lt;br /&gt;
MET provides a variety of verification techniques, including:&lt;br /&gt;
&lt;br /&gt;
*Standard verification scores comparing gridded model data to point-based observations&lt;br /&gt;
*Standard verification scores comparing gridded model data to gridded observations&lt;br /&gt;
*Spatial verification methods comparing gridded model data to gridded observations using neighborhood, object-based, and intensity-scale decomposition approaches&lt;br /&gt;
*Probabilistic verification methods comparing gridded model data to point-based or gridded observations&lt;br /&gt;
&lt;br /&gt;
More information about use and set-up can be found here [[MET]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Migrate&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Migrate estimates population parameters, effective population sizes and migration rates&lt;br /&gt;
of n populations, using genetic data.  It uses a coalescent theory approach taking into&lt;br /&gt;
account the history of mutations and the uncertainty of the genealogy.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;The parameter values are estimated by either a maximum likelihood (ML) approach or Bayesian&lt;br /&gt;
inference (BI).  Migrate&#039;s output is presented in a text file and in a PDF file. The PDF file&lt;br /&gt;
eventually will contain all possible analyses including histograms of posterior distributions.&lt;br /&gt;
More information about our installation can be found here [[MIGRATE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Python&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Python is a programming language that lets you work more quickly and integrate your systems more effectively. You can learn to use Python and see almost immediate gains in productivity and lower maintenance costs. [http://www.python.org/]&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
There are two supported versions installed on the ANDY system: &lt;br /&gt;
&lt;br /&gt;
*Python 3.1.3 located under /share/apps/python/3.1.3/bin&lt;br /&gt;
*Python 2.7.3 located under /share/apps/epd/7.3-2/bin&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[PYTHON]]&lt;br /&gt;
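Since both interpreters live outside the default PATH, the simplest way to select one is to call it by its full path. A minimal sketch (the paths are the ANDY install locations listed above; `myscript.py` is a hypothetical user script):

```shell
# Invoke the Python 3 installation on ANDY by full path
# (install locations as listed above; myscript.py is a placeholder)
/share/apps/python/3.1.3/bin/python3 myscript.py

# Invoke the EPD Python 2.7 installation instead
/share/apps/epd/7.3-2/bin/python myscript.py
```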
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;SAMTOOLS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
SAMTOOLS provides various utilities for manipulating alignments in the SAM format, including sorting,&lt;br /&gt;
merging, indexing and generating alignments in a per-position format.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
SAM (Sequence Alignment/Map) is a generic, compact format for storing large nucleotide sequence alignments.  It aims to be&lt;br /&gt;
a format that:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Is flexible enough to store all the alignment information generated by various alignment programs;&lt;br /&gt;
&lt;br /&gt;
Is simple enough to be easily generated by alignment programs or converted from existing formats;&lt;br /&gt;
&lt;br /&gt;
Allows most operations on the alignment to work without loading the whole alignment into memory;&lt;br /&gt;
&lt;br /&gt;
Allows the file to be indexed by genomic position to efficiently retrieve all reads aligning to a locus.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[SAMTOOLS]]&lt;br /&gt;
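As an illustration of the utilities mentioned above, a typical post-alignment workflow converts a SAM file to BAM, sorts it by position, and indexes it. A sketch using standard SAMTOOLS subcommands (file names are hypothetical; the `sort -o` form assumes a SAMTOOLS 1.x release):

```shell
# Convert SAM to BAM, sort by genomic position, then index
# (file names are placeholders)
samtools view -b aligned.sam -o aligned.bam
samtools sort aligned.bam -o aligned.sorted.bam
samtools index aligned.sorted.bam
```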
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Thrust Library (CUDA)&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Thrust is a C++ template library for CUDA based on the Standard Template Library (STL). Thrust allows you&lt;br /&gt;
to implement high performance parallel applications with minimal programming effort through a high-level&lt;br /&gt;
interface that is fully interoperable with CUDA C.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Thrust is integrated into the default&lt;br /&gt;
CUDA distribution. The HPC Center&#039;s default CUDA installation on PENZIAS includes the&lt;br /&gt;
Thrust library. More information about our installation can be found here [[THRUST]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Xmgrace&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Grace is a WYSIWYG 2D plotting tool for the X Window System and M*tif. Xmgrace is developed at the Plasma Laboratory, Weizmann Institute of Science. More information about its capabilities can be found at the web page http://plasma-gate.weizmann.ac.il/Grace/&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Grace is installed on Karle. To use it from the command line, log in to Karle as usual and start Grace by typing &amp;quot;xmgrace&amp;quot; followed by return. Alternatively, use the full path: &amp;quot;/share/apps/xmgrace/default/grace/bin/xmgrace&amp;quot;.&lt;br /&gt;
In order to use Grace in GUI mode, log in to Karle with the -X option (see [http://wiki.csi.cuny.edu/cunyhpc/index.php/Main_Page#X11_Forwarding_or_Tunneling this article] &lt;br /&gt;
for details) and start Xmgrace as described above.&lt;br /&gt;
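Putting the steps above together, a typical GUI session looks like this (hostname and install path as given above):

```shell
# Log in to Karle with X11 forwarding enabled
ssh -X karle.csi.cuny.edu

# Start Grace, either from the PATH...
xmgrace
# ...or by its full installation path
/share/apps/xmgrace/default/grace/bin/xmgrace
```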
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Alphabetical List ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==A== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;ADCIRC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ADCIRC is a system of programs for solving time-dependent, free-surface, circulation and transport problems in two and three dimensions.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;  These programs utilize the finite element method in space allowing the use of highly flexible, unstructured grids. The ADCIRC distribution includes and integrates the METIS tool for unstructured grid generation. In addition, ADCIRC includes a distribution of SWAN to which it can be coupled to add a shore wave simulation model. Typical ADCIRC applications have included: (i) modeling tides and wind driven circulation, (ii) analysis of hurricane storm surge and flooding, (iii) dredging feasibility and material disposal studies, (iv) larval transport studies, (v) near shore marine operations. For more detail on using ADCIRC, please visit the ADCIRC website here [http://adcirc.org/index.html] and read the ADCIRC manual [http://adcirc.org/documentv49/ADCIRC_title_page.html]. Details on using SWAN with ADCIRC can be found here [http://www.caseydietrich.com/swanadcirc] and at the SWAN web site [http://swanmodel.sourceforge.net]. More information about use and set-up can be found here [[ADCIRC]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;AMBER (Assisted Model Building with Energy Refinement)&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Amber is the collective name for a suite of programs for classical bio-molecular simulations. &lt;br /&gt;
The name &amp;quot;Amber&amp;quot; also denotes the family of potentials (force fields) used with Amber &lt;br /&gt;
software. Here we discuss only the simulation packages, not the force fields or the free tools&lt;br /&gt;
available via the AmberTools package. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/amber&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;ANVIO&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Anvio is an analysis and visualization platform for genomics data. Anvio allows various types of workflows to be &lt;br /&gt;
established. [[ANVIO]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;AUGUSTUS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
AUGUSTUS is a program that predicts genes in eukaryotic genomic sequences. It is gene-finding software based on Hidden Markov Models (HMMs), &lt;br /&gt;
described in papers by Stanke and Waack (2003), Stanke et al. (2006), Stanke et al. (2006b), and Stanke et al. (2008). The local version of the program is installed on &lt;br /&gt;
Penzias. More information can be found here: [[AUGUSTUS]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;AUTODOCK&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
AutoDock is a suite of automated docking tools.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt; It is designed to predict how small molecules, such as substrates or drug candidates, bind to a receptor of known 3D structure.  AutoDock actually consists of two main programs: &#039;&#039;autodock&#039;&#039; itself performs the docking of the ligand to a set of grids describing the target protein; &#039;&#039;autogrid&#039;&#039; pre-calculates these grids. More information about the software may be found at the autodock web-page [http://autodock.scripps.edu/]. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/autodock&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==B== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BAMOVA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Bamova is a package used for genetic analysis of a wide range of organisms on the basis of &lt;br /&gt;
next-generation sequence data. The software implements Bayesian Analysis of Molecular Variance and &lt;br /&gt;
different likelihood models for three different types of molecular data &lt;br /&gt;
(including two models for high throughput sequence data). For more detail on BAMOVA please visit the BAMOVA web site [http://www.uwyo.edu/buerkle/software/bamova] and manual &lt;br /&gt;
here [http://www.uwyo.edu/buerkle/software/bamova/bamova_manual_1.0.pdf]. Further information can also be found here [[BAMOVA]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BAYESCAN&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BAYESCAN is a population genomics software package.  It identifies outlier loci and is applicable &lt;br /&gt;
to both dominant and codominant data. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;BayeScan aims to identify candidate loci under natural selection from genetic data, using differences in allele frequencies between populations.  BayeScan is based on the multinomial-Dirichlet model.  One of the scenarios covered consists of an island model in which subpopulation allele frequencies are correlated through a common migrant gene pool from which they differ in varying degrees.  The difference in allele frequency between this common gene pool and each subpopulation is measured by a subpopulation-&lt;br /&gt;
specific FST coefficient.  Therefore, this formulation can consider realistic ecological scenarios where the effective size and the immigration rate may differ among subpopulations.&lt;br /&gt;
More detailed information on Bayescan can be found at the web site here [http://cmpg.unibe.ch/software/bayescan/index.html] and in the manual here [http://cmpg.unibe.ch/software/bayescan/files/BayeScan2.1_manual.pdf]. More information about our installation can be found here [[BAYESCAN]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BEAST&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BEAST is a powerful and flexible evolutionary analysis package for molecular sequence variation. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The package implements a family of Markov chain Monte Carlo (MCMC) algorithms for Bayesian phylogenetic inference, divergence time dating, coalescent analysis, phylogeography and related molecular evolutionary analyses. It is a cross-platform Java program for Bayesian MCMC analysis of molecular sequences. It is entirely orientated towards rooted, time-measured phylogenies inferred using strict or relaxed molecular clock models. It can be used as a method of reconstructing phylogenies, but is also a framework for testing evolutionary hypotheses without conditioning on a single tree topology.  BEAST uses MCMC to average over tree space, so that each tree is weighted proportional to its posterior probability. The distribution includes a simple to use user-interface program called &#039;BEAUti&#039; for setting up standard analyses and a suite of programs for analysing the results. For more detail on BEAST (and BEAUTi) please visit the BEAST web site [http://beast.bio.ed.ac.uk/Main_Page]. More information about our installation can be found here [http://wiki.csi.cuny.edu/cunyhpc/index.php/Template:BEAST BEAST].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BEST&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BEST is an application for estimating gene trees and the species tree from multilocus sequences.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The program uses information from multiple gene trees and performs a Bayesian analysis to estimate the &lt;br /&gt;
topology of the species tree, divergence times and population sizes.  &lt;br /&gt;
&lt;br /&gt;
It provides a new approach for estimating the mutation-rate-based&lt;br /&gt;
phylogenetic relationships among species.  Its method accounts for deep coalescence,&lt;br /&gt;
but not for other complicating issues such as horizontal transfer or gene duplication. The&lt;br /&gt;
program works in conjunction with the popular Bayesian phylogenetics package, MrBayes&lt;br /&gt;
(Ronquist and Huelsenbeck, Bioinformatics, 2003).  BEST&#039;s parameters are defined using&lt;br /&gt;
the &#039;prset&#039; command from MrBayes.  Details on BEST&#039;s capabilities and options are available&lt;br /&gt;
at the BEST web site here [http://www.stat.osu.edu/~dkp/BEST/introduction]. More information&lt;br /&gt;
about our installation is available here [[BEST]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BOWTIE2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BOWTIE2 is an ultrafast and memory-efficient tool for aligning sequencing reads to long reference sequences. It is particularly good at aligning reads of about 50 up to 100s or 1,000s of characters, and particularly good at aligning to relatively long (e.g. mammalian) genomes.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
BOWTIE2 indexes the genome with an FM Index to keep its memory&lt;br /&gt;
footprint small: for the human genome, its memory footprint is typically around 3.2 GB. BOWTIE2 supports gapped,&lt;br /&gt;
local, and paired-end alignment modes. BOWTIE2 is part of a sequence alignment and analysis tool chain developed&lt;br /&gt;
at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics&lt;br /&gt;
and Computational Biology.  The other tools in this collection, CUFFLINKS, SAMTOOLS, and TOPHAT, are also installed at&lt;br /&gt;
the CUNY HPC Center. Additional information can be found at the BOWTIE2 home page here [http://bowtie-bio.sourceforge.net/bowtie2/index.shtml].&lt;br /&gt;
Information about our installation can be found here [[BOWTIE2]].&lt;br /&gt;
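A minimal alignment run has two steps: build the FM index from a reference, then align reads against it. A sketch using the standard BOWTIE2 commands (file names are hypothetical):

```shell
# Build the FM index from a reference FASTA (one-time step)
bowtie2-build reference.fa ref_index

# Align paired-end reads against the index, writing SAM output
bowtie2 -x ref_index -1 reads_1.fq -2 reads_2.fq -S aligned.sam
```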
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BPP2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BPP2 uses a Bayesian modeling approach to generate the posterior probabilities of species assignments taking into account uncertainties due to unknown gene trees and the ancestral coalescent process. For tractability, it relies on a user-specified guide tree to avoid integrating over all possible species delimitations.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Additional information can be found at the download site here [http://abacus.gene.ucl.ac.uk/software.html]. More information about our installation can be found here [[BPP2]].&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BROWNIE&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
BROWNIE is a program for analyzing rates of continuous character evolution and looking for substantial rate differences in different parts of a tree using likelihood&lt;br /&gt;
ratio tests and Akaike Information Criterion (AIC) statistics. It now also implements many other methods for examining trait evolution and methods for doing species&lt;br /&gt;
delimitation. More information about our installation can be found here [[BROWNIE]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==C== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;CGAL&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The Computational Geometry Algorithms Library (CGAL) offers data structures and algorithms.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt; &lt;br /&gt;
Examples of these are triangulations (2D constrained triangulations, and Delaunay triangulations and periodic triangulations in &lt;br /&gt;
2D and 3D), Voronoi diagrams (for 2D and 3D points, 2D additively weighted Voronoi diagrams, and segment Voronoi diagrams), polygons &lt;br /&gt;
(Boolean operations, offsets, straight skeleton), polyhedra (Boolean operations), arrangements of curves and their applications &lt;br /&gt;
(2D and 3D envelopes, Minkowski sums), mesh generation (2D Delaunay mesh generation and 3D surface and volume mesh &lt;br /&gt;
generation, skin surfaces), geometry processing (surface mesh simplification, subdivision and parameterization, as well as &lt;br /&gt;
estimation of local differential properties, and approximation of ridges and umbilics), alpha shapes, convex hull &lt;br /&gt;
algorithms (in 2D, 3D and dD), search structures (kd trees for nearest neighbor search, and range and segment trees), &lt;br /&gt;
interpolation (natural neighbor interpolation and placement of streamlines), shape analysis, fitting, and distances &lt;br /&gt;
(smallest enclosing sphere of points or spheres, smallest enclosing ellipsoid of points, principal component analysis), and &lt;br /&gt;
kinetic data structures.&lt;br /&gt;
&lt;br /&gt;
The library is installed on PENZIAS.&lt;br /&gt;
&lt;br /&gt;
More information can be found here http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/CGAL. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;CONSED&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CONSED is a DNA sequence analysis finishing tool that provides sequence viewing, editing, alignment, and&lt;br /&gt;
assembly capabilities from an X Window System graphical user interface (GUI).  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It makes extensive use of other non-graphical&lt;br /&gt;
and underlying sequence analysis tools including PHRED, PHRAP, and CROSSMATCH that may also be used separately&lt;br /&gt;
and are described elsewhere in this document.  It also includes a viewer called BAMVIEW.  The CONSED tool chain is&lt;br /&gt;
developed and maintained at the University of Washington and is described&lt;br /&gt;
more completely here [http://bozeman.mbt.washington.edu/consed/consed.html].&lt;br /&gt;
CONSED is provided at the CUNY HPC Center under an academic license that allows use, but not the copying or&lt;br /&gt;
outbound transfer of any of the executables or files distributed under this academic license.  The license is not &lt;br /&gt;
transferable in any way and users wishing to run the application at their own site must acquire a license directly&lt;br /&gt;
from the authors.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center supports CONSED version 23.0 for interactive use on KARLE.  CONSED 23.0 and the tool&lt;br /&gt;
chain described above are also installed on ANDY to allow for the batch use of the underlying support tools mentioned above&lt;br /&gt;
and described in detail below.  In general, running GUI-based applications on ANDY&#039;s login node is discouraged.  There&lt;br /&gt;
should be little need to do this, as KARLE is on the periphery of the CUNY HPC network, making login there direct, and&lt;br /&gt;
KARLE shares its HOME directory file system with ANDY, making files created on either system immediately available on&lt;br /&gt;
the other.&lt;br /&gt;
&lt;br /&gt;
Rather than rewrite portions of the CONSED manual here, users are directed to the manual&#039;s &amp;quot;Quick Tour&amp;quot; section&lt;br /&gt;
here [http://bozeman.mbt.washington.edu/consed/distributions/README.23.0.txt] and asked to walk through some&lt;br /&gt;
of the exercises after logging into KARLE.  If problems or questions come up, please post them to &amp;quot;hpchelp@csi.cuny.edu&amp;quot;.&lt;br /&gt;
The CONSED 23.0 distribution is installed on KARLE in the following directory:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/share/apps/consed/default&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
All the files in the distribution can be found there.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;CP2K&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CP2K is a program to perform atomistic and molecular simulations of solid state, liquid, molecular, and biological&lt;br /&gt;
systems.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It provides a general framework for different methods such as e.g., density functional theory (DFT) using&lt;br /&gt;
a mixed Gaussian and plane waves approach (GPW) and classical pair and many-body potentials. CP2K provides&lt;br /&gt;
state-of-the-art methods for efficient and accurate atomistic simulations. More information about our installation &lt;br /&gt;
can be found here [[CP2K]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;CUFFLINKS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CUFFLINKS assembles transcripts, estimates their abundances, and tests for differential expression and regulation in&lt;br /&gt;
RNA-Seq samples.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It accepts aligned RNA-Seq reads and assembles the alignments into a parsimonious set of transcripts.&lt;br /&gt;
CUFFLINKS then estimates the relative abundances of these transcripts based on how many reads support each one, taking&lt;br /&gt;
into account biases in library preparation protocols.  CUFFLINKS is part of a sequence alignment and analysis tool chain developed&lt;br /&gt;
at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics&lt;br /&gt;
and Computational Biology.  The other tools in this collection, BOWTIE, SAMTOOLS, and TOPHAT, are also installed at&lt;br /&gt;
the CUNY HPC Center.  Additional information can be found at the CUFFLINKS home page here [http://abacus.gene.ucl.ac.uk/software.html].&lt;br /&gt;
More information about our installation can be found here [[CUFFLINKS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==D== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;DL_POLY&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
DL_POLY is a general purpose molecular dynamics simulation package developed at Daresbury Laboratory by W. Smith, T.R. Forester and I.T. Todorov. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Both serial and parallel versions are available. The original package was developed by the Molecular Simulation Group (now part of the Computational Chemistry Group, MSG) at Daresbury Laboratory under the auspices of the Engineering and Physical Sciences Research Council (EPSRC) for the EPSRC&#039;s Collaborative Computational Project for the Computer Simulation of Condensed Phases ( CCP5). Later developments were also supported by the Natural Environment Research Council through the eMinerals project. The package is the property of the Central Laboratory of the Research Councils, UK. More information about our installation and use can be found here [[DL_POLY]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==E== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;ExaML&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ExaML stands for Exascale Maximum Likelihood, an MPI code for phylogenetic inference. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The code is installed only on Penzias and implements the popular RAxML search algorithm for maximum likelihood based inference of phylogenetic trees. &lt;br /&gt;
&lt;br /&gt;
It uses a radically new MPI parallelization approach that yields improved parallel efficiency, in particular on partitioned multi-gene or whole-genome datasets.&lt;br /&gt;
&lt;br /&gt;
When using ExaML please cite the following paper:&lt;br /&gt;
&lt;br /&gt;
Alexey M. Kozlov, Andre J. Aberer, Alexandros Stamatakis: &amp;quot;ExaML Version 3: A Tool for Phylogenomic Analyses on Supercomputers.&amp;quot; Bioinformatics (2015) 31 (15): 2577-2579.&lt;br /&gt;
&lt;br /&gt;
It is up to 4 times faster than RAxML-Light [1].&lt;br /&gt;
&lt;br /&gt;
Like RAxML-Light, ExaML implements checkpointing, SSE3 and AVX vectorization, and memory-saving techniques.&lt;br /&gt;
&lt;br /&gt;
[1] A. Stamatakis, A.J. Aberer, C. Goll, S.A. Smith, S.A. Berger, F. Izquierdo-Carrasco: &amp;quot;RAxML-Light: A Tool for computing TeraByte Phylogenies&amp;quot;, Bioinformatics 2012; doi: 10.1093/bioinformatics/bts309.&lt;br /&gt;
&lt;br /&gt;
The run script for a parallel job is analogous to the one used to run RAxML on Penzias and Andy.&lt;br /&gt;
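As a sketch only, not a tested recipe (the module name, partition, task count, and input file names here are assumptions to adapt to one's own account), a minimal SLURM script for an MPI ExaML run could look as follows:&lt;br /&gt;

```shell
#!/bin/bash
#SBATCH --partition=production   # assumed partition name
#SBATCH --job-name=examl_run
#SBATCH --nodes=1
#SBATCH --ntasks=8

# Load the ExaML environment (module name is an assumption)
module load examl

cd $SLURM_SUBMIT_DIR

# -t: starting tree, -s: binary alignment produced by ExaML's parser,
# -m: rate-heterogeneity model, -n: run name for the output files
mpirun -np 8 examl -t start.tree -s alignment.binary -m GAMMA -n run1
```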
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;ExaBayes&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ExaBayes is a software package for Bayesian tree inference. It is particularly suitable for large-scale analyses on computer clusters. It is installed on the PENZIAS server at the HPC Center. &lt;br /&gt;
The installed package is the MPI-parallel version. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Availability:&#039;&#039;&#039; PENZIAS&lt;br /&gt;
&#039;&#039;&#039;Module file:&#039;&#039;&#039; exabayes&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Citation&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
Fredrik Ronquist, Maxim Teslenko, Paul van der Mark, Daniel L Ayres, Aaron Darling, Sebastian Höhna, Bret Larget, Liang Liu, Marc a Suchard, and John P Huelsenbeck. MrBayes 3.2: efficient Bayesian phylogenetic inference and model choice across a large model space. Systematic biology, 61(3):539--42, May 2012.&lt;br /&gt;
&lt;br /&gt;
Alexei J Drummond, Marc a Suchard, Dong Xie, and Andrew Rambaut. Bayesian phylogenetics with BEAUti and the BEAST 1.7. Molecular biology and evolution, 29(8):1969--73, August 2012. &lt;br /&gt;
&lt;br /&gt;
Clemens Lakner, Paul van der Mark, John P Huelsenbeck, Bret Larget, and Fredrik Ronquist. Efficiency of Markov chain Monte Carlo tree proposals in Bayesian phylogenetics. Systematic biology, 57(1):86--103, February 2008. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Use:&#039;&#039;&#039; An example SLURM script to run ExaBayes on PENZIAS is given below&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition=production&lt;br /&gt;
#SBATCH --job-name=&amp;lt;name_of_job&amp;gt;&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=2&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to the working directory in SLURM&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
mpirun -np 2 exabayes &amp;lt;input_file&amp;gt; &amp;gt; output_file&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
More information about the application, along with sample workflows, is available on the ExaBayes web site:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://sco.h-its.org/exelixis/web/software/exabayes/manual/index.html#sec-11&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==F== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;FDPPDIV&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
FDPPDiv is a program for estimating divergence times on a fixed, rooted tree topology. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
FDPPDiv offers two alternative approaches to divergence time estimation. &lt;br /&gt;
The DPPDiv part refers to the Dirichlet Process Prior (DPP) model for divergence &lt;br /&gt;
time estimation, and the F prefix (for Fossil) refers to the new Fossil Birth-Death approach. &lt;br /&gt;
More information about our installation can be found here [[FDPPDIV]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==G== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GAMESS-US&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GAMESS is a program for ab initio molecular quantum chemistry.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Briefly, GAMESS can compute SCF wavefunctions ranging from RHF, ROHF, UHF, GVB, and MCSCF. Correlation corrections to these SCF wavefunctions include Configuration Interaction, second order perturbation Theory, and Coupled-Cluster approaches, as well as the Density Functional Theory approximation. Excited states can be computed by CI, EOM, or TD-DFT procedures. Nuclear gradients are available, for automatic geometry optimization, transition state searches, or reaction path following. Computation of the energy hessian permits prediction of vibrational frequencies, with IR or Raman intensities. Solvent effects may be modeled by the discrete Effective Fragment potentials, or continuum models such as the Polarizable Continuum Model. Numerous relativistic computations are available, including infinite order two component scalar corrections, with various spin-orbit coupling options. The Fragment Molecular Orbital method permits use of many of these sophisticated treatments to be used on very large systems, by dividing the computation into small fragments. Nuclear wavefunctions can also be computed, in VSCF, or with explicit treatment of nuclear orbitals by the NEO code. More information, including code, can be found here [[GAMESS-US]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GARLI&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GARLI is a program that performs phylogenetic inference using the maximum-likelihood criterion.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Several sequence types are supported, including nucleotide, amino acid and codon. Version 2.0 adds support for&lt;br /&gt;
partitioned models and morphology-like data types. It is usable on all operating systems, and is written and&lt;br /&gt;
maintained by Derrick Zwickl at the University of Texas at Austin.  Additional information can be found&lt;br /&gt;
on the GARLI Wiki here [https://www.nescent.org/wg_garli/Main_Page]. More information about our installation &lt;br /&gt;
can be found here [[GARLI]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GAUSS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
An easy-to-use data analysis, mathematical and statistical environment based on the powerful, fast and efficient GAUSS Matrix Programming Language.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GAUSS is used to solve real world problems and data analysis problems of exceptionally large &lt;br /&gt;
scale. GAUSS is currently available on ANDY. At the CUNY HPC Center&lt;br /&gt;
GAUSS is typically run in serial mode. (Note:  GAUSS should not be confused with the&lt;br /&gt;
computational chemistry application Gaussian.) More information about our installation can &lt;br /&gt;
be found here [[GAUSS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Gaussian09&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Gaussian09 is third-party, commercially licensed software from Gaussian, Inc. It is a set of programs for calculating electronic structure.  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Gaussian09 is available for general use only on ANDY. The Gaussian User Guide can be found here [http://www.gaussian.com]. More information about our installation can be found here [[GAUSSIAN09]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GMP&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
GMP is a library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and &lt;br /&gt;
floating-point numbers. There is no practical limit to the precision except the ones implied by the &lt;br /&gt;
available memory in the machine GMP runs on. GMP has a rich set of functions, and the functions have a &lt;br /&gt;
regular interface. The library is installed on PENZIAS.&lt;br /&gt;
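As an illustrative sketch (the module name and source file name are assumptions), a C program that includes gmp.h can be built against the installed library along these lines:&lt;br /&gt;

```shell
# Load the GMP environment (module name is an assumption)
module load gmp

# Compile a C source file that uses GMP and link against the library
gcc my_bignum.c -o my_bignum -lgmp
./my_bignum
```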
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Gnuplot&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Gnuplot is a portable command-line driven graphing utility. It is installed on the following systems:&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
:*Karle under /usr/bin/gnuplot&lt;br /&gt;
:*Andy under /share/apps/gnuplot/default/bin/gnuplot&lt;br /&gt;
&lt;br /&gt;
Extensive documentation of gnuplot is available at the [http://www.gnuplot.info/ gnuplot&#039;s homepage].&lt;br /&gt;
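For example, a plot can be produced non-interactively by passing commands with gnuplot&#039;s -e option (the output file name here is hypothetical), which suits batch use on the clusters:&lt;br /&gt;

```shell
# Write a PNG plot of sin(x) without opening an interactive session
gnuplot -e "set terminal png; set output 'sine.png'; plot sin(x)"
```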
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GenomePop2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GenomePop2 is a newer and specialized version of the older program GenomePop. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GenomePop2 is designed to manage SNPs under more flexible and useful settings that are controlled by the user.  &lt;br /&gt;
If you need models with more than 2 alleles you should use the older GenomePop version of the program.  &lt;br /&gt;
&lt;br /&gt;
GenomePop2 allows the forward simulation of sequences of biallelic positions. As in the previous version, a number of evolutionary&lt;br /&gt;
and demographic settings are allowed. Several populations under any migration model can be implemented. Each population consists&lt;br /&gt;
of a number N of individuals.  Each individual is represented by one (haploids) or two (diploids) chromosomes with constant or variable&lt;br /&gt;
(hotspots) recombination between binary sites. The fitness model is multiplicative with each derived allele having a multiplicative effect&lt;br /&gt;
of (1-s * h-E) onto the global fitness value. By default E=0 and h=0.5 in diploids, but 1 in homozygotes or in haploids. Selective nucleotide&lt;br /&gt;
sites undergoing directional selection (positive or negative) in different populations can be defined. In addition, bottlenecks and/or&lt;br /&gt;
population expansion scenarios can be set up by the user for a desired number of generations. Several runs can be executed and&lt;br /&gt;
a sample of user-defined size is obtained for each run and population.  For more detail on how to use GenomePop2, please visit the&lt;br /&gt;
web site here [http://webs.uvigo.es/acraaj/GenomePop2.htm]. More information about our installation can be found here [[GENOMEPOP2]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GROMACS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GROMACS (Groningen Machine for Chemical Simulations)&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GROMACS is a full-featured suite of free software, licensed under the GNU&lt;br /&gt;
General Public License to perform molecular dynamics simulations -- in other words, to simulate the behavior of molecular&lt;br /&gt;
systems with hundreds to millions of particles using Newton&#039;s equations of motion.  It is primarily used for research on&lt;br /&gt;
proteins, lipids, and polymers, but can be applied to a wide variety of chemical and biological research questions.&lt;br /&gt;
&lt;br /&gt;
Details and submission scripts for production runs can be found at:&lt;br /&gt;
http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/gromacs&lt;br /&gt;
Please note that preparing a molecular system for simulation with the GROMACS tools cannot be done on the login node. Instead, users must either use their own workstations or use the interactive or development queues.&lt;br /&gt;
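As a hedged sketch of that workflow (the module name, queue name, and input file names are assumptions), system preparation runs in an interactive session and only the production dynamics goes through the batch queue:&lt;br /&gt;

```shell
# Request an interactive session for system preparation (queue name is an assumption)
srun --partition=interactive --pty bash

# Build the portable run input file from parameters, coordinates, and topology
module load gromacs
gmx grompp -f md.mdp -c conf.gro -p topol.top -o topol.tpr

# The production run itself is then submitted as a batch job that calls:
# mpirun gmx_mpi mdrun -deffnm topol
```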
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GPAW&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It uses real-space uniform grids and multigrid methods, atom-centered basis-functions or&lt;br /&gt;
plane-waves. GPAW calculations are controlled through scripts written in the programming language &lt;br /&gt;
Python. GPAW relies on the Atomic Simulation Environment (ASE), which is a Python package&lt;br /&gt;
that helps to describe atoms. The ASE package also handles molecular dynamics, analysis, &lt;br /&gt;
visualization, geometry optimization and more. More information about our installation can &lt;br /&gt;
be found here [[GPAW]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==H==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Hapsembler&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Hapsembler is a haplotype-specific genome assembly toolkit that is designed for genomes that are rich in SNPs and other types of polymorphism. Hapsembler can be used to assemble reads from a variety of platforms including Illumina and Roche/454.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;  Hapsembler is currently installed on the Appel system. To access the Hapsembler binaries, load the hapsembler module with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load hapsembler&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;HOOMD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
HOOMD performs general-purpose particle dynamics simulations, taking advantage of NVIDIA GPUs to attain a level of performance&lt;br /&gt;
equivalent to many processor cores on a fast cluster.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Unlike some other applications in the particle and molecular dynamics space, HOOMD developers have worked to implement &lt;br /&gt;
all of the code&#039;s computationally intensive kernels on the GPU, although currently only single node, single-GPU or &lt;br /&gt;
OpenMP-GPU runs are possible. There is no MPI-GPU or distributed parallel GPU version available at this time.&lt;br /&gt;
&lt;br /&gt;
HOOMD&#039;s object-oriented design patterns make it both versatile and expandable. Various types of potentials, integration methods&lt;br /&gt;
and file formats are currently supported, and more are added with each release. The code is available and open source, so anyone&lt;br /&gt;
can write a plugin or change the source to add additional functionality.  Simulations are configured and run using simple python&lt;br /&gt;
scripts, allowing complete control over the force field choice, integrator, all parameters, how many time steps are run, etc.&lt;br /&gt;
The scripting system is designed to be as simple as possible to the non-programmer.&lt;br /&gt;
&lt;br /&gt;
The HOOMD development effort is led by the Glotzer group at the University of Michigan, but many groups from different universities&lt;br /&gt;
have contributed code that is now part of the HOOMD main package, see the credits page for the full list. The HOOMD website and&lt;br /&gt;
documentation are available here [http://codeblue.umich.edu/hoomd-blue/about.html]. More information about our installation can be&lt;br /&gt;
found here [[HOOMD]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;HOPSPACK&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
HOPSPACK (Hybrid Optimization Parallel Search Package) is designed to help users solve a wide range of derivative-free optimization problems.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;The latter can be noisy, non-convex, or non-smooth. The basic optimization problem addressed is to minimize an objective function f(x) of n unknowns subject to the constraints AIx ≥ bI, AEx = bE, cI(x) ≥ 0, cE(x) = 0, and l ≤ x ≤ u.&lt;br /&gt;
The first two constraints specify linear inequalities and equalities with coefficient matrices AI and AE. The next two constraints describe nonlinear inequalities and equalities captured in the functions cI(x) and cE(x). The final constraints denote lower and upper bounds on the variables. HOPSPACK allows variables to be continuous or integer-valued and has provisions for multi-objective optimization problems. In general, the functions f(x), cI(x), and cE(x) can be noisy and nonsmooth, although most algorithms perform best on deterministic functions with continuous derivatives.&lt;br /&gt;
&lt;br /&gt;
Users can design and implement their own solvers, either by writing their own code or by building on existing solvers already in the framework. Because all solvers (called citizens) are members of the same global class, they can share assigned resources.   &lt;br /&gt;
The main features of the package are:&lt;br /&gt;
&lt;br /&gt;
*Only function values are required for the optimization.&lt;br /&gt;
*The user must provide a separate program that can evaluate the objective and nonlinear constraint functions at a given point. &lt;br /&gt;
*A robust implementation of the Generating Set Search (GSS) solver is supplied, including the capability to handle linear constraints. &lt;br /&gt;
*Multiple solvers can run simultaneously and are easily configured to share information.&lt;br /&gt;
*Solvers may share a cache of computed function and constraint evaluations to eliminate duplicate work.&lt;br /&gt;
*Solvers can initiate and control sub-problems.&lt;br /&gt;
More information about our installation can be found here [[HOPSACK]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;HONDO PLUS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Hondo Plus is a versatile electronic structure code that combines work from&lt;br /&gt;
the original Hondo application developed by Harry King in the lab of Michel Dupuis&lt;br /&gt;
and John Rys, and that of numerous subsequent contributors. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It is currently distributed from the research lab of Dr. Donald Truhlar at the University &lt;br /&gt;
of Minnesota.  Part of the advantage of Hondo Plus is the availability of source&lt;br /&gt;
implementations of a wide variety of model chemistries developed over its lifetime&lt;br /&gt;
that researchers can adapt to their particular needs.  The license to use the code requires&lt;br /&gt;
a literature citation which is documented in the Hondo Plus 5.1 manual found&lt;br /&gt;
at:&lt;br /&gt;
&lt;br /&gt;
http://comp.chem.umn.edu/hondoplus/HONDOPLUS_Manual_v5.1.2007.2.17.pdf &lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[HONDO PLUS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;HUMAnN2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
HUMAnN is a pipeline for efficiently and accurately profiling the presence/absence and abundance of microbial pathways in a community from metagenomic or metatranscriptomic sequencing data (typically millions of short DNA/RNA reads). HUMAnN2 is the next generation of HUMAnN (HMP Unified Metabolic Analysis Network). Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/humann2&lt;br /&gt;
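As a minimal sketch (the module name, thread count, and file names are assumptions), a single-sample HUMAnN2 run on the cluster looks like:&lt;br /&gt;

```shell
# Load the HUMAnN2 environment (module name is an assumption)
module load humann2

# Profile one metagenomic sample; pathway and gene-family tables
# are written into the output directory
humann2 --input sample.fastq --output sample_out --threads 4
```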
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==I==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;IMa2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The IMa2 application performs basic ‘Isolation with Migration’ calculations using Bayesian inference and Markov &lt;br /&gt;
chain Monte Carlo methods. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The only major conceptual addition to IMa2 that makes it different from the&lt;br /&gt;
original IMa  program is that it can handle data from multiple populations. This requires that the user &lt;br /&gt;
specify a phylogenetic tree. Importantly, the tree must be rooted, and the sequence in time of internal&lt;br /&gt;
nodes must be known and specified. More information on the IMa2 and IMa can be found in the user&lt;br /&gt;
manual here [http://lifesci.rutgers.edu/%7Eheylab/ProgramsandData/Programs/IMa2/Using_IMa2_8_24_2011.pdf].&lt;br /&gt;
Information about our installation can be found here [[IMA2]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;I-TASSER&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
I-TASSER is a platform for protein structure and function predictions. 3D models are built based on multiple-threading alignments by LOMETS and iterative template fragment assembly simulations; function insights are derived by matching the 3D models with the BioLiP protein function database. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/itasser&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==J==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;JULIA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Julia is a high-level, high-performance dynamic programming language for technical computing, with syntax that is familiar to users of other technical computing environments. Julia is installed on Penzias.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==L==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;LAMARC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
LAMARC is a program which estimates population-genetic parameters such as population size, population growth rate,&lt;br /&gt;
recombination rate, and migration rates.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It approximates a summation over all possible genealogies that could explain&lt;br /&gt;
the observed sample, which may be sequence, SNP, microsatellite, or electrophoretic data.  LAMARC and its sister program&lt;br /&gt;
MIGRATE are successor programs to the older programs Coalesce, Fluctuate, and Recombine, which are no longer being&lt;br /&gt;
supported.  These programs are memory-intensive, but can run effectively on workstations. They are supported on a variety&lt;br /&gt;
of operating systems.  For more detail on LAMARC please visit the website here [http://evolution.genetics.washington.edu/lamarc/index.html],&lt;br /&gt;
read this paper [http://evolution.genetics.washington.edu/lamarc/download/bioinformatics2006-lamarc2.0.pdf], and look&lt;br /&gt;
at the documentation here [http://evolution.genetics.washington.edu/lamarc/documentation/index.html]. More information&lt;br /&gt;
about our installation can be found here [[LAMARC]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;LAMMPS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
LAMMPS is a classical molecular dynamics code that models an ensemble of particles in a liquid, solid, or gaseous state.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It can model atomic, polymeric, biological, metallic, granular, and coarse-grained systems using a variety of force fields and boundary conditions.  &lt;br /&gt;
LAMMPS runs efficiently on single-processor desktop or laptop machines, but is also designed for parallel computers, including clusters with and without GPUs. &lt;br /&gt;
It will run on any parallel machine that compiles C++ and supports the MPI message-passing library. This includes distributed- or shared-memory parallel &lt;br /&gt;
machines and Beowulf-style clusters. LAMMPS can model systems with only a few particles up to millions or billions. LAMMPS is a freely-available open-source &lt;br /&gt;
code, distributed under the terms of the GNU Public License, which means you can use or modify the code however you wish.  LAMMPS is designed to be easy to &lt;br /&gt;
modify or extend with new capabilities, such as new force fields, atom types, boundary conditions, or diagnostics. A complete description of LAMMPS can be found &lt;br /&gt;
in its on-line manual here [http://lammps.sandia.gov/doc/Manual.html] or from the full PDF manual here [http://lammps.sandia.gov/doc/Manual.pdf]. Information&lt;br /&gt;
about our installation can be found here [[LAMMPS]].&lt;br /&gt;
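As a sketch under assumed names (the module name, partition, and input script are assumptions), a parallel LAMMPS batch job might be submitted along these lines:&lt;br /&gt;

```shell
#!/bin/bash
#SBATCH --partition=production   # assumed partition name
#SBATCH --job-name=lammps_run
#SBATCH --ntasks=16

module load lammps   # module name is an assumption

cd $SLURM_SUBMIT_DIR

# -in selects the LAMMPS input script; thermodynamic output goes to
# log.lammps by default
mpirun -np 16 lmp -in in.lj
```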
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;LS-DYNA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From its early development in the 1970s, LS-DYNA has evolved into a general purpose material&lt;br /&gt;
stress, collision, and crash analysis program with many built-in material and structural element&lt;br /&gt;
models. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In recent years, the code has also been adapted for both OpenMP and MPI parallel execution&lt;br /&gt;
on a variety of platforms.  The most recent version, LS-DYNA 7.1.2, is installed on &lt;br /&gt;
ANDY at the CUNY HPC Center under an academic license held by the City College of New York.&lt;br /&gt;
The use of this license to do work that is commercial in any way is prohibited.&lt;br /&gt;
&lt;br /&gt;
Details on LS-DYNA&#039;s use, input deck construction, and execution options can be found in the LS-DYNA&lt;br /&gt;
manual here [http://ftp.lstc.com/user/manuals/ls-dyna_971_manual_k_rev1.pdf]. All files related&lt;br /&gt;
to the HPC Center installation of version 971 (executables and example inputs) are located in:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
/share/apps/lsdyna/default/[bin,examples]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[LSDYNA]].&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==M==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MAGMA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
MAGMA is a library similar to LAPACK but for hybrid architectures. MAGMA provides implementations for CUDA, Intel Xeon Phi, and OpenCL. &lt;br /&gt;
On CUNY HPCC systems, MAGMA is installed in its CUDA variant only on Penzias.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MATHEMATICA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
“Mathematica” is a fully integrated technical computing system that combines fast, high-precision numerical and symbolic computation with data visualization and programming capabilities. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Mathematica is currently installed on the CUNY HPC Center&#039;s ANDY cluster (andy.csi.cuny.edu) and the KARLE standalone server (karle.csi.cuny.edu). The basics of running Mathematica on CUNY HPC systems are presented here.  Additional information on how to use Mathematica can be found at http://www.wolfram.com/learningcenter/&lt;br /&gt;
&lt;br /&gt;
More information is available in this wiki, find it here [[MATHEMATICA]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MATLAB&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The MATLAB high-performance language for technical computing&lt;br /&gt;
integrates computation, visualization, and programming in an&lt;br /&gt;
easy-to-use environment where problems and solutions are expressed in&lt;br /&gt;
familiar mathematical notation.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Typical uses include:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Math and computation&lt;br /&gt;
&lt;br /&gt;
Algorithm development&lt;br /&gt;
&lt;br /&gt;
Data acquisition&lt;br /&gt;
&lt;br /&gt;
Modeling, simulation, and prototyping&lt;br /&gt;
&lt;br /&gt;
Data analysis, exploration, and visualization&lt;br /&gt;
&lt;br /&gt;
Scientific and engineering graphics&lt;br /&gt;
&lt;br /&gt;
Application development, including graphical user interface building&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[MATLAB]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MET (Model Evaluation Tools)&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MET was developed by the National Center for Atmospheric Research (NCAR) Developmental Testbed Center (DTC) through the generous support of the U.S. Air Force Weather Agency (AFWA) and the National Oceanic and Atmospheric Administration (NOAA).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;MET is designed to be a highly-configurable, state-of-the-art suite of verification tools. It was developed using output from the Weather Research and Forecasting (WRF) modeling system but may be applied to the output of other modeling systems as well.&lt;br /&gt;
&lt;br /&gt;
MET provides a variety of verification techniques, including:&lt;br /&gt;
&lt;br /&gt;
*Standard verification scores comparing gridded model data to point-based observations&lt;br /&gt;
*Standard verification scores comparing gridded model data to gridded observations&lt;br /&gt;
*Spatial verification methods comparing gridded model data to gridded observations using neighborhood, object-based, and intensity-scale decomposition approaches&lt;br /&gt;
*Probabilistic verification methods comparing gridded model data to point-based or gridded observations&lt;br /&gt;
&lt;br /&gt;
More information about use and set-up can be found here [[MET]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Migrate&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Migrate estimates population parameters, effective population sizes and migration rates&lt;br /&gt;
of n populations, using genetic data.  It uses a coalescent theory approach taking into&lt;br /&gt;
account the history of mutations and the uncertainty of the genealogy.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;The estimates of the parameter values are obtained by either a maximum likelihood (ML) approach or Bayesian&lt;br /&gt;
inference (BI). Migrate&#039;s output is presented in a text file and in a PDF file; the PDF file&lt;br /&gt;
will eventually contain all possible analyses, including histograms of posterior distributions.&lt;br /&gt;
More information about our installation can be found here [[MIGRATE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MPFR&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The MPFR library is a C library for multiple-precision floating-point computations with correct rounding. MPFR has been continuously supported by&lt;br /&gt;
INRIA, and the current main authors come from the Caramel and AriC project-teams at Loria (Nancy, France) and LIP (Lyon, France), respectively; see&lt;br /&gt;
more on the credit page.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
MPFR is based on the GMP multiple-precision library. The main goal of MPFR is to provide a library for multiple-precision&lt;br /&gt;
floating-point computation which is both efficient and has well-defined semantics. It borrows the good ideas from the ANSI/IEEE-754 standard for&lt;br /&gt;
double-precision floating-point arithmetic (53-bit significand). The library is installed on PENZIAS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MRBAYES&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MrBayes is a program for the Bayesian estimation of phylogeny.  Bayesian inference of&lt;br /&gt;
phylogeny is based upon a quantity called the posterior probability distribution of trees,&lt;br /&gt;
which is the probability of a tree conditioned on certain observations.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The conditioning is&lt;br /&gt;
accomplished using Bayes&#039;s theorem. The posterior probability distribution of trees is&lt;br /&gt;
impossible to calculate analytically; instead, MrBayes uses a simulation technique called&lt;br /&gt;
Markov chain Monte Carlo (or MCMC) to approximate the posterior probabilities of trees.&lt;br /&gt;
More information about our installation can be found here [[MRBAYES]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;msABC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
msABC is a program for simulating various neutral evolutionary demographic scenarios&lt;br /&gt;
based on the software ms (Hudson 2002). msABC extends ms, calculating a multitude of&lt;br /&gt;
summary statistics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Therefore, msABC is suitable for performing the sampling step of an&lt;br /&gt;
Approximate Bayesian Computation analysis (ABC), under various neutral demographic&lt;br /&gt;
models. The main advantages of msABC are (i) use of various prior distributions, such as&lt;br /&gt;
uniform, Gaussian, log-normal, and gamma, (ii) implementation of a multitude of summary statistics&lt;br /&gt;
for one or more populations, (iii) an efficient implementation, which allows the analysis of&lt;br /&gt;
hundreds of loci and chromosomes even on a single computer, and (iv) extended flexibility, such&lt;br /&gt;
as simulation of loci of variable size and simulation of missing data.&lt;br /&gt;
More information about our installation can be found here [[msABC]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MSMS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MSMS is a tool to generate sequence samples under both neutral models and single locus selection models.&lt;br /&gt;
MSMS permits the full range of demographic models provided by its relative MS (Hudson, 2002).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
In particular, it allows for multiple demes with arbitrary migration patterns, population growth and decay in each deme, and&lt;br /&gt;
for population splits and mergers. Selection (including dominance) can depend on the deme and also change&lt;br /&gt;
with time.&lt;br /&gt;
More information about our installation can be found here [[MSMS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==N==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;NAMD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NAMD is a parallel molecular dynamics code designed for high-performance simulation&lt;br /&gt;
of large biomolecular systems. [http://www.ks.uiuc.edu/Research/namd].&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The main server for molecular dynamics calculations is PENZIAS, which supports both GPU and non-GPU versions of NAMD.&lt;br /&gt;
However, the MPI-only (no GPU support) parallel versions of NAMD are also installed on SALK and ANDY.&lt;br /&gt;
More information about our installation can be found here [[NAMD]]&lt;br /&gt;
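&lt;br /&gt;
As a rough sketch only (the configuration file name below is a placeholder, and the exact module and launch commands differ per machine; see [[NAMD]] for the real ones), a parallel NAMD run is typically launched as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# run NAMD on 8 cores with the simulation described in mysim.conf&lt;br /&gt;
charmrun +p8 namd2 mysim.conf &amp;gt; mysim.log&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;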
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Network Simulator-2 (NS2)&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NS2 is a discrete event simulator targeted at networking research. NS2 provides&lt;br /&gt;
substantial support for simulation of TCP, routing, and multicast protocols over&lt;br /&gt;
wired and wireless (local and satellite) networks.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It is installed on BOB at the CUNY HPC Center. For more detailed information look [http://www.isi.edu/nsnam/ns/ here].&lt;br /&gt;
More information about our installation can be found here [[NS2]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;NWChem&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NWChem is an ab initio computational chemistry software package which also includes molecular dynamics (MM, MD) and coupled, quantum mechanical and molecular dynamics functionality (QM-MD).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
NWChem has been developed by the Molecular Sciences Software group at the Department of Energy&#039;s EMSL. The software is available on PENZIAS and ANDY.&lt;br /&gt;
More information about our installation can be found here [[NWChem]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==O== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Octopus&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Octopus is a pseudopotential real-space package aimed at the simulation of the electron-ion dynamics of one-, two-, and three-dimensional finite systems subject to time-dependent electromagnetic fields.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The program is based on time-dependent density-functional theory (TDDFT) in the Kohn-Sham scheme. All quantities are expanded in a regular mesh in real space, and the simulations are performed in real time. The program has been successfully used to calculate linear and non-linear absorption spectra, harmonic spectra, laser induced fragmentation, etc. of a variety of systems.&lt;br /&gt;
More information about our installation can be found here [[OCTOPUS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;OpenMM&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenMM is both a library and a stand-alone application which provides tools for modern molecular modeling simulation. As a library it can be hooked into any code, allowing that code to do molecular modeling with minimal extra coding.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Moreover, OpenMM places a strong emphasis on hardware acceleration via GPUs, thus providing not just a consistent API but much greater performance than just about any other code available. OpenMM was developed as part of the Physics-Based Simulation project under project leader Prof. Pande.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;OpenFOAM&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenFOAM is, before everything else, a library that users may incorporate into their own code(s). OpenFOAM is installed on PENZIAS.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
More information about our installation can be found here [[OpenFOAM]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;OpenSees&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenSees, the Open System for Earthquake Engineering Simulation, is an object-oriented, open source software framework.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It allows users to create both serial and parallel finite element computer applications for simulating the response of structural and geotechnical systems subjected to earthquakes and other hazards. OpenSees is primarily written in C++ and uses several Fortran and C numerical libraries for linear equation solving, and material and element routines. The software is installed on PENZIAS.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;ORCA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ORCA is an electronic structure program capable of carrying out geometry optimizations and of predicting a large number of spectroscopic parameters at different levels of theory.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Besides Hartree-Fock theory, density functional theory (DFT), and semiempirical methods, high-level ab initio quantum chemical methods, based on configuration interaction and coupled cluster methods, are included in ORCA to an increasing degree.&lt;br /&gt;
More information about our installation can be found here [[ORCA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==P== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;ParGAP&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ParGAP is built on top of the GAP system. The latter is a system for computational discrete algebra, with particular emphasis on computational group theory. GAP provides a programming language, a library of thousands of functions implementing algebraic algorithms written in the GAP language, as well as large data libraries of algebraic objects.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The ParGAP (Parallel GAP) package itself provides a way of writing parallel programs using the GAP language. Former names of the package were ParGAP/MPI and GAP/MPI; the word MPI refers to Message Passing Interface, a well-known standard for parallelism. ParGAP is based on the MPI standard, and this distribution includes a subset implementation of MPI, to provide a portable layer with a high level interface to BSD sockets.&lt;br /&gt;
More information about our installation can be found here [[ParGAP]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;POPABC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PopABC is a computer package to estimate historical demographic parameters of closely related species/populations (e.g. population size, migration rate, mutation rate, recombination rate, splitting events) within an isolation-with-migration model.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The software performs coalescent simulation in the framework of approximate Bayesian computation (ABC, Beaumont et al, 2002). PopABC can also be used to perform Bayesian model choice to discriminate between different demographic scenarios. The program can be used either for research or for education and teaching purposes. Further details and a manual can be found at the POPABC website here [http://code.google.com/p/popabc]&lt;br /&gt;
More information about our installation can be found here [[POPABC]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;PHOENICS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PHOENICS is an integrated Computational Fluid Dynamics (CFD) package for the preparation, simulation, and visualization of&lt;br /&gt;
processes involving fluid flow, heat or mass transfer, chemical reaction, and/or combustion in engineering equipment, building&lt;br /&gt;
design, and the environment.  More detail is available at the CHAM website, here http://www.cham.co.uk. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Although we expect most users to pre- and post-process their jobs on office-local clients, the CUNY HPC Center has installed&lt;br /&gt;
the Unix version of the &#039;&#039;entire&#039;&#039; PHOENICS package on ANDY.   PHOENICS is installed in /share/apps/phoenics/default where all&lt;br /&gt;
the standard PHOENICS directories are located (d_allpro, d_earth, d_enviro, d_photo, d_priv1, d_satell, etc.).  Of particular interest&lt;br /&gt;
on ANDY is the MPI parallel version of the &#039;earth&#039; executable &#039;parexe&#039; which makes full use of the parallel processing power of the &lt;br /&gt;
ANDY cluster for larger individual jobs.  While the parallel scaling properties of PHOENICS jobs will vary depending on the job size,&lt;br /&gt;
processor type, and the cluster interconnect, larger workloads will generally scale and run efficiently on 8 to 32 processors,&lt;br /&gt;
while smaller problems will scale efficiently only up to about 4 processors.  More detail on parallel PHOENICS is available at&lt;br /&gt;
http://www.cham.co.uk/products/parallel.php.   Aside from the tightly coupled MPI parallelism of &#039;parexe&#039;, users can run multiple&lt;br /&gt;
instances of the non-parallel modules on ANDY (including the serial &#039;earexe&#039; module) when a parametric approach can be used&lt;br /&gt;
to solve their problems.&lt;br /&gt;
More information about our installation can be found here [[PHOENICS]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;PHRAP-PHRED&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PHRAP and PHRED are part of the DNA sequence analysis tool set that also includes the programs&lt;br /&gt;
CROSSMATCH and SWAT. These tools are described in detail here [http://www.phrap.org/phredphrapconsed.html],&lt;br /&gt;
but a brief description of both, extracted from their manuals, follows.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
PHRED and PHRAP (along with CONSED) can be used for both small sequence assemblies and larger shotgun analyses. This makes the&lt;br /&gt;
tools a perhaps under-utilized set for smaller non-genomic groups.  Some variables may need to be adjusted,&lt;br /&gt;
particularly in CONSED, but researchers who have multiple sequences from a small locus can use the&lt;br /&gt;
suite, starting from their chromatogram files.  &lt;br /&gt;
More information about our installation can be found here [[PHRAP-PHRED]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;PyRAD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Reduced-representation genomic sequence data (e.g., RADseq, GBS, ddRAD) are commonly used to study population-level research questions and consequently most software packages for assembling or analyzing such data are designed for sequences with little variation across samples.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Phylogenetic analyses typically include species with deeper divergence times (more variable loci across samples) and thus a different approach to clustering and identifying orthologs will perform better. pyRAD is intended for use with any type of restriction-site associated DNA. It currently supports RAD, ddRAD, PE-ddRAD, GBS, PE-GBS, EzRAD, PE-EzRAD, 2B-RAD, nextRAD, and can be extended to other types.&lt;br /&gt;
More information about our installation can be found here [[PyRAD]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Python&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Python is a programming language that lets you work more quickly and integrate your systems more effectively. You can learn to use Python and see almost immediate gains in productivity and lower maintenance costs. [http://www.python.org/]&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
There are two supported versions installed on the ANDY system:&lt;br /&gt;
&lt;br /&gt;
* Python 3.1.3 located under /share/apps/python/3.1.3/bin&lt;br /&gt;
* Python 2.7.3 located under /share/apps/epd/7.3-2/bin&lt;br /&gt;
&lt;br /&gt;
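To select one of these interpreters explicitly, call it by its full path (the exact binary names may vary slightly; check the directories listed above):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/share/apps/python/3.1.3/bin/python3 --version&lt;br /&gt;
/share/apps/epd/7.3-2/bin/python --version&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;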
More information about our installation can be found here [[PYTHON]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Installing Python packages&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Users may install Python packages/modules in their own space. Many packages available in Python repositories can be installed easily with the PIP manager, which is available in any of the Anaconda and Miniconda builds.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Users must remember that running PIP without first loading a Python module will install packages against the system Python, which exists on the login node only. The Python interpreters available on all nodes (after loading a module) are installed under /share/usr/compilers/python. Thus, when installing packages in user space, it is very important to follow the procedure outlined below. The following examples demonstrate how users can install the package &amp;quot;guppy&amp;quot; in their own space:&lt;br /&gt;
&lt;br /&gt;
For Python 2.7.13 in the Anaconda build:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load python/2.7.13_anaconda&lt;br /&gt;
pip install guppy --user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For Python 3.6.0 in the Anaconda build:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load python/3.6.0_anaconda&lt;br /&gt;
pip install guppy --user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For Python 2.7.13 in Miniconda 2:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load python/miniconda2&lt;br /&gt;
pip install guppy --user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For Python 3.6.0 in Miniconda 3:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load python/miniconda3&lt;br /&gt;
pip install guppy --user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check whether the package is properly installed, type:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
pip list | grep guppy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
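&lt;br /&gt;
Alternatively, a package can be checked by importing it directly with the same interpreter (shown here for &amp;quot;guppy&amp;quot;; substitute the package you installed):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
python -c &#039;import guppy; print(guppy.__file__)&#039;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;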
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==Q== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;QIIME&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
QIIME (pronounced &amp;quot;chime&amp;quot;) stands for Quantitative Insights Into Microbial Ecology. QIIME is a pipeline application that uses numerous third-party applications.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
QIIME takes users from their raw sequencing output through initial analyses such as OTU picking, taxonomic assignment, and construction of phylogenetic trees from representative sequences of OTUs, and through downstream statistical analysis, visualization, and production of publication-quality graphics.&lt;br /&gt;
More information about our installation can be found here [[QIIME]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==R== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;R&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
R is a free software environment for statistical computing and graphics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:15px;&amp;quot;&amp;gt;General Notes&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The R language has become a de facto standard among statisticians for the development of statistical software, and it is widely used for data analysis. R is available on the following HPCC servers: KARLE, PENZIAS, APPEL, and ANDY. KARLE is the only machine where R can be used without submitting jobs to the SLURM manager. On all other systems users must submit their R jobs via the SLURM batch scheduler.&lt;br /&gt;
More information about our installation can be found here [[R]]&lt;br /&gt;
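&lt;br /&gt;
As a sketch of batch usage only (the module name, job parameters, and script name below are hypothetical; see [[R]] for the exact values on each system), a minimal SLURM script might look like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=r_test&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
&lt;br /&gt;
module load r            # hypothetical module name&lt;br /&gt;
Rscript my_analysis.R&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The script would then be submitted with &#039;&#039;sbatch scriptname&#039;&#039;.&lt;br /&gt;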
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;RAXML&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Randomized Axelerated Maximum Likelihood (RAxML) is a program for sequential and parallel&lt;br /&gt;
maximum likelihood-based inference of large phylogenetic trees. It is a descendant of fastDNAml,&lt;br /&gt;
which in turn was derived from Joe Felsenstein&#039;s DNAml, which is part of the PHYLIP package.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
RAxML is installed at the CUNY HPC Center on ANDY, where multiple versions are available in both serial and MPI-parallel builds. The MPI-parallel version should be run on four or more cores and is also installed on PENZIAS.&lt;br /&gt;
More information about our installation can be found here [[RAXML]]&lt;br /&gt;
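&lt;br /&gt;
As a sketch of an MPI run (the binary name, input file, and core count below are placeholders; check the name of the installed executable on the system), an invocation might look like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# 4 MPI ranks; -s input alignment, -m substitution model,&lt;br /&gt;
# -p random seed, -N number of searches, -n run name&lt;br /&gt;
mpirun -np 4 raxmlHPC-MPI -s alignment.phy -m GTRGAMMA -p 12345 -N 100 -n run1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;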
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==S== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;SAGE&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Sage can be used to study elementary and advanced, pure and applied mathematics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
This covers a huge range of mathematics, including basic algebra, calculus, elementary to very&lt;br /&gt;
advanced number theory, cryptography, numerical computation, commutative algebra, group&lt;br /&gt;
theory, combinatorics, graph theory, exact linear algebra and much more.&lt;br /&gt;
More information about our installation can be found here [[SAGE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;SAMTOOLS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
SAMTOOLS provides various utilities for manipulating alignments in the SAM format, including sorting,&lt;br /&gt;
merging, indexing and generating alignments in a per-position format.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
SAM (Sequence Alignment/Map) is a generic format for storing large nucleotide sequence alignments. SAM&lt;br /&gt;
aims to be a compact format that:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Is flexible enough to store all the alignment information generated by various alignment programs;&lt;br /&gt;
&lt;br /&gt;
Is simple enough to be easily generated by alignment programs or converted from existing formats;&lt;br /&gt;
&lt;br /&gt;
Allows most operations on the alignment to work without loading the whole alignment into memory;&lt;br /&gt;
&lt;br /&gt;
Allows the file to be indexed by genomic position to efficiently retrieve all reads aligning to a locus.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[SAMTOOLS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;SAS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
SAS (pronounced &amp;quot;sass&amp;quot;, originally Statistical Analysis System) is an integrated system of software products provided by SAS Institute Inc.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It enables the programmer to perform:&lt;br /&gt;
:*data entry, retrieval, management, and mining&lt;br /&gt;
:*report writing and graphics&lt;br /&gt;
:*statistical analysis&lt;br /&gt;
:*business planning, forecasting, and decision support&lt;br /&gt;
:*operations research and project management&lt;br /&gt;
:*quality improvement&lt;br /&gt;
:*applications development&lt;br /&gt;
:*data warehousing (extract, transform, load)&lt;br /&gt;
:*platform-independent and remote computing&lt;br /&gt;
More information about our installation can be found here [[SAS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Stata/MP&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Stata is a complete, integrated statistical package that provides tools for data analysis, data management, and graphics. Stata/MP takes advantage of multiprocessor computers. The CUNY HPC Center is licensed to run Stata on up to 8 cores. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Currently Stata/MP is available for users on Karle (karle.csi.cuny.edu). &lt;br /&gt;
More information about our installation can be found here [[STATA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Structurama&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Structurama is a program for inferring population structure from genetic data. The program assumes that the sampled loci&lt;br /&gt;
are in linkage equilibrium and that the allele frequencies for each population are drawn from a Dirichlet probability distribution. Two different models for population structure are implemented.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
First, Structurama offers the method of Pritchard et al. (2000), in which the number of populations is considered fixed. The program also allows the number of populations to be a random variable following a Dirichlet process prior (Pella and Masuda, 2006; Huelsenbeck and Andolfatto, 2007).&lt;br /&gt;
More information about our installation can be found here [[STRUCTURAMA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Structure&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The program Structure is a free software package for using multi-locus genotype data to investigate&lt;br /&gt;
population structure.  Its uses include inferring the presence of distinct populations, assigning individuals&lt;br /&gt;
to populations, studying hybrid zones, identifying migrants and admixed individuals, and estimating&lt;br /&gt;
population allele frequencies in situations where many individuals are migrants or admixed.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;It can be applied to most of the commonly used genetic markers, including SNPs, microsatellites, RFLPs, and AFLPs. More detailed information about Structure can be found at the web site here [http://pritch.bsd.uchicago.edu/structure.html]. Structure is installed on ANDY at the CUNY HPC Center and is a serial program. &lt;br /&gt;
More information about our installation can be found here [[STRUCTURE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==T== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Thrust Library (CUDA)&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Thrust is a C++ template library for CUDA based on the Standard Template Library (STL). Thrust allows you&lt;br /&gt;
to implement high performance parallel applications with minimal programming effort through a high-level&lt;br /&gt;
interface that is fully interoperable with CUDA C.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
As of CUDA 4.0, Thrust has been integrated into the default&lt;br /&gt;
CUDA distribution. The HPC Center currently runs CUDA as the default on PENZIAS, which includes the&lt;br /&gt;
Thrust library. More information about our installation can be found here [[THRUST]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;TOPHAT&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
TOPHAT is a fast splice junction mapper for RNA-Seq reads. It aligns RNA-Seq reads to mammalian-sized&lt;br /&gt;
genomes using the ultra high-throughput short read aligner Bowtie, and then analyzes the mapping results&lt;br /&gt;
to identify splice junctions between exons.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
TOPHAT is part of a sequence alignment and analysis tool chain developed at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics and Computational Biology.&lt;br /&gt;
More information about our installation can be found here [[TOPHAT]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Trinity&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Trinity, developed at the Broad Institute and the Hebrew University of Jerusalem, represents a novel method for the efficient and robust de novo reconstruction of transcriptomes from RNA-seq data.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Trinity combines three independent software modules: Inchworm, Chrysalis, and Butterfly, applied sequentially to process large volumes of RNA-seq reads. Trinity partitions the sequence data into many individual de Bruijn graphs, each representing the transcriptional complexity at a given gene or locus, and then processes each graph independently to extract full-length splicing isoforms and to tease apart transcripts derived from paralogous genes.&lt;br /&gt;
More information about our installation can be found here [[TRINITY]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==U== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;USEARCH&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
USEARCH is a unique sequence analysis tool with thousands of users worldwide.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
USEARCH offers search and clustering algorithms that are often orders of magnitude faster than BLAST. &lt;br /&gt;
More information about our installation can be found here [[USEARCH]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==V== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;VELVET&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Velvet is a set of algorithms for &#039;&#039;de novo&#039;&#039; short read assembly using de Bruijn graphs. It was developed at the European Bioinformatics Institute, Cambridge, UK. More information about our installation can be found here [[VELVET]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;VSEARCH&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
VSEARCH is an open-source alternative to USEARCH.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
VSEARCH stands for vectorized search, as the tool takes advantage of parallelism in the form of SIMD vectorization as well as multiple threads to perform accurate alignments at high speed. VSEARCH uses an optimal global aligner (full dynamic programming Needleman-Wunsch), in contrast to USEARCH which by default uses a heuristic seed and extend aligner. This usually results in more accurate alignments and overall improved sensitivity (recall) with VSEARCH, especially for alignments with gaps. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Additional details on VSEARCH can be found at: [https://github.com/torognes/vsearch this link]&lt;br /&gt;
&lt;br /&gt;
VSEARCH is installed on the PENZIAS HPC cluster. To start using VSEARCH, load the corresponding module first:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load vsearch  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
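A typical invocation might then look like the following (a sketch only; the file names and the 97% identity threshold are placeholders to adapt to your data):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vsearch --cluster_fast reads.fasta --id 0.97 --centroids centroids.fasta&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;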
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;VMD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
VMD is a molecular visualization program for displaying, animating, and analyzing large biomolecular systems using 3-D graphics and built-in scripting.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It was developed by The Theoretical and Computational Biophysics Group at the University of Illinois. It is documented on the [http://www.ks.uiuc.edu/Research/vmd/ TCB&#039;s homepage].&lt;br /&gt;
&lt;br /&gt;
VMD is installed on Karle. To use it from the command-line interface, log in to Karle as usual and start VMD by typing &amp;quot;vmd&amp;quot; followed by Return, or alternatively use the full path: &lt;br /&gt;
&amp;quot;/share/apps/vmd/default/bin/vmd&amp;quot;&lt;br /&gt;
&lt;br /&gt;
In order to use VMD in GUI mode, log in to Karle with the -X option (see [http://wiki.csi.cuny.edu/cunyhpc/index.php/Main_Page#X11_Forwarding_or_Tunneling this article] &lt;br /&gt;
for details) and start VMD as described above.&lt;br /&gt;
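For example (a sketch; replace &amp;lt;userid&amp;gt; with your own login name):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -X &amp;lt;userid&amp;gt;@karle.csi.cuny.edu&lt;br /&gt;
vmd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;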
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==W== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;WRF&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The Weather Research and Forecasting (WRF) model is a computer program with dual use for both weather&lt;br /&gt;
forecasting and weather research.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It was created through a partnership that includes the National Oceanic and Atmospheric&lt;br /&gt;
Administration (NOAA), the National Center for Atmospheric Research (NCAR), and more than 150 other organizations&lt;br /&gt;
and universities in the United States and abroad. WRF is the latest numerical model and application to be adopted by NOAA&#039;s&lt;br /&gt;
National Weather Service as well as the U.S. military and private meteorological services. It is also being adopted by&lt;br /&gt;
government and private meteorological services worldwide. More information about our installation can be found here [[WRF]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==X== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Xmgrace&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Grace is a WYSIWYG 2D plotting tool for the X Window System and Motif. Xmgrace is developed at the Plasma Laboratory, Weizmann Institute of Science. More information about its capabilities can be found at the web page http://plasma-gate.weizmann.ac.il/Grace/&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Grace is installed on Karle. To use it from the command-line interface, log in to Karle as usual and start Grace by typing &amp;quot;xmgrace&amp;quot; followed by Return, or alternatively use the full path: &amp;quot;/share/apps/xmgrace/default/grace/bin/xmgrace&amp;quot;&lt;br /&gt;
In order to use Grace in GUI mode, log in to Karle with the -X option (see [http://wiki.csi.cuny.edu/cunyhpc/index.php/Main_Page#X11_Forwarding_or_Tunneling this article] &lt;br /&gt;
for details) and start Xmgrace as described above.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Administrative_Information&amp;diff=168</id>
		<title>Administrative Information</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Administrative_Information&amp;diff=168"/>
		<updated>2022-11-18T15:22:45Z</updated>

		<summary type="html">&lt;p&gt;James: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
==How to get an account==&lt;br /&gt;
===User accounts overview===&lt;br /&gt;
HPCC supports six types of accounts: &#039;&#039;&#039;a)&#039;&#039;&#039; CUNY full-time faculty and research staff, &#039;&#039;&#039;b)&#039;&#039;&#039; CUNY adjunct faculty and master&#039;s students, &#039;&#039;&#039;c)&#039;&#039;&#039; doctoral graduate students, &#039;&#039;&#039;d)&#039;&#039;&#039; undergraduate students, &#039;&#039;&#039;e)&#039;&#039;&#039; collaborators from other universities working with CUNY faculty, and &#039;&#039;&#039;f)&#039;&#039;&#039; public- and private-sector partners directly collaborating with a CUNY faculty member/department/school/center. Users from all groups &#039;&#039;&#039;a) to f)&#039;&#039;&#039; may receive authorization to use the CUNY HPC Center systems.  Applications for accounts are accepted at any time. Accounts in groups &#039;&#039;&#039;a)&#039;&#039;&#039; and &#039;&#039;&#039;c)&#039;&#039;&#039; are valid for one year and must be renewed on or before September 30th each year. Accounts in groups &#039;&#039;&#039;b)&#039;&#039;&#039; and &#039;&#039;&#039;d)&#039;&#039;&#039; are valid for one semester and must be renewed at the beginning of the fall and spring semesters. Accounts in groups &#039;&#039;&#039;e)&#039;&#039;&#039; and &#039;&#039;&#039;f)&#039;&#039;&#039; are good for the duration of the collaborative project.  In addition, non-CUNY researchers can obtain a research account at CUNY-HPCC and use the resources by paying a cost-recovery fee (proportional to use). Please contact the director of HPCC for details. &lt;br /&gt;
&lt;br /&gt;
A user account is issued to an &#039;&#039;&#039;&#039;&#039;individual user&#039;&#039;&#039;.&#039;&#039; Accounts are &#039;&#039;&#039;not to be shared&#039;&#039;&#039;.  Users are responsible for choosing secure passwords and for protecting them. Passwords &#039;&#039;&#039;are not to be shared.&#039;&#039;&#039; Users from groups &#039;&#039;&#039;a), b), c)&#039;&#039;&#039; and &#039;&#039;&#039;d)&#039;&#039;&#039; must have and use a &#039;&#039;&#039;valid CUNY e-mail address&#039;&#039;&#039; when registering with the HPC Center.  Public mail accounts such as those on gmail, hotmail, yahoo, outlook, etc. can be used only as a second e-mail. Users in group &#039;&#039;&#039;e)&#039;&#039;&#039; must provide an e-mail address that can be verified and state the CUNY collaborator&#039;s e-mail as the second mailbox. Users from group &#039;&#039;&#039;f)&#039;&#039;&#039; and outside users doing research at CUNY HPCC must provide a valid professional e-mail address, as well as information about their CUNY counterpart(s), including valid CUNY e-mail(s). Note that HPCC &#039;&#039;&#039;will not send&#039;&#039;&#039; warnings or other messages to non-CUNY e-mail addresses, except when needed in an emergency and for registered users from groups &#039;&#039;&#039;e)&#039;&#039;&#039; and &#039;&#039;&#039;f).&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Please submit requests for new accounts directly to the HPC Center via e-mail to hpchelp@csi.cuny.edu.  Please answer all questions before submitting the form back to hpchelp@csi.cuny.edu, and do not forget to provide information about past and pending publications and funded projects. Think carefully about the resources needed and try to estimate them as accurately as possible.  Note that &#039;&#039;&#039;by applying for and obtaining an account, the user agrees to the Center’s [https://cunyhpc.csi.cuny.edu/acceptableuse Acceptable Use Policy] and the [https://cunyhpc.csi.cuny.edu/passwordpolicy User Account and Password Policy].&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In case the above-mentioned link to hpcreg1 is down, users may send an e-mail to hpchelp@csi.cuny.edu with the following information:&lt;br /&gt;
&lt;br /&gt;
1. Full name as stated on the CUNY ID card:&lt;br /&gt;
&lt;br /&gt;
2. For group e) only: full name as stated on the university ID card (e.g. Jeanette Smith):&lt;br /&gt;
&lt;br /&gt;
3. For group f) only: full name as stated on a state or federal ID (e.g. Smith John Peter): &lt;br /&gt;
&lt;br /&gt;
4. CUNY EID and valid CUNY e-mail (e.g. John A. Smith, 22341356, jsmith@csi.cuny.edu): &lt;br /&gt;
&lt;br /&gt;
5. Affiliation within CUNY, campus name and department (e.g. Lehman College, Biology):&lt;br /&gt;
&lt;br /&gt;
6. Affiliation outside CUNY, if any, and valid professional e-mail (e.g. John Doe, Rutgers University, jd@rutgers.edu):&lt;br /&gt;
&lt;br /&gt;
7. Department at above (6) institution: &lt;br /&gt;
&lt;br /&gt;
8. Second CUNY affiliation, campus name and department (e.g. Graduate Center, Biology):&lt;br /&gt;
&lt;br /&gt;
9. E-mail at (6):&lt;br /&gt;
&lt;br /&gt;
10. Academic status (faculty, adjunct faculty, graduate student, undergraduate student, research staff, collaborator of a CUNY researcher, partner, external researcher):&lt;br /&gt;
&lt;br /&gt;
11. Who will be responsible for cost-recovery charges, if any? Name and e-mail: &lt;br /&gt;
&lt;br /&gt;
12. Brief project description and project duration. In case of teaching/participating in class please state class number and semester (e.g. CS 456, fall 2023): &lt;br /&gt;
&lt;br /&gt;
13. Comma-separated list of principal investigator(s) or research advisor(s), with name, status, campus, and department: &lt;br /&gt;
&lt;br /&gt;
14. Resources needed: &lt;br /&gt;
&lt;br /&gt;
- CPU cores&lt;br /&gt;
&lt;br /&gt;
- GPU cores&lt;br /&gt;
&lt;br /&gt;
- CPU hours&lt;br /&gt;
&lt;br /&gt;
- GPU hours&lt;br /&gt;
&lt;br /&gt;
- storage (GB)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
15. Consent that if CUNY-HPCC moves to a fee-for-service model, you or your PI/advisor(s) will secure funds to support your project(s) (fees for CPU time, storage, and backup of data). &lt;br /&gt;
&lt;br /&gt;
16. Consent that financial support for computational resources will be included in all future proposals to funding agencies. &lt;br /&gt;
&lt;br /&gt;
17. Consent that HPCC will be cited properly (see our wiki for details) in all your published work including conferences and talks.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Upon creation, every research user account is provided with a 50 GB home directory (with a maximum of 10,000 files on /global/u) mounted as &amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;. If required, a user may request an increase in the size of their home directory; the HPC Center will endeavor to satisfy reasonable requests. If you expect to have more than 10,000 files, please combine small files into larger single zip archives. Please keep only curated data in your space in order to optimize use of the existing storage. &lt;br /&gt;
&lt;br /&gt;
Student class accounts (group d)) are provided with a 10 GB home directory.  Please note that class accounts and data will be deleted 30 days after the semester ends (unless otherwise agreed upon). Students are responsible for backing up their own data prior to the end of the semester.&lt;br /&gt;
 &lt;br /&gt;
When a user account is established, only the user has read/write access to their files.  The user can change the UNIX permissions to allow others in their group to read or write their files.&lt;br /&gt;
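For example, group read or write access can be granted with standard UNIX commands (a sketch; the file name is a placeholder):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
chmod g+r results.dat   # allow group members to read the file&lt;br /&gt;
chmod g+rw results.dat  # allow group members to read and write it&lt;br /&gt;
ls -l results.dat       # verify the new permissions&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;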
&lt;br /&gt;
Please be sure to notify the HPC Center if user accounts need to be removed or added to a specific group.&lt;br /&gt;
&lt;br /&gt;
===Reset Password ===&lt;br /&gt;
&lt;br /&gt;
Users must use the automatic password reset system: click on [https://hpcauth4.csi.cuny.edu/reset/ Reset Password].  Upon resetting, users will receive their individual security token at the e-mail address registered with HPCC.&lt;br /&gt;
&lt;br /&gt;
===Close of account=== &lt;br /&gt;
If a user would like to close their account, please contact the HPC Center at HPCHelp@csi.cuny.edu. &lt;br /&gt;
Supervisors who would like to modify the access of the researchers and/or students working for them should contact the HPC Center to remove, add, or modify access.&lt;br /&gt;
User accounts that are not accessed or renewed for 2+ years will be purged along with any data associated with the account.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Project accounts===&lt;br /&gt;
Users and/or groups requiring additional disk storage and/or iRods accounts are required to fill out a Project Application Form (PAF). This form is to be filled out by the principal investigator (PI) of the project and will provide project details including, but not limited to: project and grant information, group members requiring access to shared project files, and project needs (disk space, software, etc.). &lt;br /&gt;
&lt;br /&gt;
All members of the group will need to fill out a User Account Form (UAF) before accounts and access can be granted to them. Supervisors and/or their designated project managers will be responsible for providing access and/or limitations to their assigned group members. [Details on this process are described in the Projects section.]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;IN DEVELOPMENT:&lt;br /&gt;
&lt;br /&gt;
The Project Application Form can be found at the following link [http://www.csi.cuny.edu/cunyhpc/Accounts.html Project Application form].&amp;lt;/font color&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Message of the day (MOTD)==&lt;br /&gt;
Users are encouraged to read the &amp;quot;Message of the day&amp;quot; (MOTD), which is displayed to the user upon logging onto a system.  The MOTD provides information on scheduled maintenance times when systems will be unavailable and/or important changes in the environment that are of import to the user community.  The MOTD is the HPC Center’s only efficient mechanism for communicating with the broader user community, as bulk e-mail messages are often blocked by CUNY SPAM filters.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Required citations==&lt;br /&gt;
The CUNY HPC Center appreciates the support it has received from the National Science Foundation (NSF).  It is the policy of NSF that researchers who are funded by NSF or who make use of facilities funded by NSF acknowledge the contribution of NSF by including the following citation in their papers and presentations:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;This research was supported, in part, under National Science Foundation Grants: CNS-0958379, CNS-0855217, ACI-1126113 and OEC-2215760 (2022) and the City University of New York High Performance Computing Center at the College of Staten Island.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The HPC Center, therefore, requests its users to follow this procedure as it helps the Center demonstrate that NSF’s investments aided the research and educational missions of the University.&lt;br /&gt;
&lt;br /&gt;
==Reporting requirements==&lt;br /&gt;
The Center reports on its support of the research and educational community to both NSF and CUNY on an annual basis.  Citations are an important factor that is included in these reports.  Therefore, the Center requests all users to send copies of research papers developed, in part, using the HPC Center resources to [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu].  This also helps the Center to keep abreast of user research requirement directions and needs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==CUNY HPC systems’ naming conventions==&lt;br /&gt;
The Center names its systems after noteworthy CUNY alumni.  There are many good reasons for this:&lt;br /&gt;
&lt;br /&gt;
:•	It honors the accomplishments of these alumni.&lt;br /&gt;
:•	It informs or reminds students of the accomplishments of former CUNY students and, hopefully, inspires them.&lt;br /&gt;
:•	It heightens public awareness of the contributions of these alumni and of the role played by CUNY.&lt;br /&gt;
&lt;br /&gt;
The current systems at the HPC Center are named after Kenneth Appel, Bruce Chizen, Andy Grove, Jerome Karle, Jonas Salk, Robert Kahn, and Arno Penzias.  More information on each of these persons and systems follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;ANDY&#039;&#039;&#039; is named in honor of Dr. Andrew S. Grove, a City College alumnus and one of the founders of the Intel Corporation. It is an SGI cluster with 744 processor cores. &#039;&#039;&#039;ANDY&#039;&#039;&#039; is for jobs using 64 cores or fewer and for Gaussian jobs.  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;APPEL&#039;&#039;&#039; is named in honor of Dr. Kenneth Appel (pronounced ah-PEL), an alumnus of Queens College.  Appel, along with Wolfgang Haken, used computers to assist in proving the 4-color theorem.  Appel said, “Most mathematicians, even as late as the 1970s, had no real interest in learning about computers. It was almost as if those of us who enjoyed playing with computers were doing something nonmathematical or suspect.”  &#039;&#039;&#039;APPEL&#039;&#039;&#039; is an SGI UV300 with 384 cores and 12 terabytes of shared memory—a system nicely configured to solve problems in computational group theory—and group theory was Appel’s area of research.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;CHIZEN&#039;&#039;&#039; is named in honor of Bruce Chizen, former CEO of Adobe, and a Brooklyn College alumnus.  &#039;&#039;&#039;CHIZEN&#039;&#039;&#039; is the system that is used as a gateway to the above HPC systems.  It is not used for computations. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;KARLE&#039;&#039;&#039; is named in honor of Dr. Jerome Karle, an alumnus of the City College of New York who was awarded the Nobel Prize in Chemistry in 1985.  &#039;&#039;&#039;KARLE&#039;&#039;&#039; is a Dell shared memory system with 24 processor cores.  &#039;&#039;&#039;KARLE&#039;&#039;&#039; is used for serial jobs, Matlab, SAS, parallel Mathematica, and certain ArcView jobs.  It is the only system that supports running interactive jobs relying on a graphical user interface.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;PENZIAS&#039;&#039;&#039; is named in honor of Dr. Arno Penzias, a Nobel Laureate in Physics, and a City College alumnus. &#039;&#039;&#039;PENZIAS&#039;&#039;&#039; is a cluster with 1,152 Intel Sandy Bridge cores each with 4 Gbytes of memory.  It is divided into 2 virtual nodes, one with 12 cores and no GPUs and one with 4 cores and 2 GPUs. It is used for applications requiring up to 128 cores. It also supports 136 NVIDIA Kepler K20 accelerators.&lt;br /&gt;
&lt;br /&gt;
==Funding==&lt;br /&gt;
The systems at the Center were funded as follows: &lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;DSMS&#039;&#039;&#039;, NSF Grant ACI-1126113&lt;br /&gt;
:&#039;&#039;&#039;ANDY&#039;&#039;&#039;, 	NSF Grant CNS-0855217 and the New York City Council through the efforts of Borough President James Oddo&lt;br /&gt;
:&#039;&#039;&#039;APPEL&#039;&#039;&#039;, New York State Regional Economic Development Grant through the efforts of State Senator Diane Savino&lt;br /&gt;
:&#039;&#039;&#039;PENZIAS&#039;&#039;&#039;, The Office of the CUNY Chief Information Officer&lt;br /&gt;
:&#039;&#039;&#039;SALK&#039;&#039;&#039;, NSF Grant CNS-0958379 and a New York State Regional Economic Development Grant&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Applications&amp;diff=165</id>
		<title>Applications</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Applications&amp;diff=165"/>
		<updated>2022-11-07T18:56:18Z</updated>

		<summary type="html">&lt;p&gt;James: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div class=&amp;quot;noautonum&amp;quot;&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Application&lt;br /&gt;
!Installed Version&lt;br /&gt;
!Current Version&lt;br /&gt;
!Dependencies&lt;br /&gt;
|-&lt;br /&gt;
|ABINIT&lt;br /&gt;
|8.2.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ASE&lt;br /&gt;
|3.18.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|G-PhoCS&lt;br /&gt;
|1.3&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|GMP&lt;br /&gt;
|6.1.2-GCCcore-6.4.0/ 7.3.0/ 8.2.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|GPAW&lt;br /&gt;
|19.8.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|Gerris&lt;br /&gt;
|20131206&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|HDF5&lt;br /&gt;
|1.8.17/1.10.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|LAME&lt;br /&gt;
|3.100&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|XML-Parser&lt;br /&gt;
|2.44_01&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|abyss&lt;br /&gt;
|1.3.7 / 1.5.7&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|adcirc&lt;br /&gt;
|50_99_07&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|adda&lt;br /&gt;
|1.2.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|anvio&lt;br /&gt;
|2.0.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|armadillo&lt;br /&gt;
|9.2.7 / 9.200.7&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|arpack&lt;br /&gt;
|3.1.5&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|augustus&lt;br /&gt;
|3.2.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|autodock&lt;br /&gt;
|4.2.6&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|autodock_vina&lt;br /&gt;
|1.1.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bamm&lt;br /&gt;
|2.3.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bamova&lt;br /&gt;
|1.02&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bamtools&lt;br /&gt;
|2.30 / 2.5.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|basilisk&lt;br /&gt;
|v2019&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bayescan&lt;br /&gt;
|2.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|beast&lt;br /&gt;
|1.8.4 / 2.4.6&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|beast2&lt;br /&gt;
|2.6.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bedops&lt;br /&gt;
|2.4.40&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bedtools&lt;br /&gt;
|2.30.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bigwig&lt;br /&gt;
|011921&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|biobwa&lt;br /&gt;
|0.7.17&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bioperl&lt;br /&gt;
|1.6.923&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|blast&lt;br /&gt;
|2.3.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bowtie2&lt;br /&gt;
|2.2.6&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bpp&lt;br /&gt;
|4.4.0 / 4.4.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cblas&lt;br /&gt;
|1.20.11&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cmaq&lt;br /&gt;
|5.3.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cmdstan&lt;br /&gt;
|2.21.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cp2k&lt;br /&gt;
|2.5.1 / 3.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cryoSPARC&lt;br /&gt;
|2.11&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|diamond&lt;br /&gt;
|0.7.9&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|doxygen&lt;br /&gt;
|2014&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|dualSP&lt;br /&gt;
|4.2 / 4.3_beta&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|eautils&lt;br /&gt;
|02072017&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|eclipse_ptp&lt;br /&gt;
|8.1.2 / 9.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|eigen&lt;br /&gt;
|3.2.8&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|emacs&lt;br /&gt;
|25.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|exabayes&lt;br /&gt;
|1.5&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|examl&lt;br /&gt;
|3.0.17&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|fdppdiv&lt;br /&gt;
|20140728&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|fds_smv&lt;br /&gt;
|6.1.11&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ferret&lt;br /&gt;
|6.96&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|freetype&lt;br /&gt;
|2.5.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|fsplit&lt;br /&gt;
|092214&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ga&lt;br /&gt;
|5.3&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|gamess-us&lt;br /&gt;
|4.14.14&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|gamma&lt;br /&gt;
|20111212&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|gap&lt;br /&gt;
|4.6.5 / 4.7.5&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|gatk&lt;br /&gt;
|3.6&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|gdc&lt;br /&gt;
|1.0.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|gerris&lt;br /&gt;
|09.30.16_EPYC / 12.06.13_BM / 12.06.13_EPYC / 12.06.13_PNZ&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ghc&lt;br /&gt;
|7.8.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|glog&lt;br /&gt;
|0.3.3&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|gnuplot&lt;br /&gt;
|4.6.3&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|gpaw&lt;br /&gt;
|1.2.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|gperf&lt;br /&gt;
|3.0.4&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|gpu-blast&lt;br /&gt;
|2.2.28&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|gromacs&lt;br /&gt;
|5.1.4 / 5.1.5 / 2020.1_mpi&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|help2man&lt;br /&gt;
|1.47.3&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|hoomd&lt;br /&gt;
|1.3.3&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|hpl&lt;br /&gt;
|2.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|humann&lt;br /&gt;
|0.7.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ima2p&lt;br /&gt;
|071717&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ipyrad&lt;br /&gt;
|0.7.13&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|itasser&lt;br /&gt;
|4.2 / 5.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|kokkos&lt;br /&gt;
|2.9.00&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|lammps&lt;br /&gt;
|03.03.20&lt;br /&gt;
|06.02.22&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|lmdb&lt;br /&gt;
|20160810&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ls-dyna&lt;br /&gt;
|6.0.0 / 7.1.2 / 8.0.0 / 8.1.0 / 9.1.0 / 10.0.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|maker&lt;br /&gt;
|2.31.8&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mathematica&lt;br /&gt;
|10.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|matlab&lt;br /&gt;
|R2022a&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|meme&lt;br /&gt;
|4.11.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mercurial&lt;br /&gt;
|2.8&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|metaphlan2&lt;br /&gt;
|2.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|migrate&lt;br /&gt;
|4.2.14&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|minpath&lt;br /&gt;
|1.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mpest&lt;br /&gt;
|1.4&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mpfr&lt;br /&gt;
|3.1.2 / 3.1.4&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mrbayes&lt;br /&gt;
|3.2.7a&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|msbayes&lt;br /&gt;
|20140305&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mumps&lt;br /&gt;
|5.1.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|muscle&lt;br /&gt;
|3.8.31&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mutect&lt;br /&gt;
|1.1.7&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|namd&lt;br /&gt;
|2.9 / 2.12 / 2.13 / 2.14&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|nciplot&lt;br /&gt;
|4.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ncview&lt;br /&gt;
|2.1.7&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|nek5000&lt;br /&gt;
|19.0_epyc&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|netCDF&lt;br /&gt;
|4.2 / 4.3.2 / 4.4.1 / 4.4.3 / 4.5.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|nmrpipe&lt;br /&gt;
|20170909&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
__NOTOC__&amp;lt;/div&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;This is an index of available applications, sorted by academic relevance as well as alphabetically.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
For information about using modules to run your applications go to [[Using Modules To Run Your Applications]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==Computational Physics and Computational Chemistry== &lt;br /&gt;
Applications in this section use classical mechanics, quantum mechanics and thermodynamics and are applied in simulation studies of fundamental properties of atoms, molecules, and chemical reactions.  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;AMBER (Assisted Model Building with Energy Refinement)&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Amber is the collective name for a suite of programs for classical bio-molecular simulations. &lt;br /&gt;
The name &amp;quot;Amber&amp;quot; also denotes the family of potentials (force fields) used with Amber &lt;br /&gt;
software. Here we discuss only the simulation packages, not the force fields or the free tools&lt;br /&gt;
available via the AmberTools package. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/amber&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;AUTODOCK&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
AutoDock is a suite of automated docking tools.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt; It is designed to predict how small molecules, such as substrates or drug candidates, bind to a receptor of known 3D structure.  AutoDock actually consists of two main programs: &#039;&#039;autodock&#039;&#039; itself performs the docking of the ligand to a set of grids describing the target protein; &#039;&#039;autogrid&#039;&#039; pre-calculates these grids. More information about the software may be found at the autodock web-page [http://autodock.scripps.edu/]. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/autodock&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;CP2K&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CP2K is a program to perform atomistic and molecular simulations of solid state, liquid, molecular, and biological&lt;br /&gt;
systems.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It provides a general framework for different methods such as e.g., density functional theory (DFT) using&lt;br /&gt;
a mixed Gaussian and plane waves approach (GPW) and classical pair and many-body potentials. CP2K provides&lt;br /&gt;
state-of-the-art methods for efficient and accurate atomistic simulations. More information about our installation &lt;br /&gt;
can be found here [[CP2K]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;DL_POLY&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
DL_POLY is a general purpose molecular dynamics simulation package developed at Daresbury Laboratory by W. Smith, T.R. Forester and I.T. Todorov. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Both serial and parallel versions are available. The original package was developed by the Molecular Simulation Group (now part of the Computational Chemistry Group, MSG) at Daresbury Laboratory under the auspices of the Engineering and Physical Sciences Research Council (EPSRC) for the EPSRC&#039;s Collaborative Computational Project for the Computer Simulation of Condensed Phases (CCP5). Later developments were also supported by the Natural Environment Research Council through the eMinerals project. The package is the property of the Central Laboratory of the Research Councils, UK. More information about our installation and use can be found here [[DL_POLY]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GAMESS-US&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GAMESS is a program for ab initio molecular quantum chemistry.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Briefly, GAMESS can compute SCF wavefunctions of RHF, ROHF, UHF, GVB, and MCSCF type. Correlation corrections to these SCF wavefunctions include configuration interaction, second-order perturbation theory, and coupled-cluster approaches, as well as the density functional theory approximation. Excited states can be computed by CI, EOM, or TD-DFT procedures. Nuclear gradients are available for automatic geometry optimization, transition state searches, or reaction path following. Computation of the energy hessian permits prediction of vibrational frequencies, with IR or Raman intensities. Solvent effects may be modeled by the discrete Effective Fragment potentials, or continuum models such as the Polarizable Continuum Model. Numerous relativistic computations are available, including infinite order two component scalar corrections, with various spin-orbit coupling options. The Fragment Molecular Orbital method permits many of these sophisticated treatments to be used on very large systems, by dividing the computation into small fragments. Nuclear wavefunctions can also be computed, in VSCF, or with explicit treatment of nuclear orbitals by the NEO code. More information, including code, can be found here [[GAMESS-US]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Gaussian09&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Gaussian09 is third-party, commercially licensed software from Gaussian, Inc. It is a set of programs for calculating electronic structure.  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Gaussian09 is available for general use only on ANDY. The Gaussian User Guide can be found here at [[http://www.gaussian.com]]. More information about our installation can be found here [[GAUSSIAN09]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GPAW&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It uses real-space uniform grids and multigrid methods, atom-centered basis-functions or&lt;br /&gt;
plane-waves. GPAW calculations are controlled through scripts written in the programming language &lt;br /&gt;
Python. GPAW relies on the Atomic Simulation Environment (ASE), which is a Python package&lt;br /&gt;
that helps to describe atoms. The ASE package also handles molecular dynamics, analysis, &lt;br /&gt;
visualization, geometry optimization and more. More information about our installation can &lt;br /&gt;
be found here [[GPAW]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GROMACS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GROMACS (Groningen Machine for Chemical Simulations)&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GROMACS is a full-featured suite of free software, licensed under the GNU&lt;br /&gt;
General Public License to perform molecular dynamics simulations -- in other words, to simulate the behavior of molecular&lt;br /&gt;
systems with hundreds to millions of particles using Newton&#039;s equations of motion.  It is primarily used for research on&lt;br /&gt;
proteins, lipids, and polymers, but can be applied to a wide variety of chemical and biological research questions.&lt;br /&gt;
&lt;br /&gt;
Details and submission scripts for production runs can be found at:&lt;br /&gt;
http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/gromacs&lt;br /&gt;
Please note that preparing a molecular system for simulation with the GROMACS tools cannot be done on the login node. Instead, users must either use their own workstations or the interactive or development queues.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;HONDO PLUS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Hondo Plus is a versatile electronic structure code that combines work from&lt;br /&gt;
the original Hondo application developed by Harry King in the lab of Michel Dupuis&lt;br /&gt;
and John Rys, and that of numerous subsequent contributors. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It is currently distributed from the research lab of Dr. Donald Truhlar at the University &lt;br /&gt;
of Minnesota.  Part of the advantage of Hondo Plus is the availability of source&lt;br /&gt;
implementations of a wide variety of model chemistries developed over its life time&lt;br /&gt;
that researchers can adapt to their particular needs.  The license to use the code requires&lt;br /&gt;
a literature citation which is documented in the Hondo Plus 5.1 manual found&lt;br /&gt;
at:&lt;br /&gt;
&lt;br /&gt;
http://comp.chem.umn.edu/hondoplus/HONDOPLUS_Manual_v5.1.2007.2.17.pdf &lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[HONDO PLUS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;HOOMD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
HOOMD performs general-purpose particle dynamics simulations, taking advantage of NVIDIA GPUs to attain a level of performance&lt;br /&gt;
equivalent to many processor cores on a fast cluster.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Unlike some other applications in the particle and molecular dynamics space, HOOMD developers have worked to implement &lt;br /&gt;
all of the code&#039;s computationally intensive kernels on the GPU, although currently only single node, single-GPU or &lt;br /&gt;
OpenMP-GPU runs are possible. There is no MPI-GPU or distributed parallel GPU version available at this time.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;LAMMPS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
LAMMPS is a classical molecular dynamics code that models an ensemble of particles in a liquid, solid, or gaseous state.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It can model atomic, polymeric, biological, metallic, granular, and coarse-grained systems using a variety of force fields and boundary conditions.  &lt;br /&gt;
LAMMPS runs efficiently on single-processor desktop or laptop machines, but is also designed for parallel computers, including clusters with and without GPUs. &lt;br /&gt;
It will run on any parallel machine that compiles C++ and supports the MPI message-passing library. This includes distributed- or shared-memory parallel &lt;br /&gt;
machines and Beowulf-style clusters. LAMMPS can model systems with only a few particles up to millions or billions. LAMMPS is a freely-available open-source &lt;br /&gt;
code, distributed under the terms of the GNU Public License, which means you can use or modify the code however you wish.  LAMMPS is designed to be easy to &lt;br /&gt;
modify or extend with new capabilities, such as new force fields, atom types, boundary conditions, or diagnostics. A complete description of LAMMPS can be found &lt;br /&gt;
in its on-line manual here [http://lammps.sandia.gov/doc/Manual.html] or from the full PDF manual here [http://lammps.sandia.gov/doc/Manual.pdf]. Information&lt;br /&gt;
about our installation can be found here [[LAMMPS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
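The core loop of a classical MD engine such as LAMMPS or GROMACS advances positions and velocities with a symplectic integrator. The sketch below is a minimal, illustrative velocity-Verlet step for a single 1-D harmonic oscillator in reduced units; it is not LAMMPS code, only the underlying time-stepping idea.&lt;br /&gt;

```python
# Minimal velocity-Verlet integrator for a 1-D harmonic oscillator
# (reduced units). Illustrates the time-stepping scheme MD engines
# such as LAMMPS use at vastly larger scale; NOT LAMMPS itself.

def force(x, k=1.0):
    # Hooke's-law force, F = -k x
    return -k * x

def verlet_step(x, v, dt=0.01, m=1.0):
    a = force(x) / m
    x_new = x + v * dt + 0.5 * a * dt * dt
    v_new = v + 0.5 * (a + force(x_new) / m) * dt
    return x_new, v_new

x, v = 1.0, 0.0          # start at unit displacement, at rest
for _ in range(1000):    # integrate to t = 10 in steps of 0.01
    x, v = verlet_step(x, v)

# total energy should stay very close to its initial value of 0.5
energy = 0.5 * v * v + 0.5 * x * x
print(round(energy, 3))
```

Energy conservation over long trajectories is the reason production MD codes use symplectic schemes of this kind rather than generic ODE solvers.&lt;br /&gt;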
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;NAMD&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NAMD is a parallel molecular dynamics code designed for high-performance simulation&lt;br /&gt;
of large biomolecular systems. [http://www.ks.uiuc.edu/Research/namd].&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The main server for molecular dynamics calculations is PENZIAS, which supports both GPU and non-GPU versions of NAMD.&lt;br /&gt;
However, the MPI-only (no GPU support) parallel versions of NAMD are also installed on SALK and ANDY. &lt;br /&gt;
More information about our installation can be found here [[NAMD]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;NWChem&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NWChem is an ab initio computational chemistry software package that also includes molecular dynamics (MM, MD) and coupled quantum-mechanical/molecular-dynamics (QM-MD) functionality.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
NWChem has been developed by the Molecular Sciences Software group at the Department of Energy&#039;s EMSL. The software is available on PENZIAS and ANDY.&lt;br /&gt;
More information about our installation can be found here [[NWChem]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Octopus&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Octopus is a pseudopotential real-space package aimed at the simulation of the electron-ion dynamics of one-, two-, and three-dimensional finite systems subject to time-dependent electromagnetic fields.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The program is based on time-dependent density-functional theory (TDDFT) in the Kohn-Sham scheme. All quantities are expanded in a regular mesh in real space, and the simulations are performed in real time. The program has been successfully used to calculate linear and non-linear absorption spectra, harmonic spectra, laser induced fragmentation, etc. of a variety of systems.&lt;br /&gt;
More information about our installation can be found here [[OCTOPUS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;OpenMM&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenMM is both a library and a stand-alone application which provides tools for modern molecular modeling simulation. As a library it can be hooked into any code, allowing that code to do molecular modeling with minimal extra coding.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Moreover, OpenMM has a strong emphasis on hardware acceleration via GPU, thus providing not just a consistent API but much greater performance than one could get from just about any other code available. OpenMM was developed as part of the Physics-Based Simulation project led by Prof. Pande.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;ORCA&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The ORCA program is an electronic structure package capable of carrying out geometry optimizations and of predicting a large number of spectroscopic parameters at different levels of theory.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Besides Hartree-Fock theory, density functional theory (DFT), and semiempirical methods, high-level ab initio quantum chemical methods based on configuration interaction and coupled cluster approaches are included in ORCA to an increasing degree.&lt;br /&gt;
More information about our installation can be found here [[ORCA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;VMD&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
VMD is a molecular visualization program for displaying, animating, and analyzing large biomolecular systems using 3-D graphics and built-in scripting.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It was developed by The Theoretical and Computational Biophysics Group at the University of Illinois. It is documented on the [http://www.ks.uiuc.edu/Research/vmd/ TCB&#039;s homepage].&lt;br /&gt;
&lt;br /&gt;
VMD is installed on Karle. To use it from the command line, log in to Karle as usual and start VMD by typing &amp;quot;vmd&amp;quot; followed by return, or alternatively use the full path: &lt;br /&gt;
&amp;quot;/share/apps/vmd/default/bin/vmd&amp;quot;&lt;br /&gt;
&lt;br /&gt;
In order to use VMD in GUI mode, log in to Karle with the -X option (see [http://wiki.csi.cuny.edu/cunyhpc/index.php/Main_Page#X11_Forwarding_or_Tunneling this article] &lt;br /&gt;
for details) and start VMD as described above.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Computational Biology== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;ANVIO&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Anvio is an analysis and visualization platform for ‘omics data. It allows various types of workflows to be &lt;br /&gt;
established. [[ANVIO]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BAMOVA&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Bamova is a package used for genetic analysis of a wide range of organisms on the basis of &lt;br /&gt;
next-generation sequence data. The software implements Bayesian Analysis of Molecular Variance and &lt;br /&gt;
different likelihood models for three different types of molecular data &lt;br /&gt;
(including two models for high throughput sequence data). For more detail on BAMOVA please visit the BAMOVA web site [http://www.uwyo.edu/buerkle/software/bamova] and manual &lt;br /&gt;
here [http://www.uwyo.edu/buerkle/software/bamova/bamova_manual_1.0.pdf]. Further information can also be found here [[BAMOVA]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BAYESCAN&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BAYESCAN is a population genomics software package. It identifies outlier loci and is applicable &lt;br /&gt;
to both dominant and codominant data. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;BayeScan aims to identify candidate loci under natural selection from &lt;br /&gt;
genetic data, using differences in allele frequencies between populations.  BayeScan is &lt;br /&gt;
based on the multinomial-Dirichlet model.  One of the scenarios covered consists of an&lt;br /&gt;
island model in which subpopulation allele frequencies are correlated through a common &lt;br /&gt;
migrant gene pool from which they differ in varying degrees.  The difference in allele frequency &lt;br /&gt;
between this common gene pool and each subpopulation is measured by a subpopulation-specific&lt;br /&gt;
FST coefficient. Therefore, this formulation can consider realistic ecological scenarios &lt;br /&gt;
where the effective size and the immigration rate may differ among subpopulations.&lt;br /&gt;
&lt;br /&gt;
More detailed information on Bayescan can be found at the web site here [http://cmpg.unibe.ch/software/bayescan/index.html]&lt;br /&gt;
and in the manual here [http://cmpg.unibe.ch/software/bayescan/files/BayeScan2.1_manual.pdf]. More information about our &lt;br /&gt;
installation can be found here [[BAYESCAN]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
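The FST coefficient mentioned above quantifies how strongly allele frequencies differ between subpopulations and the common gene pool. As a toy illustration for a single biallelic locus (this is Wright&#039;s classic fixation index, not BayeScan&#039;s Bayesian estimator):&lt;br /&gt;

```python
# Toy calculation of Wright's fixation index F_ST for one biallelic
# locus in two equally sized subpopulations. Illustrates the quantity
# BayeScan models; this is NOT BayeScan's Bayesian estimator.

def fst(p1, p2):
    p_bar = 0.5 * (p1 + p2)                    # pooled allele frequency
    h_t = 2.0 * p_bar * (1.0 - p_bar)          # expected heterozygosity, total pool
    h_s = p1 * (1.0 - p1) + p2 * (1.0 - p2)    # mean within-subpopulation heterozygosity
    return (h_t - h_s) / h_t

print(round(fst(0.5, 0.5), 3))   # identical frequencies  -> 0.0
print(round(fst(0.2, 0.8), 3))   # strongly diverged      -> 0.36
```

Loci whose FST sits far above the genome-wide background are the outliers BayeScan flags as candidates for selection.&lt;br /&gt;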
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BEST&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BEST is an application designed to estimate gene trees and the species tree from multilocus sequences.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The program uses information from multiple gene trees and performs a Bayesian analysis to estimate the &lt;br /&gt;
topology of the species tree, divergence times and population sizes.  &lt;br /&gt;
&lt;br /&gt;
It provides a new approach for estimating the mutation-rate-&lt;br /&gt;
based phylogenetic relationships among species.  Its method accounts for deep coalescence,&lt;br /&gt;
but not for other complicating issues such as horizontal transfer or gene duplication. The&lt;br /&gt;
program works in conjunction with the popular Bayesian phylogenetics package MrBayes&lt;br /&gt;
(Ronquist and Huelsenbeck, Bioinformatics, 2003).  BEST&#039;s parameters are defined using&lt;br /&gt;
the &#039;prset&#039; command from MrBayes.  Details on BEST&#039;s capabilities and options are available&lt;br /&gt;
at the BEST web site here [http://www.stat.osu.edu/~dkp/BEST/introduction]. More information&lt;br /&gt;
about our installation is available here [[BEST]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BEAST&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BEAST is a powerful and flexible evolutionary analysis package for molecular sequence variation. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The package implements a family of Markov chain Monte Carlo (MCMC) algorithms for Bayesian phylogenetic inference, divergence time dating, coalescent analysis, phylogeography, and related molecular evolutionary analyses. It is a cross-platform Java program for Bayesian MCMC analysis of molecular sequences. It is entirely oriented towards rooted, time-measured phylogenies inferred using strict or relaxed molecular clock models. It can be used as a method of reconstructing phylogenies, but is also a framework for testing evolutionary hypotheses without conditioning on a single tree topology.  BEAST uses MCMC to average over tree space, so that each tree is weighted in proportion to its posterior probability. The distribution includes a simple-to-use user-interface program called &#039;BEAUti&#039; for setting up standard analyses and a suite of programs for analysing the results. For more detail on BEAST (and BEAUti) please visit the BEAST web site [http://beast.bio.ed.ac.uk/Main_Page]. More information about our installation can be found here [http://wiki.csi.cuny.edu/cunyhpc/index.php/Template:BEAST BEAST].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BOWTIE2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BOWTIE2 is an ultrafast and memory-efficient tool for aligning sequencing reads to long reference sequences. It is particularly good at aligning reads of about 50 up to 100s or 1,000s of characters, and particularly good at aligning to relatively long (e.g. mammalian) genomes.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
BOWTIE2 indexes the genome with an FM Index to keep its memory&lt;br /&gt;
footprint small: for the human genome, its memory footprint is typically around 3.2 GB. BOWTIE2 supports gapped,&lt;br /&gt;
local, and paired-end alignment modes. BOWTIE2 is part of a sequence alignment and analysis tool chain developed&lt;br /&gt;
at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics&lt;br /&gt;
and Computational Biology.  The other tools in this collection, CUFFLINKS, SAMTOOLS, and TOPHAT, are also installed at&lt;br /&gt;
the CUNY HPC Center. Additional information can be found at the BOWTIE2 home page here [http://bowtie-bio.sourceforge.net/bowtie2/index.shtml].&lt;br /&gt;
Information about our installation can be found here [[BOWTIE2]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BPP2&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BPP2 uses a Bayesian modeling approach to generate the posterior probabilities of species assignments taking into account uncertainties due to unknown gene trees and the ancestral coalescent process. For tractability, it relies on a user-specified guide tree to avoid integrating over all possible species delimitations.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Additional information can be found at the download site here [http://abacus.gene.ucl.ac.uk/software.html]. More information about our installation can be found here [[BPP2]].&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BROWNIE&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
BROWNIE is a program for analyzing rates of continuous character evolution and looking for substantial rate differences in different parts of a tree using likelihood&lt;br /&gt;
ratio tests and Akaike Information Criterion (AIC) statistics. It now also implements many other methods for examining trait evolution and methods for doing species&lt;br /&gt;
delimitation. More information about our installation can be found here [[BROWNIE]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;CUFFLINKS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CUFFLINKS assembles transcripts, estimates their abundances, and tests for differential expression and regulation in&lt;br /&gt;
RNA-Seq samples.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It accepts aligned RNA-Seq reads and assembles the alignments into a parsimonious set of transcripts.&lt;br /&gt;
CUFFLINKS then estimates the relative abundances of these transcripts based on how many reads support each one, taking&lt;br /&gt;
into account biases in library preparation protocols.  CUFFLINKS is part of a sequence alignment and analysis tool chain developed&lt;br /&gt;
at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics&lt;br /&gt;
and Computational Biology.  The other tools in this collection, BOWTIE, SAMTOOLS, and TOPHAT, are also installed at&lt;br /&gt;
the CUNY HPC Center.  Additional information can be found at the CUFFLINKS home page here [http://abacus.gene.ucl.ac.uk/software.html].&lt;br /&gt;
More information about our installation can be found here [[CUFFLINKS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GARLI&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GARLI is a program that performs phylogenetic inference using the maximum-likelihood criterion.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Several sequence types are supported, including nucleotide, amino acid and codon. Version 2.0 adds support for&lt;br /&gt;
partitioned models and morphology-like data types. It is usable on all operating systems, and is written and&lt;br /&gt;
maintained by Derrick Zwickl at the University of Texas at Austin.  Additional information can be found&lt;br /&gt;
on the GARLI Wiki here [https://www.nescent.org/wg_garli/Main_Page]. More information about our installation &lt;br /&gt;
can be found here [[GARLI]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MPFR&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The MPFR library is a C library for multiple-precision floating-point computations with correct rounding. MPFR has been continuously supported by &lt;br /&gt;
INRIA, and the current main authors come from the Caramel and AriC project-teams at Loria (Nancy, France) and LIP (Lyon, France), respectively; see &lt;br /&gt;
more on the credit page.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
MPFR is based on the GMP multiple-precision library. The main goal of MPFR is to provide a library for multiple-precision &lt;br /&gt;
floating-point computation which is both efficient and has well-defined semantics. It copies the good ideas from the ANSI/IEEE-754 standard for &lt;br /&gt;
double-precision floating-point arithmetic (53-bit significand). The library is installed on PENZIAS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
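MPFR itself is a C library, but the idea it embodies, arithmetic at a user-chosen precision with a well-defined rounding mode for every operation, can be illustrated with Python's standard `decimal` module. This is a loose stdlib analogy, not the MPFR API:

```python
from decimal import Decimal, getcontext, ROUND_HALF_EVEN

# Configure a 50-significant-digit context with IEEE-754's default
# round-to-nearest-even rule (analogous to setting an MPFR precision).
getcontext().prec = 50
getcontext().rounding = ROUND_HALF_EVEN

third = Decimal(1) / Decimal(3)   # correctly rounded to 50 digits
root2 = Decimal(2).sqrt()         # sqrt(2) correctly rounded to 50 digits
print(third)
print(root2)
```

Every operation in the context rounds its exact mathematical result to the configured precision, which is the same contract MPFR's "correct rounding" guarantee provides.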
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MRBAYES&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MrBayes is a program for the Bayesian estimation of phylogeny.  Bayesian inference of&lt;br /&gt;
phylogeny is based upon a quantity called the posterior probability distribution of trees,&lt;br /&gt;
which is the probability of a tree conditioned on certain observations.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The conditioning is&lt;br /&gt;
accomplished using Bayes&#039;s theorem. The posterior probability distribution of trees is&lt;br /&gt;
impossible to calculate analytically; instead, MrBayes uses a simulation technique called&lt;br /&gt;
Markov chain Monte Carlo (or MCMC) to approximate the posterior probabilities of trees.&lt;br /&gt;
More information about our installation can be found here [[MRBAYES]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
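The MCMC approximation described above can be sketched on a toy problem. This is a minimal, hypothetical illustration of the Metropolis algorithm on a single scalar parameter, not MrBayes itself, which samples over tree space:

```python
import math
import random

# Toy problem: posterior of a coin's bias p after observing 7 heads in
# 10 tosses, under a uniform prior.  The exact posterior is Beta(8, 4).

def log_posterior(p, heads=7, tosses=10):
    """Log of the unnormalized posterior: binomial likelihood x flat prior."""
    if not 0.0 < p < 1.0:
        return float("-inf")
    return heads * math.log(p) + (tosses - heads) * math.log(1.0 - p)

def metropolis(n_steps=50000, step=0.1, seed=1):
    random.seed(seed)
    p, samples = 0.5, []
    for _ in range(n_steps):
        proposal = p + random.uniform(-step, step)  # symmetric proposal
        log_ratio = log_posterior(proposal) - log_posterior(p)
        # Accept with probability min(1, posterior ratio).
        if log_ratio >= 0 or random.random() < math.exp(log_ratio):
            p = proposal
        samples.append(p)
    return samples

samples = metropolis()
posterior_mean = sum(samples[5000:]) / len(samples[5000:])  # drop burn-in
# Beta(8, 4) has mean 8/12 = 0.667; the chain's average lands near it.
print(round(posterior_mean, 2))
```

The point of the sketch is the acceptance rule: only posterior *ratios* are needed, so the intractable normalizing constant never has to be computed, which is exactly why MCMC makes Bayesian tree inference feasible.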
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;msABC&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
msABC is a program for simulating various neutral evolutionary demographic scenarios&lt;br /&gt;
based on the software ms (Hudson 2002). msABC extends ms, calculating a multitude of&lt;br /&gt;
summary statistics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Therefore, msABC is suitable for performing the sampling step of an&lt;br /&gt;
Approximate Bayesian Computation analysis (ABC), under various neutral demographic&lt;br /&gt;
models. The main advantages of msABC are (i) use of various prior distributions, such as&lt;br /&gt;
uniform, Gaussian, log-normal, and gamma, (ii) implementation of a multitude of summary statistics&lt;br /&gt;
for one or more populations, (iii) an efficient implementation, which allows the analysis of&lt;br /&gt;
hundreds of loci and chromosomes even on a single computer, and (iv) extended flexibility, such&lt;br /&gt;
as simulation of loci of variable size and simulation of missing data.&lt;br /&gt;
More information about our installation can be found here [[msABC]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
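The ABC sampling step described above can be sketched as rejection sampling on a toy model. Everything here (the exponential model, the prior bounds, the tolerance) is an illustrative assumption, not msABC's coalescent machinery:

```python
import random

# ABC rejection sampling: draw a parameter from its prior, simulate a
# data set, and keep the draw when a summary statistic of the simulated
# data is close enough to the observed one.

random.seed(2)

N = 100                  # observations per simulated data set
observed_mean = 3.0      # the observed summary statistic
tolerance = 0.2

def simulated_mean(theta):
    """Simulate N exponential draws with mean theta; return their mean."""
    return sum(random.expovariate(1.0 / theta) for _ in range(N)) / N

accepted = []
while len(accepted) < 200:
    theta = random.uniform(0.1, 10.0)          # uniform prior
    if abs(simulated_mean(theta) - observed_mean) < tolerance:
        accepted.append(theta)

# The accepted draws approximate the posterior; their average should sit
# near the observed summary value of 3.0.
posterior_mean = sum(accepted) / len(accepted)
print(round(posterior_mean, 1))
```

In a real msABC workflow the simulator is the coalescent program ms and the summary statistics are the population-genetic quantities msABC computes; the accept/reject logic is the same.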
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MSMS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MSMS is a tool to generate sequence samples under both neutral models and single locus selection models.&lt;br /&gt;
MSMS permits the full range of demographic models provided by its relative MS (Hudson, 2002).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
In particular, it allows for multiple demes with arbitrary migration patterns, population growth and decay in each deme, and&lt;br /&gt;
for population splits and mergers. Selection (including dominance) can depend on the deme and also change&lt;br /&gt;
with time.&lt;br /&gt;
More information about our installation can be found here [[MSMS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;POPABC&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PopABC is a computer package to estimate historical demographic parameters of closely related species/populations (e.g. population size, migration rate, mutation rate, recombination rate, splitting events) within an isolation-with-migration model.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The software performs coalescent simulation in the framework of approximate Bayesian computation (ABC, Beaumont et al, 2002). PopABC can also be used to perform Bayesian model choice to discriminate between different demographic scenarios. The program can be used either for research or for education and teaching purposes. Further details and a manual can be found at the POPABC website here [http://code.google.com/p/popabc]&lt;br /&gt;
More information about our installation can be found here [[POPABC]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;PHOENICS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PHOENICS is an integrated Computational Fluid Dynamics (CFD) package for the preparation, simulation, and visualization of&lt;br /&gt;
processes involving fluid flow, heat or mass transfer, chemical reaction, and/or combustion in engineering equipment, building&lt;br /&gt;
design, and the environment.  More detail is available at the CHAM website, here http://www.cham.co.uk. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Although we expect most users to pre- and post-process their jobs on office-local clients, the CUNY HPC Center has installed&lt;br /&gt;
the Unix version of the &#039;&#039;entire&#039;&#039; PHOENICS package on ANDY.   PHOENICS is installed in /share/apps/phoenics/default where all&lt;br /&gt;
the standard PHOENICS directories are located (d_allpro, d_earth, d_enviro, d_photo, d_priv1, d_satell, etc.).  Of particular interest&lt;br /&gt;
on ANDY is the MPI parallel version of the &#039;earth&#039; executable &#039;parexe&#039; which makes full use of the parallel processing power of the &lt;br /&gt;
ANDY cluster for larger individual jobs.  While the parallel scaling properties of PHOENICS jobs will vary depending on the job size,&lt;br /&gt;
processor type, and the cluster interconnect, larger workloads will generally scale and run efficiently on 8 to 32 processors,&lt;br /&gt;
while smaller problems will scale efficiently only up to about 4 processors.  More detail on parallel PHOENICS is available at&lt;br /&gt;
http://www.cham.co.uk/products/parallel.php.   Aside from the tightly coupled MPI parallelism of &#039;parexe&#039;, users can run multiple&lt;br /&gt;
instances of the non-parallel modules on ANDY (including the serial &#039;earexe&#039; module) when a parametric approach can be used&lt;br /&gt;
to solve their problems.&lt;br /&gt;
More information about our installation can be found here [[PHOENICS]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;PHRAP-PHRED&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PHRAP and PHRED are part of the DNA sequence analysis tool set that also includes the programs&lt;br /&gt;
CROSSMATCH and SWAT.  These tools are described in detail here [http://www.phrap.org/phredphrapconsed.html],&lt;br /&gt;
but a brief description of both, extracted from their manuals, follows.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
PHRED and PHRAP (along with CONSED) can be used for both small sequence assemblies and larger shotgun analyses. This makes the&lt;br /&gt;
suite a perhaps under-utilized option for smaller, non-genomics groups.  Some variables may need to be adjusted,&lt;br /&gt;
particularly in CONSED, but researchers who have multiple sequences from a small locus can use the &lt;br /&gt;
suite, starting from their chromatogram files.  &lt;br /&gt;
More information about our installation can be found here [[PHRAP-PHRED]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;PyRAD&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Reduced-representation genomic sequence data (e.g., RADseq, GBS, ddRAD) are commonly used to study population-level research questions and consequently most software packages for assembling or analyzing such data are designed for sequences with little variation across samples.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Phylogenetic analyses typically include species with deeper divergence times (more variable loci across samples) and thus a different approach to clustering and identifying orthologs will perform better. pyRAD is intended for use with any type of restriction-site associated DNA. It currently supports RAD, ddRAD, PE-ddRAD, GBS, PE-GBS, EzRAD, PE-EzRAD, 2B-RAD, nextRAD, and can be extended to other types.&lt;br /&gt;
More information about our installation can be found here [[PyRAD]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;RAXML&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Randomized Axelerated Maximum Likelihood (RAxML) is a program for sequential and parallel&lt;br /&gt;
maximum-likelihood-based inference of large phylogenetic trees.  It is a descendant of fastDNAml,&lt;br /&gt;
which in turn was derived from Joe Felsenstein’s DNAml, which is part of the PHYLIP package.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
RAxML is installed at the CUNY HPC Center on ANDY.  Multiple versions are available, in both serial and MPI-parallel builds.  The MPI-parallel version should be run on four or more cores; it is also installed on PENZIAS. &lt;br /&gt;
More information about our installation can be found here [[RAXML]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Structurama&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Structurama is a program for inferring population structure from genetic data. The program assumes that the sampled loci&lt;br /&gt;
are in linkage equilibrium and that the allele frequencies for each population are drawn from a Dirichlet probability distribution. Two different models for population structure are implemented.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
First, Structurama offers the method of Pritchard et al. (2000), in which the number of populations is considered fixed. The program also allows the number of populations to be a random variable following a Dirichlet process prior (Pella and Masuda, 2006; Huelsenbeck and Andolfatto, 2007).&lt;br /&gt;
More information about our installation can be found here [[STRUCTURAMA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
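The Dirichlet process prior on the number of populations can be sketched through its Chinese restaurant process construction. This is an illustrative sketch of the prior alone; Structurama combines it with a likelihood for the genetic data:

```python
import random

# Chinese restaurant process: individuals join an existing cluster in
# proportion to its size, or open a new one with probability
# alpha / (i + alpha), so the number of clusters is itself random.

def crp_partition(n, alpha, rng):
    """Seat n individuals under a CRP; return the list of cluster sizes."""
    counts = []
    for i in range(n):
        r = rng.uniform(0.0, i + alpha)
        if r < alpha or not counts:
            counts.append(1)            # open a new cluster
        else:
            r -= alpha
            for k, size in enumerate(counts):
                if r < size:
                    counts[k] += 1      # join cluster k
                    break
                r -= size
            else:
                counts[-1] += 1         # guard against float edge cases
    return counts

rng = random.Random(0)
sizes = [len(crp_partition(100, 1.0, rng)) for _ in range(500)]
average_clusters = sum(sizes) / len(sizes)
# For n = 100 and alpha = 1 the expected cluster count is the harmonic
# number H_100, about 5.2.
print(round(average_clusters, 1))
```

The concentration parameter alpha controls how readily new "populations" appear, which is how the prior lets the data, rather than the user, determine the number of populations.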
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Structure&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The program Structure is a free software package for using multi-locus genotype data to investigate&lt;br /&gt;
population structure.  Its uses include inferring the presence of distinct populations, assigning individuals&lt;br /&gt;
to populations, studying hybrid zones, identifying migrants and admixed individuals, and estimating&lt;br /&gt;
population allele frequencies in situations where many individuals are migrants or admixed.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;It can be applied to most of the commonly used genetic markers, including SNPs, microsatellites, RFLPs, and AFLPs. More detailed information about Structure can be found at the web site here [http://pritch.bsd.uchicago.edu/structure.html]. Structure is installed on ANDY at the CUNY HPC Center.  Structure is a serial program. &lt;br /&gt;
More information about our installation can be found here [[STRUCTURE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;TOPHAT&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
TOPHAT is a fast splice junction mapper for RNA-Seq reads. It aligns RNA-Seq reads to mammalian-sized&lt;br /&gt;
genomes using the ultra high-throughput short read aligner Bowtie, and then analyzes the mapping results&lt;br /&gt;
to identify splice junctions between exons.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
TOPHAT is part of a sequence alignment and analysis tool chain developed at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics and Computational Biology.&lt;br /&gt;
More information about our installation can be found here [[TOPHAT]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Trinity&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Trinity, developed at the Broad Institute and the Hebrew University of Jerusalem, represents a novel method for the efficient and robust de novo reconstruction of transcriptomes from RNA-seq data.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Trinity combines three independent software modules: Inchworm, Chrysalis, and Butterfly, applied sequentially to process large volumes of RNA-seq reads. Trinity partitions the sequence data into many individual de Bruijn graphs, each representing the transcriptional complexity at a given gene or locus, and then processes each graph independently to extract full-length splicing isoforms and to tease apart transcripts derived from paralogous genes.&lt;br /&gt;
More information about our installation can be found here [[TRINITY]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;VELVET&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Velvet is a set of algorithms for &#039;&#039;de novo&#039;&#039; short read assembly using de Bruijn graphs. It was developed at the &lt;br /&gt;
European Bioinformatics Institute, Cambridge, UK.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
More information about our installation can be found here [[VELVET]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Computational Genomics, Proteomics, Microbiomics, Genetics ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;AUGUSTUS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
AUGUSTUS is a program that predicts genes in eukaryotic genomic sequences. Augustus is gene-finding software based on Hidden Markov Models (HMMs), &lt;br /&gt;
described in papers by Stanke and Waack (2003), Stanke et al. (2006), Stanke et al. (2006b), and Stanke et al. (2008).  The local version of the program is installed on &lt;br /&gt;
Penzias. More information can be found here: [[AUGUSTUS]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;CONSED&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CONSED is a DNA sequence analysis finishing tool that provides sequence viewing, editing, alignment, and&lt;br /&gt;
assembly capabilities from an X Windows graphical user interface (GUI).  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It makes extensive use of underlying non-graphical&lt;br /&gt;
sequence analysis tools, including PHRED, PHRAP, and CROSSMATCH, that may also be used separately&lt;br /&gt;
and are described elsewhere in this document.  It also includes a viewer called BAMVIEW.  The CONSED tool chain is&lt;br /&gt;
developed and maintained at the University of Washington and is described&lt;br /&gt;
more completely here [http://bozeman.mbt.washington.edu/consed/consed.html].&lt;br /&gt;
CONSED is provided at the CUNY HPC Center under an academic license that allows use, but not the copying or outbound&lt;br /&gt;
transfer, of any of the executables or files distributed under it.  The license is not &lt;br /&gt;
transferable in any way, and users wishing to run the application at their own site must acquire a license directly&lt;br /&gt;
from the authors.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center supports CONSED version 23.0 for interactive use on KARLE.  CONSED 23.0 and the tool&lt;br /&gt;
chain described above are also installed on ANDY to allow for the batch use of the underlying support tools mentioned above&lt;br /&gt;
and described in detail below.  In general, running GUI-based applications on ANDY&#039;s login node is discouraged.  There&lt;br /&gt;
should be little need to do this: KARLE is on the periphery of the CUNY HPC network, making login there direct, and&lt;br /&gt;
KARLE shares its HOME directory file system with ANDY, making files created on either system immediately available on&lt;br /&gt;
the other.&lt;br /&gt;
&lt;br /&gt;
Rather than rewrite portions of the CONSED manual here, users are directed to the manual&#039;s &amp;quot;Quick Tour&amp;quot; section&lt;br /&gt;
here [http://bozeman.mbt.washington.edu/consed/distributions/README.23.0.txt] and asked to walk through some&lt;br /&gt;
of the exercises after logging into KARLE.  If problems or questions come up, please post them to &amp;quot;hpchelp@csi.cuny.edu&amp;quot;.&lt;br /&gt;
The CONSED 23.0 distribution is installed on KARLE in the following directory:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/share/apps/consed/default&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
All the files in the distribution can be found there.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;ExaML&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ExaML (Exascale Maximum Likelihood) is a code for phylogenetic inference using MPI. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The code is installed only on Penzias and implements the popular RAxML search algorithm for maximum likelihood based inference of phylogenetic trees. &lt;br /&gt;
&lt;br /&gt;
It uses a radically new MPI parallelization approach that yields improved parallel efficiency, in particular on partitioned multi-gene or whole-genome datasets.&lt;br /&gt;
&lt;br /&gt;
When using ExaML please cite the following paper:&lt;br /&gt;
&lt;br /&gt;
Alexey M. Kozlov, Andre J. Aberer, Alexandros Stamatakis: &amp;quot;ExaML Version 3: A Tool for Phylogenomic Analyses on Supercomputers.&amp;quot; Bioinformatics (2015) 31 (15): 2577-2579.&lt;br /&gt;
&lt;br /&gt;
It is up to 4 times faster than RAxML-Light [1].&lt;br /&gt;
&lt;br /&gt;
Like RAxML-Light, ExaML also implements checkpointing, SSE3 and AVX vectorization, and memory-saving techniques.&lt;br /&gt;
&lt;br /&gt;
[1] A. Stamatakis, A.J. Aberer, C. Goll, S.A. Smith, S.A. Berger, F. Izquierdo-Carrasco: &amp;quot;RAxML-Light: A Tool for computing TeraByte Phylogenies&amp;quot;, Bioinformatics 2012; doi: 10.1093/bioinformatics/bts309.&lt;br /&gt;
&lt;br /&gt;
The run script for a parallel ExaML job is analogous to the one used for running RAxML on PENZIAS and ANDY.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;ExaBayes&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ExaBayes is a software package for Bayesian tree inference. It is particularly suitable for large-scale analyses on computer clusters and is installed on the PENZIAS server at the CUNY HPC Center. &lt;br /&gt;
The installed package is the MPI-parallel version. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Availability:&#039;&#039;&#039; PENZIAS&lt;br /&gt;
&#039;&#039;&#039;Module file:&#039;&#039;&#039; exabayes&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Citation&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
Fredrik Ronquist, Maxim Teslenko, Paul van der Mark, Daniel L Ayres, Aaron Darling, Sebastian Höhna, Bret Larget, Liang Liu, Marc a Suchard, and John P Huelsenbeck. MrBayes 3.2: efficient Bayesian phylogenetic inference and model choice across a large model space. Systematic biology, 61(3):539--42, May 2012.&lt;br /&gt;
&lt;br /&gt;
Alexei J Drummond, Marc a Suchard, Dong Xie, and Andrew Rambaut. Bayesian phylogenetics with BEAUti and the BEAST 1.7. Molecular biology and evolution, 29(8):1969--73, August 2012. &lt;br /&gt;
&lt;br /&gt;
Clemens Lakner, Paul van der Mark, John P Huelsenbeck, Bret Larget, and Fredrik Ronquist. Efficiency of Markov chain Monte Carlo tree proposals in Bayesian phylogenetics. Systematic biology, 57(1):86--103, February 2008. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Use:&#039;&#039;&#039; An example SLURM script to run ExaBayes on PENZIAS is given below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name=&amp;lt;name_of_job&amp;gt;&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=2&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to the working directory in SLURM&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
mpirun -np 2 exabayes &amp;lt;input_file&amp;gt; &amp;gt; output_file&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
More information about the application, along with sample workflows, is available on the ExaBayes website:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://sco.h-its.org/exelixis/web/software/exabayes/manual/index.html#sec-11&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GENOMEPOP2&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GenomePop2 is a newer, specialized version of the older program GenomePop. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GenomePop2 is designed to manage SNPs under more flexible and useful settings that are controlled by the user.  &lt;br /&gt;
If you need models with more than two alleles, you should use the older GenomePop program.  &lt;br /&gt;
&lt;br /&gt;
GenomePop2 allows the forward simulation of sequences of biallelic positions. As in the previous version, a number of evolutionary&lt;br /&gt;
and demographic settings are allowed. Several populations under any migration model can be implemented. Each population consists&lt;br /&gt;
of a number N of individuals.  Each individual is represented by one (haploids) or two (diploids) chromosomes with constant or variable&lt;br /&gt;
(hotspots) recombination between binary sites. The fitness model is multiplicative, with each derived allele having a multiplicative effect&lt;br /&gt;
of (1-s * h-E) on the global fitness value. By default E=0 and h=0.5 in diploids, but h=1 in homozygotes or in haploids. Selective nucleotide&lt;br /&gt;
sites undergoing directional selection (positive or negative) in different populations can be defined. In addition, bottleneck and/or&lt;br /&gt;
population expansion scenarios can be set up by the user for a desired number of generations. Several runs can be executed and&lt;br /&gt;
a sample of user-defined size is obtained for each run and population.  For more detail on how to use GenomePop2, please visit the&lt;br /&gt;
web site here [http://webs.uvigo.es/acraaj/GenomePop2.htm]. More information about our installation can be found here [[GENOMEPOP2]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;HUMAnN2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
HUMAnN is a pipeline for efficiently and accurately profiling the presence/absence and abundance of microbial pathways in a community from metagenomic or metatranscriptomic sequencing data (typically millions of short DNA/RNA reads). HUMAnN2 is the next generation of HUMAnN (HMP Unified Metabolic Analysis Network). Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/humann2&lt;br /&gt;
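As a sketch of how a HUMAnN2 job might be submitted in batch, following the SLURM pattern used elsewhere on this page (the module name &quot;humann2&quot;, the input file name, and the resource numbers are assumptions; adapt them to the actual installation):

```shell
#!/bin/bash
#SBATCH --partition production
#SBATCH --job-name humann2_demo
#SBATCH --nodes=1
#SBATCH --ntasks=1

# You must explicitly change to the working directory in SLURM
cd $SLURM_SUBMIT_DIR

# Load the HUMAnN2 module (name assumed; check `module avail`)
module load humann2

# Profile one metagenomic sample; demo.fastq is a placeholder input
humann2 --input demo.fastq --output demo_output
```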
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;IMa2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The IMa2 application performs ‘Isolation with Migration’ calculations using Bayesian inference and Markov &lt;br /&gt;
chain Monte Carlo methods. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The only major conceptual addition that distinguishes IMa2 from the&lt;br /&gt;
original IMa program is that it can handle data from multiple populations. This requires that the user &lt;br /&gt;
specify a phylogenetic tree. Importantly, the tree must be rooted, and the sequence in time of internal&lt;br /&gt;
nodes must be known and specified. More information on the IMa2 and IMa can be found in the user&lt;br /&gt;
manual here [http://lifesci.rutgers.edu/%7Eheylab/ProgramsandData/Programs/IMa2/Using_IMa2_8_24_2011.pdf].&lt;br /&gt;
Information about our installation can be found here [[IMA2]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;I-TASSER&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
I-TASSER is a platform for protein structure and function predictions. 3D models are built based on multiple-threading alignments by LOMETS and iterative template fragment assembly simulations; function insights are derived by matching the 3D models with the BioLiP protein function database. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/itasser&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;LAMARC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
LAMARC is a program which estimates population-genetic parameters such as population size, population growth rate,&lt;br /&gt;
recombination rate, and migration rates.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It approximates a summation over all possible genealogies that could explain&lt;br /&gt;
the observed sample, which may be sequence, SNP, microsatellite, or electrophoretic data.  LAMARC and its sister program&lt;br /&gt;
MIGRATE are successor programs to the older programs Coalesce, Fluctuate, and Recombine, which are no longer being&lt;br /&gt;
supported.  These programs are memory-intensive, but can run effectively on workstations. They are supported on a variety&lt;br /&gt;
of operating systems.  For more detail on LAMARC please visit the website here [http://evolution.genetics.washington.edu/lamarc/index.html],&lt;br /&gt;
read this paper [http://evolution.genetics.washington.edu/lamarc/download/bioinformatics2006-lamarc2.0.pdf], and look&lt;br /&gt;
at the documentation here [http://evolution.genetics.washington.edu/lamarc/documentation/index.html]. More information&lt;br /&gt;
about our installation can be found here [[LAMARC]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;QIIME&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
QIIME (pronounced &amp;quot;chime&amp;quot;) stands for Quantitative Insights Into Microbial Ecology. QIIME is a pipeline application that uses numerous third-party applications.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
QIIME takes users from their raw sequencing output through initial analyses such as OTU picking, taxonomic assignment, and construction of phylogenetic trees from representative sequences of OTUs, and through downstream statistical analysis, visualization, and production of publication-quality graphics.&lt;br /&gt;
More information about our installation can be found here [[QIIME]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;USEARCH&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
USEARCH is a unique sequence analysis tool with thousands of users world-wide.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
USEARCH offers search and clustering algorithms that are often orders of magnitude faster than BLAST. &lt;br /&gt;
More information about our installation can be found here [[USEARCH]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;VELVET&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Velvet is a set of algorithms for &#039;&#039;de novo&#039;&#039; short read assembly using de Bruijn graphs. It was developed at the European Bioinformatics Institute, Cambridge, UK. More information about our installation can be found here [[VELVET]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;VSEARCH&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
VSEARCH is an open-source alternative to USEARCH.  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
VSEARCH stands for vectorized search, as the tool takes advantage of parallelism in the form of SIMD vectorization as well as multiple threads to perform accurate alignments at high speed. VSEARCH uses an optimal global aligner (full dynamic programming Needleman-Wunsch), in contrast to USEARCH which by default uses a heuristic seed and extend aligner. This usually results in more accurate alignments and overall improved sensitivity (recall) with VSEARCH, especially for alignments with gaps. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Additional details on VSEARCH can be found at: [https://github.com/torognes/vsearch this link]&lt;br /&gt;
&lt;br /&gt;
VSEARCH is installed on the Penzias HPC cluster. To start using VSEARCH, load the corresponding module first:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load vsearch  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
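After loading the module, a typical clustering run might look like the following sketch (the 97% identity threshold and all file names are illustrative placeholders, not part of our installation):

```shell
# Cluster sequences at 97% identity; reads.fasta is a placeholder input.
# --cluster_size processes sequences in order of decreasing abundance,
# writing one representative per cluster to centroids.fasta and a
# cluster membership table to clusters.uc.
vsearch --cluster_size reads.fasta \
        --id 0.97 \
        --centroids centroids.fasta \
        --uc clusters.uc
```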
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Math, Engineering, Computer Science== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;ADCIRC&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ADCIRC is a system of programs for solving time-dependent, free-surface, circulation and transport problems in two and three dimensions.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;  These programs utilize the finite element method in space, allowing the use of highly flexible, unstructured grids. The ADCIRC distribution includes and integrates the METIS tool for partitioning its unstructured grids. In addition, ADCIRC includes a distribution of SWAN, to which it can be coupled to add a nearshore wave simulation model. Typical ADCIRC applications have included: (i) modeling tides and wind-driven circulation, (ii) analysis of hurricane storm surge and flooding, (iii) dredging feasibility and material disposal studies, (iv) larval transport studies, and (v) nearshore marine operations. For more detail on using ADCIRC, please visit the ADCIRC website here [http://adcirc.org/index.html] and read the ADCIRC manual [http://adcirc.org/documentv49/ADCIRC_title_page.html]. Details on using SWAN with ADCIRC can be found here [http://www.caseydietrich.com/swanadcirc] and at the SWAN web site [http://swanmodel.sourceforge.net]. More information about use and set-up can be found here [[ADCIRC]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;FDPPDIV&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
FDPPDiv is a program for estimating divergence times on a fixed, rooted tree topology. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
FDPPDiv offers two alternative approaches to divergence time estimation. &lt;br /&gt;
The DPPDiv part refers to the Dirichlet Process Prior (DPP) model for divergence &lt;br /&gt;
time estimation, and the F prefix (for Fossil) refers to the new Fossil Birth-Death approach. &lt;br /&gt;
More information about our installation can be found here [[FDPPDIV]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GAUSS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
An easy-to-use data analysis, mathematical and statistical environment based on the powerful, fast and efficient GAUSS Matrix Programming Language.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GAUSS is used to solve real world problems and data analysis problems of exceptionally large &lt;br /&gt;
scale. GAUSS is currently available on ANDY. At the CUNY HPC Center&lt;br /&gt;
GAUSS is typically run in serial mode. (Note:  GAUSS should not be confused with the&lt;br /&gt;
computational chemistry application Gaussian.) More information about our installation can &lt;br /&gt;
be found here [[GAUSS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Hapsembler&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Hapsembler is a haplotype-specific genome assembly toolkit that is designed for genomes that are rich in SNPs and other types of polymorphism. Hapsembler can be used to assemble reads from a variety of platforms including Illumina and Roche/454.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;  Hapsembler is currently installed on the Appel system. To access the Hapsembler binaries, load the hapsembler module with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load hapsembler&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;HOPSPACK&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
HOPSPACK stands for Hybrid Optimization Parallel Search Package, a framework designed to help users solve a wide range of derivative-free optimization problems.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;The latter can be noisy, non-convex, or non-smooth. The basic optimization problem addressed is to minimize an objective function f(x) of n unknowns, subject to the constraints A_I x ≥ b_I, A_E x = b_E, c_I(x) ≥ 0, c_E(x) = 0, and l ≤ x ≤ u.&lt;br /&gt;
The first two constraints specify linear inequalities and equalities with coefficient matrices A_I and A_E. The next two constraints describe nonlinear inequalities and equalities captured in the functions c_I(x) and c_E(x). The final constraint denotes lower and upper bounds on the variables. HOPSPACK allows variables to be continuous or integer-valued and has provisions for multi-objective optimization problems. In general, the functions f(x), c_I(x), and c_E(x) can be noisy and nonsmooth, although most algorithms perform best on deterministic functions with continuous derivatives.&lt;br /&gt;
&lt;br /&gt;
Users can design and implement their own solver, either by writing their own code or by building on existing solvers already in the framework. Because all solvers (called citizens) are members of the same global class, they can share assigned resources.   &lt;br /&gt;
The main features of the package are:&lt;br /&gt;
&lt;br /&gt;
-	Only function values are required for the optimization.&lt;br /&gt;
-	The user must provide a separate program that can evaluate the objective and nonlinear constraint functions at a given point. &lt;br /&gt;
-	A robust implementation of the Generating Set Search (GSS) solver is supplied, including the capability to handle linear constraints. &lt;br /&gt;
-	Multiple solvers can run simultaneously and are easily configured to share information.&lt;br /&gt;
-	Solvers may share a cache of computed function and constraint evaluations to eliminate duplicate work.&lt;br /&gt;
-	Solvers can initiate and control sub-problems.&lt;br /&gt;
Continuation -&amp;gt; [[HOPSACK]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;LS-DYNA&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From its early development in the 1970s, LS-DYNA has evolved into a general purpose material&lt;br /&gt;
stress, collision, and crash analysis program with many built-in material and structural element&lt;br /&gt;
models. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In recent years, the code has also been adapted for both OpenMP and MPI parallel execution&lt;br /&gt;
on a variety of platforms.  The most recent version, LS-DYNA 7.1.2, is installed on &lt;br /&gt;
ANDY at the CUNY HPC Center under an academic license held by the City College of New York.&lt;br /&gt;
The use of this license to do work that is commercial in any way is prohibited.&lt;br /&gt;
&lt;br /&gt;
Details on LS-DYNA&#039;s use, input deck construction, and execution options can be found in the LS-DYNA&lt;br /&gt;
manual here [http://ftp.lstc.com/user/manuals/ls-dyna_971_manual_k_rev1.pdf]. All files related&lt;br /&gt;
to the HPC Center installation of version 971 (executables and example inputs) are located in:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
/share/apps/lsdyna/default/[bin,examples]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[LSDYNA]].&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Network Simulator-2 (NS2)&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NS2 is a discrete event simulator targeted at networking research. NS2 provides&lt;br /&gt;
substantial support for simulation of TCP, routing, and multicast protocols over&lt;br /&gt;
wired and wireless (local and satellite) networks.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It is installed on BOB at the CUNY HPC Center. For more detailed information, look [http://www.isi.edu/nsnam/ns/ here].&lt;br /&gt;
More information about our installation can be found here [[NS2]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;OpenFOAM&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenFOAM is, first and foremost, a library that users may incorporate into their own code(s). OpenFOAM is installed on PENZIAS.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
More information about our installation can be found here [[OpenFOAM]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;OpenSEES&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenSEES, the Open System for Earthquake Engineering Simulation, is an object-oriented, open source software framework.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It allows users to create both serial and parallel finite element computer applications for simulating the response of structural and geotechnical systems subjected to earthquakes and other hazards. OpenSEES is primarily written in C++ and uses several Fortran and C numerical libraries for linear equation solving, and material and element routines. The software is installed on PENZIAS.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;ParGAP&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ParGAP is built on top of the GAP system. The latter is a system for computational discrete algebra, with particular emphasis on Computational Group Theory. GAP provides a programming language, a library of thousands of functions implementing algebraic algorithms written in the GAP language, as well as large data libraries of algebraic objects.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The ParGAP (Parallel GAP) package itself provides a way of writing parallel programs using the GAP language. Former names of the package were ParGAP/MPI and GAP/MPI; the word MPI refers to Message Passing Interface, a well-known standard for parallelism. ParGAP is based on the MPI standard, and this distribution includes a subset implementation of MPI, to provide a portable layer with a high level interface to BSD sockets.&lt;br /&gt;
More information about our installation can be found here [[ParGAP]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;SAGE&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Sage can be used to study elementary and advanced, pure and applied mathematics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
This includes a huge range of mathematics, including basic algebra, calculus, elementary to very&lt;br /&gt;
advanced number theory, cryptography, numerical computation, commutative algebra, group&lt;br /&gt;
theory, combinatorics, graph theory, exact linear algebra and much more.&lt;br /&gt;
More information about our installation can be found here [[SAGE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;WRF&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The Weather Research and Forecasting (WRF) model is a specific computer program with dual use for both weather&lt;br /&gt;
forecasting and weather research.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It was created through a partnership that includes the National Oceanic and Atmospheric&lt;br /&gt;
Administration (NOAA), the National Center for Atmospheric Research (NCAR), and more than 150 other organizations&lt;br /&gt;
and universities in the United States and abroad. WRF is the latest numerical model and application to be adopted by NOAA&#039;s&lt;br /&gt;
National Weather Service as well as the U.S. military and private meteorological services. It is also being adopted by&lt;br /&gt;
government and private meteorological services worldwide. More information about our installation can be found here [[WRF]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Economics, Business, Statistics, Analytics==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;R&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
R is a free software environment for statistical computing and graphics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:15px;&amp;quot;&amp;gt;General Notes&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The R language has become a de facto standard among statisticians and is widely used for statistical software development and data analysis. R is available on the following HPC Center servers: Karle, Penzias, Appel, and Andy. Karle is the only machine where R can be used without submitting jobs to the SLURM manager; on all other systems, users must submit their R jobs via the SLURM batch scheduler.&lt;br /&gt;
More information about our installation can be found here [[R]]&lt;br /&gt;
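A minimal SLURM batch script for a serial R job might look like the following sketch (the module name, script name, and resource numbers are placeholders; adapt them to the actual installation):

```shell
#!/bin/bash
#SBATCH --partition production
#SBATCH --job-name r_demo
#SBATCH --nodes=1
#SBATCH --ntasks=1

# You must explicitly change to the working directory in SLURM
cd $SLURM_SUBMIT_DIR

# Load the R module (exact name may differ; check `module avail`)
module load r

# Run the analysis script non-interactively, capturing its output
Rscript analysis.R > analysis.out
```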
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;R-devel&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
R is a language and environment for statistical computing and graphics. R-devel provides both core R userspace and all R development components.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Stata/MP&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Stata is a complete, integrated statistical package that provides tools for data analysis, data management, and graphics. Stata/MP takes advantage of multiprocessor computers. The CUNY HPC Center is licensed to run Stata on up to 8 cores. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Currently Stata/MP is available for users on Karle (karle.csi.cuny.edu). &lt;br /&gt;
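Since Karle does not require the batch scheduler, Stata/MP can be run directly in batch mode; a minimal sketch (the do-file name is a placeholder) is:

```shell
# Run a do-file non-interactively; Stata writes a log file
# (example.log) to the current directory.
stata-mp -b do example.do
```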
More information about our installation can be found here [[STATA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;SAS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
SAS (pronounced &amp;quot;sass&amp;quot;, originally Statistical Analysis System) is an integrated system of software products provided by SAS Institute Inc.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It enables the programmer to perform:&lt;br /&gt;
:*data entry, retrieval, management, and mining&lt;br /&gt;
:*report writing and graphics&lt;br /&gt;
:*statistical analysis&lt;br /&gt;
:*business planning, forecasting, and decision support&lt;br /&gt;
:*operations research and project management&lt;br /&gt;
:*quality improvement&lt;br /&gt;
:*applications development&lt;br /&gt;
:* data warehousing (extract, transform, load)&lt;br /&gt;
:*platform independent and remote computing&lt;br /&gt;
More information about our installation can be found here [[SAS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==General Development Systems==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Coming soon.&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==Tools, Libraries, Compilers==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;CGAL&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The Computational Geometry Algorithms Library (CGAL) offers data structures and algorithms.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt; &lt;br /&gt;
Examples of these are triangulations (2D constrained triangulations, and Delaunay triangulations and periodic triangulations in &lt;br /&gt;
2D and 3D), Voronoi diagrams (for 2D and 3D points, 2D additively weighted Voronoi diagrams, and segment Voronoi diagrams), polygons &lt;br /&gt;
(Boolean operations, offsets, straight skeleton), polyhedra (Boolean operations), arrangements of curves and their applications &lt;br /&gt;
(2D and 3D envelopes, Minkowski sums), mesh generation (2D Delaunay mesh generation and 3D surface and volume mesh &lt;br /&gt;
generation, skin surfaces), geometry processing (surface mesh simplification, subdivision and parameterization, as well as &lt;br /&gt;
estimation of local differential properties, and approximation of ridges and umbilics), alpha shapes, convex hull &lt;br /&gt;
algorithms (in 2D, 3D and dD), search structures (kd trees for nearest neighbor search, and range and segment trees), &lt;br /&gt;
interpolation (natural neighbor interpolation and placement of streamlines), shape analysis, fitting, and distances &lt;br /&gt;
(smallest enclosing sphere of points or spheres, smallest enclosing ellipsoid of points, principal component analysis), and &lt;br /&gt;
kinetic data structures.&lt;br /&gt;
&lt;br /&gt;
The library is installed on PENZIAS.&lt;br /&gt;
&lt;br /&gt;
More information can be found here http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/CGAL. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GMP&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
GMP is a library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and &lt;br /&gt;
floating-point numbers. There is no practical limit to the precision except the ones implied by the &lt;br /&gt;
available memory in the machine GMP runs on. GMP has a rich set of functions, and the functions have a &lt;br /&gt;
regular interface. The library is installed on PENZIAS.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Gnuplot&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Gnuplot is a portable command-line driven graphing utility. It is installed on the following systems:&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
:*Karle under /usr/bin/gnuplot&lt;br /&gt;
:*Andy under /share/apps/gnuplot/default/bin/gnuplot&lt;br /&gt;
&lt;br /&gt;
Extensive documentation of gnuplot is available at the [http://www.gnuplot.info/ gnuplot&#039;s homepage].&lt;br /&gt;
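As a minimal illustration, the following shell snippet drives gnuplot non-interactively to render a function to a PNG file (the output file name and the plotted function are arbitrary examples):

```shell
# Feed a small script to gnuplot on stdin; produces sine.png
# in the current directory.
gnuplot <<'EOF'
set terminal png size 640,480
set output "sine.png"
set xlabel "x"
set ylabel "sin(x)"
plot sin(x) with lines title "sin(x)"
EOF
```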
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;JULIA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Julia is a high-level, high-performance dynamic programming language for technical computing, with syntax that is familiar to users of other technical computing environments. Julia is installed on Penzias.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MAGMA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
MAGMA is a library similar to LAPACK but for hybrid architectures. MAGMA provides implementations for CUDA, Intel Xeon Phi, and OpenCL. &lt;br /&gt;
On CUNY HPCC systems, MAGMA is installed in its CUDA variant only on Penzias.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MATHEMATICA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
“Mathematica” is a fully integrated technical computing system that combines fast, high-precision numerical and symbolic computation with data visualization and programming capabilities. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Mathematica is currently installed on the CUNY HPC Center&#039;s ANDY cluster (andy.csi.cuny.edu) and KARLE standalone server (karle.csi.cuny.edu). The basics of running Mathematica on CUNY HPC systems are presented here.  Additional information on how to use Mathematica can be found at http://www.wolfram.com/learningcenter/&lt;br /&gt;
&lt;br /&gt;
More information is available in this wiki, find it here [[MATHEMATICA]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MATLAB&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The MATLAB high-performance language for technical computing&lt;br /&gt;
integrates computation, visualization, and programming in an&lt;br /&gt;
easy-to-use environment where problems and solutions are expressed in&lt;br /&gt;
familiar mathematical notation.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Typical uses include:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Math and computation&lt;br /&gt;
&lt;br /&gt;
Algorithm development&lt;br /&gt;
&lt;br /&gt;
Data acquisition&lt;br /&gt;
&lt;br /&gt;
Modeling, simulation, and prototyping&lt;br /&gt;
&lt;br /&gt;
Data analysis, exploration, and visualization&lt;br /&gt;
&lt;br /&gt;
Scientific and engineering graphics&lt;br /&gt;
&lt;br /&gt;
Application development, including graphical user interface building&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[MATLAB]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MET (Model Evaluation Tools)&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MET was developed by the National Center for Atmospheric Research (NCAR) Developmental Testbed Center (DTC) through the generous support of the U.S. Air Force Weather Agency (AFWA) and the National Oceanic and Atmospheric Administration (NOAA).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;MET is designed to be a highly-configurable, state-of-the-art suite of verification tools. It was developed using output from the Weather Research and Forecasting (WRF) modeling system but may be applied to the output of other modeling systems as well.&lt;br /&gt;
&lt;br /&gt;
MET provides a variety of verification techniques, including:&lt;br /&gt;
&lt;br /&gt;
*Standard verification scores comparing gridded model data to point-based observations&lt;br /&gt;
*Standard verification scores comparing gridded model data to gridded observations&lt;br /&gt;
*Spatial verification methods comparing gridded model data to gridded observations using neighborhood, object-based, and intensity-scale decomposition approaches&lt;br /&gt;
*Probabilistic verification methods comparing gridded model data to point-based or gridded observations&lt;br /&gt;
&lt;br /&gt;
More information about use and set-up can be found here [[MET]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Migrate&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Migrate estimates population parameters, effective population sizes and migration rates&lt;br /&gt;
of n populations, using genetic data.  It uses a coalescent theory approach taking into&lt;br /&gt;
account the history of mutations and the uncertainty of the genealogy.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;The estimates of the parameter values are achieved by either a Maximum likelihood (ML) approach or Bayesian&lt;br /&gt;
inference (BI).  Migrate&#039;s output is presented in a text file and in a PDF file. The PDF file&lt;br /&gt;
eventually will contain all possible analyses including histograms of posterior distributions.&lt;br /&gt;
More information about our installation can be found here [[MIGRATE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Python&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Python is a programming language that lets you work more quickly and integrate your systems more effectively. You can learn to use Python and see almost immediate gains in productivity and lower maintenance costs. [http://www.python.org/]&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
There are two supported versions installed on the Andy system: &lt;br /&gt;
&lt;br /&gt;
*Python 3.1.3 located under /share/apps/python/3.1.3/bin&lt;br /&gt;
*Python 2.7.3 located under /share/apps/epd/7.3-2/bin&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[PYTHON]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;SAMTOOLS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
SAMTOOLS provides various utilities for manipulating alignments in the SAM format, including sorting,&lt;br /&gt;
merging, indexing and generating alignments in a per-position format.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
SAM (Sequence Alignment/Map) is a generic format for storing large nucleotide sequence alignments.  SAM is a compact format&lt;br /&gt;
that aims to:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Is flexible enough to store all the alignment information generated by various alignment programs;&lt;br /&gt;
&lt;br /&gt;
Is simple enough to be easily generated by alignment programs or converted from existing formats;&lt;br /&gt;
&lt;br /&gt;
Allows most operations on the alignment to work without loading the whole alignment into memory;&lt;br /&gt;
&lt;br /&gt;
Allows the file to be indexed by genomic position to efficiently retrieve all reads aligning to a locus.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[SAMTOOLS]]&lt;br /&gt;
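As a concrete illustration of the format (the record below is invented; the field names are the eleven mandatory columns defined by the SAM specification), a minimal parser is a few lines of Python:

```python
# Parse one SAM alignment line into its 11 mandatory tab-separated fields.
FIELDS = ["QNAME", "FLAG", "RNAME", "POS", "MAPQ",
          "CIGAR", "RNEXT", "PNEXT", "TLEN", "SEQ", "QUAL"]

def parse_sam_line(line):
    cols = line.rstrip("\n").split("\t")
    rec = dict(zip(FIELDS, cols[:11]))
    # FLAG, POS, MAPQ, PNEXT and TLEN are integer-valued fields.
    for key in ("FLAG", "POS", "MAPQ", "PNEXT", "TLEN"):
        rec[key] = int(rec[key])
    return rec

example = "read1\t0\tchr1\t100\t60\t8M\t*\t0\t0\tACGTACGT\tFFFFFFFF"
rec = parse_sam_line(example)
print(rec["RNAME"], rec["POS"], rec["CIGAR"])
```

Real pipelines should of course use samtools (or a library such as pysam) rather than hand-rolled parsing; this only shows how simple the layout is.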
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Thrust Library (CUDA)&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Thrust is a C++ template library for CUDA based on the Standard Template Library (STL). Thrust allows you&lt;br /&gt;
to implement high performance parallel applications with minimal programming effort through a high-level&lt;br /&gt;
interface that is fully interoperable with CUDA C.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Thrust is now integrated into the default&lt;br /&gt;
CUDA distribution. The HPC Center currently runs CUDA as the default on PENZIAS, which includes the &lt;br /&gt;
Thrust library. More information about our installation can be found here [[THRUST]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Xmgrace&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Grace is a WYSIWYG 2D plotting tool for the X Window System and M*tif. Xmgrace is developed at the Plasma Laboratory, Weizmann Institute of Science. More information about its capabilities can be found at the web page http://plasma-gate.weizmann.ac.il/Grace/&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Grace is installed on Karle. To use it from the command line, log in to Karle as usual and start Grace by typing &amp;quot;xmgrace&amp;quot; followed by return, or alternatively use the full path: &amp;quot;/share/apps/xmgrace/default/grace/bin/xmgrace&amp;quot;.&lt;br /&gt;
To use Grace in GUI mode, log in to Karle with the -X option (see [http://wiki.csi.cuny.edu/cunyhpc/index.php/Main_Page#X11_Forwarding_or_Tunneling this article] &lt;br /&gt;
for details) and start Xmgrace as described above.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Alphabetical List ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==A== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;ADCIRC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ADCIRC is a system of programs for solving time-dependent, free-surface, circulation and transport problems in two and three dimensions.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;  These programs utilize the finite element method in space allowing the use of highly flexible, unstructured grids. The ADCIRC distribution includes and integrates the METIS tool for unstructured grid generation. In addition, ADCIRC includes a distribution of SWAN to which it can be coupled to add a shore wave simulation model. Typical ADCIRC applications have included: (i) modeling tides and wind driven circulation, (ii) analysis of hurricane storm surge and flooding, (iii) dredging feasibility and material disposal studies, (iv) larval transport studies, (v) near shore marine operations. For more detail on using ADCIRC, please visit the ADCIRC website here [http://adcirc.org/index.html] and read the ADCIRC manual [http://adcirc.org/documentv49/ADCIRC_title_page.html]. Details on using SWAN with ADCIRC can be found here [http://www.caseydietrich.com/swanadcirc] and at the SWAN web site [http://swanmodel.sourceforge.net]. More information about use and set-up can be found here [[ADCIRC]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;AMBER (Assisted Model Building with Energy Refinement)&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Amber is the collective name for a suite of programs for classical bio-molecular simulations. &lt;br /&gt;
The name &amp;quot;Amber&amp;quot; also denotes the family of potentials (force fields) used with Amber &lt;br /&gt;
software. Here we discuss only simulation packages, but not the force fields or free tools&lt;br /&gt;
available via AmberTools package. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/amber&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;ANVIO&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Anvio is an analysis and visualization platform for genomics data. Anvio allows various types of workflows to be &lt;br /&gt;
established. More information can be found here [[ANVIO]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;AUGUSTUS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
AUGUSTUS is a program that predicts genes in eukaryotic genomic sequences. Augustus is gene-finding software based on Hidden Markov Models (HMMs), &lt;br /&gt;
described in papers by Stanke and Waack (2003), Stanke et al. (2006, 2006b), and Stanke et al. (2008). The local version of the program is installed on &lt;br /&gt;
Penzias. More information can be found here: [[AUGUSTUS]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;AUTODOCK&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
AutoDock is a suite of automated docking tools.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt; It is designed to predict how small molecules, such as substrates or drug candidates, bind to a receptor of known 3D structure.  AutoDock actually consists of two main programs: &#039;&#039;autodock&#039;&#039; itself performs the docking of the ligand to a set of grids describing the target protein; &#039;&#039;autogrid&#039;&#039; pre-calculates these grids. More information about the software may be found at the autodock web-page [http://autodock.scripps.edu/]. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/autodock&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==B== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BAMOVA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Bamova is a package used to do genetic analysis of a wide range of organisms on the basis of &lt;br /&gt;
next-generation sequence data. The software implements Bayesian Analysis of Molecular Variance and &lt;br /&gt;
different likelihood models for three different types of molecular data &lt;br /&gt;
(including two models for high throughput sequence data). For more detail on BAMOVA please visit the BAMOVA web site [http://www.uwyo.edu/buerkle/software/bamova] and manual &lt;br /&gt;
here [http://www.uwyo.edu/buerkle/software/bamova/bamova_manual_1.0.pdf]. Further information can also be found here [[BAMOVA]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BAYESCAN&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BAYESCAN is a population genomics software package.  It identifies outlier loci and is applicable &lt;br /&gt;
to both dominant and codominant data. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;BayeScan aims to identify candidate loci under natural selection from genetic data, using differences in allele frequencies between populations.  BayeScan is based on the multinomial-Dirichlet model.  One of the scenarios covered consists of an island model in which subpopulation allele frequencies are correlated through a common migrant gene pool from which they differ in varying degrees.  The difference in allele frequency between this common gene pool and each subpopulation is measured by a subpopulation-&lt;br /&gt;
specific FST coefficient.  Therefore, this formulation can consider realistic ecological scenarios where the effective size and the immigration rate may differ among subpopulations.&lt;br /&gt;
More detailed information on Bayescan can be found at the web site here [http://cmpg.unibe.ch/software/bayescan/index.html] and in the manual here [http://cmpg.unibe.ch/software/bayescan/files/BayeScan2.1_manual.pdf]. More information about our installation can be found here [[BAYESCAN]].&lt;br /&gt;
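As a toy illustration of the FST idea behind such analyses (this is Wright's classical FST for one biallelic locus, not BayeScan's multinomial-Dirichlet model), FST can be computed from subpopulation allele frequencies as (HT − HS)/HT:

```python
def fst(freqs):
    """Wright's FST for one biallelic locus.
    freqs: allele frequency p in each subpopulation (equal sizes assumed)."""
    n = len(freqs)
    # HS: mean expected heterozygosity within subpopulations
    hs = sum(2 * p * (1 - p) for p in freqs) / n
    # HT: expected heterozygosity of the pooled gene pool
    p_bar = sum(freqs) / n
    ht = 2 * p_bar * (1 - p_bar)
    return (ht - hs) / ht

# Strongly differentiated pair of subpopulations:
print(round(fst([0.2, 0.8]), 3))   # prints 0.36
```

Loci whose FST is an outlier relative to the genome-wide distribution are the candidates for selection that BayeScan flags.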
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BEAST&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BEAST is a powerful and flexible evolutionary analysis package for molecular sequence variation. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The package implements a family of Markov chain Monte Carlo (MCMC) algorithms for Bayesian phylogenetic inference, divergence time dating, coalescent analysis, phylogeography and related molecular evolutionary analyses. It is a cross-platform Java program for Bayesian MCMC analysis of molecular sequences. It is entirely orientated towards rooted, time-measured phylogenies inferred using strict or relaxed molecular clock models. It can be used as a method of reconstructing phylogenies, but is also a framework for testing evolutionary hypotheses without conditioning on a single tree topology.  BEAST uses MCMC to average over tree space, so that each tree is weighted proportionally to its posterior probability. The distribution includes a simple-to-use user-interface program called &#039;BEAUti&#039; for setting up standard analyses and a suite of programs for analysing the results. For more detail on BEAST (and BEAUTi) please visit the BEAST web site [http://beast.bio.ed.ac.uk/Main_Page]. More information about our installation can be found here [http://wiki.csi.cuny.edu/cunyhpc/index.php/Template:BEAST BEAST].&lt;br /&gt;
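The core MCMC principle described above, visiting states with frequency proportional to their posterior probability, can be sketched with a toy Metropolis sampler over three hypothetical "topologies" (illustration only; BEAST's tree-space proposal machinery is far more sophisticated):

```python
import random

# Unnormalized posterior weights for three hypothetical tree topologies.
posterior = {"A": 1.0, "B": 2.0, "C": 7.0}

def metropolis(steps, seed=0):
    """Metropolis sampler with a symmetric uniform proposal."""
    rng = random.Random(seed)
    states = list(posterior)
    current = "A"
    counts = {s: 0 for s in states}
    for _ in range(steps):
        proposal = rng.choice(states)
        # Accept with probability min(1, posterior ratio).
        if rng.random() < posterior[proposal] / posterior[current]:
            current = proposal
        counts[current] += 1
    return counts

counts = metropolis(100_000)
# Visit frequencies approximate the normalized posterior 0.1 : 0.2 : 0.7.
print(counts)
```

Averaging any quantity over the visited states is then equivalent to averaging it over the posterior, which is exactly how BEAST averages over tree space.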
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BEST&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BEST is an application aimed to estimate gene trees and the species tree from multilocus sequences.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The program uses information from multiple gene trees and performs a Bayesian analysis to estimate the &lt;br /&gt;
topology of the species tree, divergence times and population sizes.  &lt;br /&gt;
&lt;br /&gt;
It provides a new approach for estimating the mutation-rate-based&lt;br /&gt;
phylogenetic relationships among species.  Its method accounts for deep coalescence,&lt;br /&gt;
but not for other complicating issues such as horizontal transfer or gene duplication. The&lt;br /&gt;
program works in conjunction with the popular Bayesian phylogenetics package, MrBayes&lt;br /&gt;
(Ronquist and Huelsenbeck, Bioinformatics, 2003).  BEST&#039;s parameters are defined using&lt;br /&gt;
the &#039;prset&#039; command from MrBayes.  Details on BEST&#039;s capabilities and options are available&lt;br /&gt;
at the BEST web site here [http://www.stat.osu.edu/~dkp/BEST/introduction]. More information&lt;br /&gt;
about our installation is available here [[BEST]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BOWTIE2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BOWTIE2 is an ultrafast and memory-efficient tool for aligning sequencing reads to long reference sequences. It is particularly good at aligning reads of about 50 up to 100s or 1,000s of characters, and particularly good at aligning to relatively long (e.g. mammalian) genomes.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
BOWTIE2 indexes the genome with an FM Index to keep its memory&lt;br /&gt;
footprint small: for the human genome, its memory footprint is typically around 3.2 GB. BOWTIE2 supports gapped,&lt;br /&gt;
local, and paired-end alignment modes. BOWTIE2 is part of a sequence alignment and analysis tool chain developed&lt;br /&gt;
at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics&lt;br /&gt;
and Computational Biology.  The other tools in this collection, CUFFLINKS, SAMTOOLS, and TOPHAT, are also installed at&lt;br /&gt;
the CUNY HPC Center. Additional information can be found at the BOWTIE2 home page here [http://bowtie-bio.sourceforge.net/bowtie2/index.shtml].&lt;br /&gt;
Information about our installation can be found here [[BOWTIE2]].&lt;br /&gt;
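The FM-index/BWT idea behind that small memory footprint can be shown at toy scale: build the Burrows-Wheeler transform of a text and count pattern occurrences with backward search (a teaching sketch only, nothing like Bowtie2's engineered implementation):

```python
def bwt(text):
    """Burrows-Wheeler transform via sorted rotations ('$' is the sentinel)."""
    text += "$"
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(rot[-1] for rot in rotations)

def count_occurrences(text, pattern):
    """Count pattern occurrences by FM-index backward search (naive ranks)."""
    last = bwt(text)
    first = "".join(sorted(last))

    def rank(c, i):
        # occurrences of c in last[:i]; real indexes precompute this
        return last[:i].count(c)

    lo, hi = 0, len(last)
    for c in reversed(pattern):
        start = first.find(c)   # index of first c in the sorted column
        if start < 0:
            return 0
        lo = start + rank(c, lo)
        hi = start + rank(c, hi)
        if lo >= hi:
            return 0
    return hi - lo

print(count_occurrences("ACGTACGTAC", "AC"))   # prints 3
```

Bowtie2 adds sampled rank structures, compression, and gapped/paired-end extensions on top of this core idea, which is why a whole human genome index fits in a few gigabytes.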
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BPP2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BPP2 uses a Bayesian modeling approach to generate the posterior probabilities of species assignments taking into account uncertainties due to unknown gene trees and the ancestral coalescent process. For tractability, it relies on a user-specified guide tree to avoid integrating over all possible species delimitations.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Additional information can be found at the download site here [http://abacus.gene.ucl.ac.uk/software.html]. More information about our installation can be found here [[BPP2]].&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BROWNIE&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
BROWNIE is a program for analyzing rates of continuous character evolution and looking for substantial rate differences in different parts of a tree using likelihood&lt;br /&gt;
ratio tests and Akaike Information Criterion (AIC) statistics. It now also implements many other methods for examining trait evolution and methods for doing species&lt;br /&gt;
delimitation. More information about our installation can be found here [[BROWNIE]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==C== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;CGAL&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The Computational Geometry Algorithms Library (CGAL) offers geometric data structures and algorithms.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt; &lt;br /&gt;
Examples of these are triangulations (2D constrained triangulations, and Delaunay triangulations and periodic triangulations in &lt;br /&gt;
2D and 3D), Voronoi diagrams (for 2D and 3D points, 2D additively weighted Voronoi diagrams, and segment Voronoi diagrams), polygons &lt;br /&gt;
(Boolean operations, offsets, straight skeleton), polyhedra (Boolean operations), arrangements of curves and their applications &lt;br /&gt;
(2D and 3D envelopes, Minkowski sums), mesh generation (2D Delaunay mesh generation and 3D surface and volume mesh &lt;br /&gt;
generation, skin surfaces), geometry processing (surface mesh simplification, subdivision and parameterization, as well as &lt;br /&gt;
estimation of local differential properties, and approximation of ridges and umbilics), alpha shapes, convex hull &lt;br /&gt;
algorithms (in 2D, 3D and dD), search structures (kd trees for nearest neighbor search, and range and segment trees), &lt;br /&gt;
interpolation (natural neighbor interpolation and placement of streamlines), shape analysis, fitting, and distances &lt;br /&gt;
(smallest enclosing sphere of points or spheres, smallest enclosing ellipsoid of points, principal component analysis), and &lt;br /&gt;
kinetic data structures.&lt;br /&gt;
&lt;br /&gt;
The library is installed on PENZIAS.&lt;br /&gt;
&lt;br /&gt;
More information can be found here http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/CGAL. &lt;br /&gt;
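To make one of the listed algorithm families concrete, here is a 2D convex hull via Andrew's monotone chain in Python (an illustrative standalone sketch; CGAL's own implementations add exact arithmetic and robustness guarantees):

```python
def cross(o, a, b):
    """z-component of (a - o) x (b - o); > 0 means a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices counter-clockwise."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                       # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):             # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]      # endpoints are shared, drop repeats

print(convex_hull([(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)]))
# → [(0, 0), (1, 0), (1, 1), (0, 1)]
```

With floating-point coordinates the `cross` predicate can misclassify nearly collinear points; CGAL's exact-predicate kernels exist precisely to avoid that failure mode.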
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;CONSED&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CONSED is a DNA sequence analysis finishing tool that provides sequence viewing, editing, alignment, and&lt;br /&gt;
assembly capabilities from a X Windows graphical user interface (GUI).  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It makes extensive use of other non-graphical&lt;br /&gt;
and underlying sequence analysis tools including PHRED, PHRAP, and CROSSMATCH that may also be used separately&lt;br /&gt;
and are described elsewhere in this document.  It also includes a viewer called BAMVIEW.  The CONSED tool chain is&lt;br /&gt;
developed and maintained at the University of Washington and is described&lt;br /&gt;
more completely here [http://bozeman.mbt.washington.edu/consed/consed.html]&lt;br /&gt;
CONSED is provided at the CUNY HPC Center under an academic license that allows use, but not the copying or&lt;br /&gt;
outbound transfer of any of the executables or files distributed under this academic license.  The license is not &lt;br /&gt;
transferable in any way and users wishing to run the application at their own site must acquire a license directly&lt;br /&gt;
from the authors.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center supports CONSED version 23.0 for interactive use on KARLE.  CONSED 23.0 and the tool&lt;br /&gt;
chain described above are also installed on ANDY to allow for the batch use of the underlying support tools mentioned above&lt;br /&gt;
and described in detail below.  In general, running GUI-based applications on ANDY&#039;s login node is discouraged.  There&lt;br /&gt;
should be little need to do this: KARLE is on the periphery of the CUNY HPC network, making login there direct, and&lt;br /&gt;
KARLE shares its HOME directory file system with ANDY, making files created on either system immediately available on&lt;br /&gt;
the other.&lt;br /&gt;
&lt;br /&gt;
Rather than rewrite portions of the CONSED manual here, users are directed to the manual&#039;s &amp;quot;Quick Tour&amp;quot; section&lt;br /&gt;
here [http://bozeman.mbt.washington.edu/consed/distributions/README.23.0.txt] and asked to walk through some&lt;br /&gt;
of the exercises after logging into KARLE.  If problems or questions come up, please post them to &amp;quot;hpchelp@csi.cuny.edu&amp;quot;.&lt;br /&gt;
The CONSED 23.0 distribution is installed on KARLE in the following directory:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/share/apps/consed/default&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
All the files in the distribution can be found there.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;CP2K&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CP2K is a program to perform atomistic and molecular simulations of solid state, liquid, molecular, and biological&lt;br /&gt;
systems.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It provides a general framework for different methods, such as density functional theory (DFT) using&lt;br /&gt;
a mixed Gaussian and plane waves approach (GPW) and classical pair and many-body potentials. CP2K provides&lt;br /&gt;
state-of-the-art methods for efficient and accurate atomistic simulations. More information about our installation &lt;br /&gt;
can be found here [[CP2K]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;CUFFLINKS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CUFFLINKS assembles transcripts, estimates their abundances, and tests for differential expression and regulation in&lt;br /&gt;
RNA-Seq samples.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It accepts aligned RNA-Seq reads and assembles the alignments into a parsimonious set of transcripts.&lt;br /&gt;
CUFFLINKS then estimates the relative abundances of these transcripts based on how many reads support each one, taking&lt;br /&gt;
into account biases in library preparation protocols.  CUFFLINKS is part of a sequence alignment and analysis tool chain developed&lt;br /&gt;
at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics&lt;br /&gt;
and Computational Biology.  The other tools in this collection, BOWTIE, SAMTOOLS, and TOPHAT, are also installed at&lt;br /&gt;
the CUNY HPC Center. Additional information can be found at the CUFFLINKS home page here [http://abacus.gene.ucl.ac.uk/software.html].&lt;br /&gt;
More information about our installation can be found here [[CUFFLINKS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==D== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;DL_POLY&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
DL_POLY is a general purpose molecular dynamics simulation package developed at Daresbury Laboratory by W. Smith, T.R. Forester and I.T. Todorov. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Both serial and parallel versions are available. The original package was developed by the Molecular Simulation Group (now part of the Computational Chemistry Group, MSG) at Daresbury Laboratory under the auspices of the Engineering and Physical Sciences Research Council (EPSRC) for the EPSRC&#039;s Collaborative Computational Project for the Computer Simulation of Condensed Phases ( CCP5). Later developments were also supported by the Natural Environment Research Council through the eMinerals project. The package is the property of the Central Laboratory of the Research Councils, UK. More information about our installation and use can be found here [[DL_POLY]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==E== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;ExaML&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ExaML stands for Exascale Maximum Likelihood (ExaML) code for phylogenetic inference using MPI. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The code is installed only on Penzias and implements the popular RAxML search algorithm for maximum likelihood based inference of phylogenetic trees. &lt;br /&gt;
&lt;br /&gt;
It uses a radically new MPI parallelization approach that yields improved parallel efficiency, in particular on partitioned multi-gene or whole-genome datasets.&lt;br /&gt;
&lt;br /&gt;
When using ExaML please cite the following paper:&lt;br /&gt;
&lt;br /&gt;
Alexey M. Kozlov, Andre J. Aberer, Alexandros Stamatakis: &amp;quot;ExaML Version 3: A Tool for Phylogenomic Analyses on Supercomputers.&amp;quot; Bioinformatics (2015) 31 (15): 2577-2579.&lt;br /&gt;
&lt;br /&gt;
It is up to 4 times faster than RAxML-Light [1].&lt;br /&gt;
&lt;br /&gt;
Like RAxML-Light, ExaML also implements checkpointing, SSE3 and AVX vectorization, and memory-saving techniques.&lt;br /&gt;
&lt;br /&gt;
[1] A. Stamatakis, A.J. Aberer, C. Goll, S.A. Smith, S.A. Berger, F. Izquierdo-Carrasco: &amp;quot;RAxML-Light: A Tool for computing TeraByte Phylogenies&amp;quot;, Bioinformatics 2012; doi: 10.1093/bioinformatics/bts309.&lt;br /&gt;
&lt;br /&gt;
The run script for a parallel job is analogous to the one used to run RAxML on Penzias and Andy.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;ExaBayes&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ExaBayes is a software package for Bayesian tree inference. It is particularly suitable for large-scale analyses on computer clusters. It is installed on the PENZIAS server at the HPC Center. &lt;br /&gt;
The installed package is the MPI-parallel version. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Availability:&#039;&#039;&#039; PENZIAS&lt;br /&gt;
&#039;&#039;&#039;Module file:&#039;&#039;&#039; exabayes&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Citation&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
Fredrik Ronquist, Maxim Teslenko, Paul van der Mark, Daniel L Ayres, Aaron Darling, Sebastian Höhna, Bret Larget, Liang Liu, Marc a Suchard, and John P Huelsenbeck. MrBayes 3.2: efficient Bayesian phylogenetic inference and model choice across a large model space. Systematic biology, 61(3):539--42, May 2012.&lt;br /&gt;
&lt;br /&gt;
Alexei J Drummond, Marc a Suchard, Dong Xie, and Andrew Rambaut. Bayesian phylogenetics with BEAUti and the BEAST 1.7. Molecular biology and evolution, 29(8):1969--73, August 2012. &lt;br /&gt;
&lt;br /&gt;
Clemens Lakner, Paul van der Mark, John P Huelsenbeck, Bret Larget, and Fredrik Ronquist. Efficiency of Markov chain Monte Carlo tree proposals in Bayesian phylogenetics. Systematic biology, 57(1):86--103, February 2008. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Use:&#039;&#039;&#039; An example SLURM script to run ExaBayes on PENZIAS is given below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition=production&lt;br /&gt;
#SBATCH --job-name=&amp;lt;name_of_job&amp;gt;&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=2&lt;br /&gt;
&lt;br /&gt;
# Change to the directory the job was submitted from&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
mpirun -np 2 exabayes &amp;lt;input_file&amp;gt; &amp;gt; output_file&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
More information about the application, along with sample workflows, is available on the ExaBayes web site:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://sco.h-its.org/exelixis/web/software/exabayes/manual/index.html#sec-11&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==F== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;FDPPDIV&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
FDPPDiv is a program for estimating divergence times on a fixed, rooted tree topology. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
FDPPDiv offers two alternative approaches to divergence time estimation. &lt;br /&gt;
The DPPDiv part refers to the Dirichlet Process Prior (DPP) model for divergence &lt;br /&gt;
time estimation, and the F prefix (for Fossil) refers to the new Fossil Birth-Death approach. &lt;br /&gt;
More information about our installation can be found here [[FDPPDIV]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==G== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GAMESS-US&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GAMESS is a program for ab initio molecular quantum chemistry.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Briefly, GAMESS can compute SCF wavefunctions including RHF, ROHF, UHF, GVB, and MCSCF. Correlation corrections to these SCF wavefunctions include Configuration Interaction, second-order perturbation theory, and Coupled-Cluster approaches, as well as the Density Functional Theory approximation. Excited states can be computed by CI, EOM, or TD-DFT procedures. Nuclear gradients are available for automatic geometry optimization, transition state searches, or reaction path following. Computation of the energy hessian permits prediction of vibrational frequencies, with IR or Raman intensities. Solvent effects may be modeled by the discrete Effective Fragment potentials, or continuum models such as the Polarizable Continuum Model. Numerous relativistic computations are available, including infinite-order two-component scalar corrections, with various spin-orbit coupling options. The Fragment Molecular Orbital method permits many of these sophisticated treatments to be applied to very large systems by dividing the computation into small fragments. Nuclear wavefunctions can also be computed, in VSCF, or with explicit treatment of nuclear orbitals by the NEO code. More information, including code, can be found here [[GAMESS-US]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GARLI&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GARLI is a program that performs phylogenetic inference using the maximum-likelihood criterion.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Several sequence types are supported, including nucleotide, amino acid and codon. Version 2.0 adds support for&lt;br /&gt;
partitioned models and morphology-like data types. It is usable on all operating systems, and is written and&lt;br /&gt;
maintained by Derrick Zwickl at the University of Texas at Austin.  Additional information can be found&lt;br /&gt;
on the GARLI Wiki here [https://www.nescent.org/wg_garli/Main_Page]. More information about our installation &lt;br /&gt;
can be found here [[GARLI]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GAUSS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
An easy-to-use data analysis, mathematical and statistical environment based on the powerful, fast and efficient GAUSS Matrix Programming Language.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GAUSS is used to solve real world problems and data analysis problems of exceptionally large &lt;br /&gt;
scale. GAUSS is currently available on ANDY. At the CUNY HPC Center&lt;br /&gt;
GAUSS is typically run in serial mode. (Note:  GAUSS should not be confused with the&lt;br /&gt;
computational chemistry application Gaussian.) More information about our installation can &lt;br /&gt;
be found here [[GAUSS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Gaussian09&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Gaussian09 is third-party, commercially licensed software from Gaussian, Inc. It is a set of programs for calculating electronic structure.  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Gaussian09 is available for general use only on ANDY. The Gaussian User Guide can be found at [http://www.gaussian.com]. More information about our installation can be found here [[GAUSSIAN09]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GMP&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
GMP is a library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and &lt;br /&gt;
floating-point numbers. There is no practical limit to the precision except the ones implied by the &lt;br /&gt;
available memory in the machine GMP runs on. GMP has a rich set of functions, and the functions have a &lt;br /&gt;
regular interface. The library is installed on PENZIAS.&lt;br /&gt;
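As an analogy only (this does not use the GMP library itself), the kind of arbitrary-precision arithmetic GMP supplies to C programs can be illustrated with Python's built-in arbitrary-precision integers and the standard-library fractions module:

```python
# Illustrative analogy: Python's built-in int type is arbitrary precision,
# mirroring what GMP's mpz type provides to C programs.
import math
from fractions import Fraction

# 100! has 158 decimal digits -- far beyond any fixed-width machine integer,
# yet it is computed exactly with arbitrary-precision arithmetic.
f = math.factorial(100)
digits = len(str(f))

# Exact rational arithmetic (GMP's mpq type plays this role in C):
third = Fraction(1, 3)
exact_one = third + third + third  # exactly 1, with no floating-point error
```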
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Gnuplot&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Gnuplot is a portable command-line driven graphing utility. It is installed on the following systems:&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
:*Karle under /usr/bin/gnuplot&lt;br /&gt;
:*Andy under /share/apps/gnuplot/default/bin/gnuplot&lt;br /&gt;
&lt;br /&gt;
Extensive documentation of gnuplot is available at the [http://www.gnuplot.info/ gnuplot&#039;s homepage].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GenomePop2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GenomePop2 is a newer and specialized version of the older program GenomePop. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GenomePop2 is designed to manage SNPs under more flexible and useful settings that are controlled by the user.  &lt;br /&gt;
If you need models with more than 2 alleles you should use the older GenomePop version of the program.  &lt;br /&gt;
&lt;br /&gt;
GenomePop2 allows the forward simulation of sequences of biallelic positions. As in the previous version, a number of evolutionary&lt;br /&gt;
and demographic settings are allowed. Several populations under any migration model can be implemented. Each population consists&lt;br /&gt;
of a number N of individuals.  Each individual is represented by one (haploids) or two (diploids) chromosomes with constant or variable&lt;br /&gt;
(hotspots) recombination between binary sites. The fitness model is multiplicative, with each derived allele having a multiplicative effect&lt;br /&gt;
of (1 - s*h - E) on the global fitness value. By default E=0, and h=0.5 in diploids but 1 in homozygotes or in haploids. Selected nucleotide&lt;br /&gt;
sites undergoing directional selection (positive or negative) in different populations can be defined. In addition, bottleneck and/or&lt;br /&gt;
population expansion scenarios can be set up by the user for a desired number of generations. Several runs can be executed, and&lt;br /&gt;
a sample of user-defined size is obtained for each run and population.  For more detail on how to use GenomePop2, please visit the&lt;br /&gt;
web site here [http://webs.uvigo.es/acraaj/GenomePop2.htm]. More information about our installation can be found here [[GENOMEPOP2]].&lt;br /&gt;
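As a hypothetical illustration (the function name and inputs are ours, not GenomePop2's, and we take E=0 for simplicity), the multiplicative fitness model described above can be sketched as:

```python
# Hypothetical sketch of a multiplicative fitness model: each selected site
# contributes a factor (1 - s*h), with h = 0.5 for a heterozygous derived
# allele in diploids and h = 1 for homozygotes or haploids (E taken as 0).

def fitness(sites):
    """sites: list of (s, h) pairs for the derived alleles an individual carries."""
    w = 1.0
    for s, h in sites:
        w *= 1.0 - s * h  # multiplicative effect of each derived allele
    return w

# One heterozygous (h=0.5) and one homozygous (h=1.0) site, both with s=0.1:
w = fitness([(0.1, 0.5), (0.1, 1.0)])  # (1 - 0.05) * (1 - 0.1)
```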
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GROMACS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GROMACS (Groningen Machine for Chemical Simulations)&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GROMACS is a full-featured suite of free software, licensed under the GNU&lt;br /&gt;
General Public License to perform molecular dynamics simulations -- in other words, to simulate the behavior of molecular&lt;br /&gt;
systems with hundreds to millions of particles using Newton&#039;s equations of motion.  It is primarily used for research on&lt;br /&gt;
proteins, lipids, and polymers, but can be applied to a wide variety of chemical and biological research questions.&lt;br /&gt;
&lt;br /&gt;
Details and submission scripts for production runs can be found at:&lt;br /&gt;
http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/gromacs&lt;br /&gt;
Please note that preparing a molecular system for simulation via the GROMACS tools cannot be done on the login node. Instead, users must either use their own workstations or the interactive or development queues.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GPAW&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It uses real-space uniform grids and multigrid methods, atom-centered basis-functions or&lt;br /&gt;
plane-waves. GPAW calculations are controlled through scripts written in the programming language &lt;br /&gt;
Python. GPAW relies on the Atomic Simulation Environment (ASE), which is a Python package&lt;br /&gt;
that helps to describe atoms. The ASE package also handles molecular dynamics, analysis, &lt;br /&gt;
visualization, geometry optimization and more. More information about our installation can &lt;br /&gt;
be found here [[GPAW]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==H==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Hapsembler&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Hapsembler is a haplotype-specific genome assembly toolkit that is designed for genomes that are rich in SNPs and other types of polymorphism. Hapsembler can be used to assemble reads from a variety of platforms including Illumina and Roche/454.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;  Hapsembler is currently installed on the APPEL system. To access the Hapsembler binaries, load its module with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load hapsembler&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;HOOMD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
HOOMD performs general-purpose particle dynamics simulations, taking advantage of NVIDIA GPUs to attain a level of performance&lt;br /&gt;
equivalent to many processor cores on a fast cluster.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Unlike some other applications in the particle and molecular dynamics space, HOOMD developers have worked to implement &lt;br /&gt;
all of the code&#039;s computationally intensive kernels on the GPU, although currently only single node, single-GPU or &lt;br /&gt;
OpenMP-GPU runs are possible. There is no MPI-GPU or distributed parallel GPU version available at this time.&lt;br /&gt;
&lt;br /&gt;
HOOMD&#039;s object-oriented design patterns make it both versatile and expandable. Various types of potentials, integration methods&lt;br /&gt;
and file formats are currently supported, and more are added with each release. The code is available and open source, so anyone&lt;br /&gt;
can write a plugin or change the source to add additional functionality.  Simulations are configured and run using simple python&lt;br /&gt;
scripts, allowing complete control over the force field choice, integrator, all parameters, how many time steps are run, etc.&lt;br /&gt;
The scripting system is designed to be as simple as possible to the non-programmer.&lt;br /&gt;
&lt;br /&gt;
The HOOMD development effort is led by the Glotzer group at the University of Michigan, but many groups from different universities&lt;br /&gt;
have contributed code that is now part of the HOOMD main package, see the credits page for the full list. The HOOMD website and&lt;br /&gt;
documentation are available here [http://codeblue.umich.edu/hoomd-blue/about.html]. More information about our installation can be&lt;br /&gt;
found here [[HOOMD]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;HOPSPACK&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
HOPSPACK stands for Hybrid Optimization Parallel Search Package, designed to help users solve a wide range of derivative-free optimization problems.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;The latter can be noisy, non-convex, or non-smooth.  The basic optimization problem addressed is to minimize an objective function f(x) of n unknowns subject to the constraints: AIx ≥ bI, AEx = bE, cI(x) ≥ 0, cE(x) = 0, and l ≤ x ≤ u.&lt;br /&gt;
The first two constraints specify linear inequalities and equalities with coefficient matrices AI and AE. The next two constraints describe nonlinear inequalities and equalities captured in the functions cI(x) and cE(x). The final constraints denote lower and upper bounds on the variables. HOPSPACK allows variables to be continuous or integer-valued and has provisions for multi-objective optimization problems. In general, the functions f(x), cI(x), and cE(x) can be noisy and nonsmooth, although most algorithms perform best on deterministic functions with continuous derivatives.&lt;br /&gt;
&lt;br /&gt;
Users may design and implement their own solvers, either by writing new code or by building on the existing solvers already in the framework. Because all solvers (called citizens) are members of the same global class, they can share assigned resources.   &lt;br /&gt;
The main features of the package are:&lt;br /&gt;
&lt;br /&gt;
*Only function values are required for the optimization.&lt;br /&gt;
*The user must provide a separate program that can evaluate the objective and nonlinear constraint functions at a given point. &lt;br /&gt;
*A robust implementation of the Generating Set Search (GSS) solver is supplied, including the capability to handle linear constraints. &lt;br /&gt;
*Multiple solvers can run simultaneously and are easily configured to share information.&lt;br /&gt;
*Solvers may share a cache of computed function and constraint evaluations to eliminate duplicate work.&lt;br /&gt;
*Solvers can initiate and control sub-problems.&lt;br /&gt;
Continuation -&amp;gt; [[HOPSACK]].&lt;br /&gt;
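To make the generating-set-search idea concrete, here is a minimal pattern-search sketch in Python. It is illustrative only, not HOPSPACK's implementation: it polls the coordinate directions and contracts the step when no poll point improves, respecting simple bound constraints.

```python
# Toy generating-set-search (pattern search): poll along +/- e_i for each
# coordinate, accept any improving point, and halve the step when none improves.
# Only function values are used -- no derivatives -- as in the GSS solver.

def gss_minimize(f, x0, lower, upper, step=1.0, tol=1e-6):
    x, fx = list(x0), f(x0)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                y = list(x)
                y[i] = min(max(y[i] + d, lower[i]), upper[i])  # respect bounds
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5  # contract the pattern and poll again
    return x, fx

# Minimize a smooth bowl with minimum at (1, -2) inside the box [-5, 5]^2:
best, val = gss_minimize(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2,
                         [0.0, 0.0], [-5.0, -5.0], [5.0, 5.0])
```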
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;HONDO PLUS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Hondo Plus is a versatile electronic structure code that combines work from&lt;br /&gt;
the original Hondo application developed by Harry King in the lab of Michel Dupuis&lt;br /&gt;
and John Rys, and that of numerous subsequent contributors. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It is currently distributed from the research lab of Dr. Donald Truhlar at the University &lt;br /&gt;
of Minnesota.  Part of the advantage of Hondo Plus is the availability of source&lt;br /&gt;
implementations of a wide variety of model chemistries developed over its life time&lt;br /&gt;
that researchers can adapt to their particular needs.  The license to use the code requires&lt;br /&gt;
a literature citation which is documented in the Hondo Plus 5.1 manual found&lt;br /&gt;
at:&lt;br /&gt;
&lt;br /&gt;
http://comp.chem.umn.edu/hondoplus/HONDOPLUS_Manual_v5.1.2007.2.17.pdf &lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[HONDO PLUS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;HUMAnN2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
HUMAnN is a pipeline for efficiently and accurately profiling the presence/absence and abundance of microbial pathways in a community from metagenomic or metatranscriptomic sequencing data (typically millions of short DNA/RNA reads). HUMAnN2 is the next generation of HUMAnN (HMP Unified Metabolic Analysis Network). Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/humann2&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==I==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;IMa2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The IMa2 application performs basic &#039;Isolation with Migration&#039; calculations using Bayesian inference and Markov &lt;br /&gt;
chain Monte Carlo methods. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The only major conceptual addition to IMa2 that makes it different from the&lt;br /&gt;
original IMa  program is that it can handle data from multiple populations. This requires that the user &lt;br /&gt;
specify a phylogenetic tree. Importantly, the tree must be rooted, and the sequence in time of internal&lt;br /&gt;
nodes must be known and specified. More information on the IMa2 and IMa can be found in the user&lt;br /&gt;
manual here [http://lifesci.rutgers.edu/%7Eheylab/ProgramsandData/Programs/IMa2/Using_IMa2_8_24_2011.pdf].&lt;br /&gt;
Information about our installation can be found here [[IMA2]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;I-TASSER&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
I-TASSER is a platform for protein structure and function predictions. 3D models are built based on multiple-threading alignments by LOMETS and iterative template fragment assembly simulations; function insights are derived by matching the 3D models with the BioLiP protein function database. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/itasser&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==J==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;JULIA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Julia is a high-level, high-performance dynamic programming language for technical computing, with syntax that is familiar to users of other technical computing environments. Julia is installed on Penzias.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==L==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;LAMARC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
LAMARC is a program which estimates population-genetic parameters such as population size, population growth rate,&lt;br /&gt;
recombination rate, and migration rates.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It approximates a summation over all possible genealogies that could explain&lt;br /&gt;
the observed sample, which may be sequence, SNP, microsatellite, or electrophoretic data.  LAMARC and its sister program&lt;br /&gt;
MIGRATE are successor programs to the older programs Coalesce, Fluctuate, and Recombine, which are no longer being&lt;br /&gt;
supported.  These programs are memory-intensive, but can run effectively on workstations. They are supported on a variety&lt;br /&gt;
of operating systems.  For more detail on LAMARC please visit the website here [http://evolution.genetics.washington.edu/lamarc/index.html],&lt;br /&gt;
read this paper [http://evolution.genetics.washington.edu/lamarc/download/bioinformatics2006-lamarc2.0.pdf], and look&lt;br /&gt;
at the documentation here [http://evolution.genetics.washington.edu/lamarc/documentation/index.html]. More information&lt;br /&gt;
about our installation can be found here [[LAMARC]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;LAMMPS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
LAMMPS is a classical molecular dynamics code that models an ensemble of particles in a liquid, solid, or gaseous state.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It can model atomic, polymeric, biological, metallic, granular, and coarse-grained systems using a variety of force fields and boundary conditions.  &lt;br /&gt;
LAMMPS runs efficiently on single-processor desktop or laptop machines, but is also designed for parallel computers, including clusters with and without GPUs. &lt;br /&gt;
It will run on any parallel machine that compiles C++ and supports the MPI message-passing library. This includes distributed- or shared-memory parallel &lt;br /&gt;
machines and Beowulf-style clusters. LAMMPS can model systems with only a few particles up to millions or billions. LAMMPS is a freely-available open-source &lt;br /&gt;
code, distributed under the terms of the GNU General Public License, which means you can use or modify the code however you wish.  LAMMPS is designed to be easy to &lt;br /&gt;
modify or extend with new capabilities, such as new force fields, atom types, boundary conditions, or diagnostics. A complete description of LAMMPS can be found &lt;br /&gt;
in its on-line manual here [http://lammps.sandia.gov/doc/Manual.html] or from the full PDF manual here [http://lammps.sandia.gov/doc/Manual.pdf]. Information&lt;br /&gt;
about our installation can be found here [[LAMMPS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;LS-DYNA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From its early development in the 1970s, LS-DYNA has evolved into a general purpose material&lt;br /&gt;
stress, collision, and crash analysis program with many built-in material and structural element&lt;br /&gt;
models. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In recent years, the code has also been adapted for both OpenMP and MPI parallel execution&lt;br /&gt;
on a variety of platforms.  The most recent version, LS-DYNA 7.1.2, is installed on &lt;br /&gt;
ANDY at the CUNY HPC Center under an academic license held by the City College of New York.&lt;br /&gt;
The use of this license to do work that is commercial in any way is prohibited.&lt;br /&gt;
&lt;br /&gt;
Details on LS-DYNA&#039;s use, input deck construction, and execution options can be found in the LS-DYNA&lt;br /&gt;
manual here [http://ftp.lstc.com/user/manuals/ls-dyna_971_manual_k_rev1.pdf]. All files related&lt;br /&gt;
to the HPC Center installation of version 971 (executables and example inputs) are located in:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
/share/apps/lsdyna/default/[bin,examples]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[LSDYNA]].&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==M==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MAGMA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
MAGMA is a library similar to LAPACK but for hybrid architectures. MAGMA provides implementations for CUDA, Intel Xeon Phi, and OpenCL. &lt;br /&gt;
On CUNY HPCC systems, MAGMA is installed only in its CUDA variant, on Penzias.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MATHEMATICA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
“Mathematica” is a fully integrated technical computing system that combines fast, high-precision numerical and symbolic computation with data visualization and programming capabilities. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Mathematica is currently installed on the CUNY HPC Center&#039;s ANDY cluster (andy.csi.cuny.edu) and the KARLE standalone server (karle.csi.cuny.edu). The basics of running Mathematica on CUNY HPC systems are presented here.  Additional information on how to use Mathematica can be found at http://www.wolfram.com/learningcenter/&lt;br /&gt;
&lt;br /&gt;
More information is available in this wiki; find it here [[MATHEMATICA]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MATLAB&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The MATLAB high-performance language for technical computing&lt;br /&gt;
integrates computation, visualization, and programming in an&lt;br /&gt;
easy-to-use environment where problems and solutions are expressed in&lt;br /&gt;
familiar mathematical notation.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Typical uses include:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Math and computation&lt;br /&gt;
&lt;br /&gt;
Algorithm development&lt;br /&gt;
&lt;br /&gt;
Data acquisition&lt;br /&gt;
&lt;br /&gt;
Modeling, simulation, and prototyping&lt;br /&gt;
&lt;br /&gt;
Data analysis, exploration, and visualization&lt;br /&gt;
&lt;br /&gt;
Scientific and engineering graphics&lt;br /&gt;
&lt;br /&gt;
Application development, including graphical user interface building&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[MATLAB]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MET (Model Evaluation Tools)&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MET was developed by the National Center for Atmospheric Research (NCAR) Developmental Testbed Center (DTC) through the generous support of the U.S. Air Force Weather Agency (AFWA) and the National Oceanic and Atmospheric Administration (NOAA).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;MET is designed to be a highly configurable, state-of-the-art suite of verification tools. It was developed using output from the Weather Research and Forecasting (WRF) modeling system but may be applied to the output of other modeling systems as well.&lt;br /&gt;
&lt;br /&gt;
MET provides a variety of verification techniques, including:&lt;br /&gt;
&lt;br /&gt;
* Standard verification scores comparing gridded model data to point-based observations&lt;br /&gt;
* Standard verification scores comparing gridded model data to gridded observations&lt;br /&gt;
* Spatial verification methods comparing gridded model data to gridded observations using neighborhood, object-based, and intensity-scale decomposition approaches&lt;br /&gt;
* Probabilistic verification methods comparing gridded model data to point-based or gridded observations&lt;br /&gt;
&lt;br /&gt;
More information about use and set-up can be found here [[MET]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Migrate&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Migrate estimates population parameters, effective population sizes and migration rates&lt;br /&gt;
of n populations, using genetic data.  It uses a coalescent theory approach taking into&lt;br /&gt;
account the history of mutations and the uncertainty of the genealogy.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;The estimates of the parameter values are obtained by either a maximum likelihood (ML) approach or Bayesian&lt;br /&gt;
inference (BI).  Migrate&#039;s output is presented in a text file and in a PDF file. The PDF file&lt;br /&gt;
will eventually contain all possible analyses, including histograms of posterior distributions.&lt;br /&gt;
More information about our installation can be found here [[MIGRATE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MPFR&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The MPFR library is a C library for multiple-precision floating-point computations with correct rounding. MPFR has been continuously supported by &lt;br /&gt;
INRIA, and the current main authors come from the Caramel and AriC project-teams at Loria (Nancy, France) and LIP (Lyon, France), respectively; see &lt;br /&gt;
more on the credit page.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
MPFR is based on the GMP multiple-precision library. The main goal of MPFR is to provide a library for multiple-precision &lt;br /&gt;
floating-point computation which is both efficient and has well-defined semantics. It adopts the good ideas of the ANSI/IEEE-754 standard for &lt;br /&gt;
double-precision floating-point arithmetic (53-bit significand). The library is installed on PENZIAS.&lt;br /&gt;
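As an illustration of what correct rounding at a user-chosen precision means, here is a short sketch using Python's standard decimal module. This is an analogy only, not MPFR and not part of the PENZIAS installation:

```python
from decimal import Decimal, getcontext

# Analogy only: Python's decimal module, like MPFR, computes with a
# user-chosen precision and a well-defined rounding rule (the default
# context rounds half-even, as IEEE-754 does).
getcontext().prec = 50          # 50 significant decimal digits

third = Decimal(1) / Decimal(3)
print(third)                    # 0.333... carried to 50 significant digits

# A hardware double keeps only a 53-bit significand (~16 decimal digits):
print(1 / 3)
```

In MPFR the same idea applies, except the precision is specified in bits per variable rather than in decimal digits per context.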
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MRBAYES&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MrBayes is a program for the Bayesian estimation of phylogeny.  Bayesian inference of&lt;br /&gt;
phylogeny is based upon a quantity called the posterior probability distribution of trees,&lt;br /&gt;
which is the probability of a tree conditioned on certain observations.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The conditioning is&lt;br /&gt;
accomplished using Bayes&#039;s theorem. The posterior probability distribution of trees is&lt;br /&gt;
impossible to calculate analytically; instead, MrBayes uses a simulation technique called&lt;br /&gt;
Markov chain Monte Carlo (or MCMC) to approximate the posterior probabilities of trees.&lt;br /&gt;
More information about our installation can be found here [[MRBAYES]]&lt;br /&gt;
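The MCMC idea MrBayes relies on can be sketched with a minimal, generic Metropolis sampler. This is illustrative Python only, not MrBayes code; the target density here is an invented stand-in for the posterior over trees:

```python
import math
import random

def metropolis(log_post, start, steps=30000, scale=1.0, seed=42):
    """Generic Metropolis sampler: approximates a distribution known only
    up to a normalizing constant, via its log density."""
    rng = random.Random(seed)
    x = start
    samples = []
    for _ in range(steps):
        proposal = x + rng.gauss(0.0, scale)   # symmetric random-walk proposal
        # Accept with probability min(1, post(proposal) / post(x))
        if math.log(rng.random() + 1e-300) < log_post(proposal) - log_post(x):
            x = proposal
        samples.append(x)
    return samples

# Invented stand-in target: a standard normal "posterior" (log density
# up to a constant). Sample averages approximate posterior expectations.
draws = metropolis(lambda t: -0.5 * t * t, start=0.0)
print(sum(draws) / len(draws))
```

MrBayes applies the same accept/reject logic, but its state is a phylogenetic tree plus model parameters rather than a single real number.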
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;msABC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
msABC is a program for simulating various neutral evolutionary demographic scenarios&lt;br /&gt;
based on the software ms (Hudson 2002). msABC extends ms, calculating a multitude of&lt;br /&gt;
summary statistics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Therefore, msABC is suitable for performing the sampling step of an&lt;br /&gt;
Approximate Bayesian Computation analysis (ABC), under various neutral demographic&lt;br /&gt;
models. The main advantages of msABC are (i) use of various prior distributions, such as&lt;br /&gt;
uniform, Gaussian, log-normal, and gamma, (ii) implementation of a multitude of summary statistics&lt;br /&gt;
for one or more populations, (iii) an efficient implementation, which allows the analysis of&lt;br /&gt;
hundreds of loci and chromosomes even on a single computer, and (iv) extended flexibility, such&lt;br /&gt;
as simulation of loci of variable size and simulation of missing data.&lt;br /&gt;
More information about our installation can be found here [[msABC]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MSMS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MSMS is a tool to generate sequence samples under both neutral models and single locus selection models.&lt;br /&gt;
MSMS permits the full range of demographic models provided by its relative MS (Hudson, 2002).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
In particular, it allows for multiple demes with arbitrary migration patterns, population growth and decay in each deme, and&lt;br /&gt;
for population splits and mergers. Selection (including dominance) can depend on the deme and also change&lt;br /&gt;
with time.&lt;br /&gt;
More information about our installation can be found here [[MSMS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==N==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;NAMD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NAMD is a parallel molecular dynamics code designed for high-performance simulation&lt;br /&gt;
of large biomolecular systems. [http://www.ks.uiuc.edu/Research/namd].&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The main server for molecular dynamics calculations is PENZIAS, which supports both GPU and non-GPU versions of NAMD.&lt;br /&gt;
However, the MPI-only (no GPU support) parallel versions of NAMD are also installed on SALK and ANDY. &lt;br /&gt;
More information about our installation can be found here [[NAMD]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Network Simulator-2 (NS2)&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NS2 is a discrete event simulator targeted at networking research. NS2 provides&lt;br /&gt;
substantial support for simulation of TCP, routing, and multicast protocols over&lt;br /&gt;
wired and wireless (local and satellite) networks.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It is installed on BOB at the CUNY HPC Center. For more detailed information, look [http://www.isi.edu/nsnam/ns/ here].&lt;br /&gt;
More information about our installation can be found here [[NS2]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;NWChem&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NWChem is an ab initio computational chemistry software package which also includes molecular mechanics and molecular dynamics (MM, MD) and coupled quantum mechanical/molecular dynamics functionality (QM-MD).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
NWChem has been developed by the Molecular Sciences Software group at the Department of Energy&#039;s EMSL. The software is available on PENZIAS and ANDY.&lt;br /&gt;
More information about our installation can be found here [[NWChem]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==O== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Octopus&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Octopus is a pseudopotential real-space package aimed at the simulation of the electron-ion dynamics of one-, two-, and three-dimensional finite systems subject to time-dependent electromagnetic fields.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The program is based on time-dependent density-functional theory (TDDFT) in the Kohn-Sham scheme. All quantities are expanded in a regular mesh in real space, and the simulations are performed in real time. The program has been successfully used to calculate linear and non-linear absorption spectra, harmonic spectra, laser induced fragmentation, etc. of a variety of systems.&lt;br /&gt;
More information about our installation can be found here [[OCTOPUS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;OpenMM&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenMM is both a library and a stand-alone application which provides tools for modern molecular modeling simulation. As a library it can be hooked into any code, allowing that code to do molecular modeling with minimal extra coding.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Moreover, OpenMM has a strong emphasis on hardware acceleration via GPUs, thus providing not just a consistent API but also much greater performance than most other available codes. OpenMM was developed as part of the Physics-Based Simulation project led by Prof. Pande.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;OpenFOAM&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenFOAM is, first and foremost, a library which users may incorporate into their own code(s). OpenFOAM is installed on PENZIAS.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
More information about our installation can be found here [[OpenFOAM]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;OpenSees&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenSees, the Open System for Earthquake Engineering Simulation, is an object-oriented, open source software framework.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It allows users to create both serial and parallel finite element computer applications for simulating the response of structural and geotechnical systems subjected to earthquakes and other hazards. OpenSees is primarily written in C++ and uses several Fortran and C numerical libraries for linear equation solving, and material and element routines. The software is installed on PENZIAS.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;ORCA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ORCA is an electronic structure program capable of carrying out geometry optimizations and predicting a large number of spectroscopic parameters at different levels of theory.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Besides Hartree-Fock theory, density functional theory (DFT), and semiempirical methods, high-level ab initio quantum chemical methods, based on configuration interaction and coupled cluster methods, are included in ORCA to an increasing degree.&lt;br /&gt;
More information about our installation can be found here [[ORCA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==P== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;ParGAP&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ParGAP is built on top of the GAP system. The latter is a system for computational discrete algebra, with particular emphasis on computational group theory. GAP provides a programming language, a library of thousands of functions implementing algebraic algorithms written in the GAP language, as well as large data libraries of algebraic objects.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The ParGAP (Parallel GAP) package itself provides a way of writing parallel programs using the GAP language. Former names of the package were ParGAP/MPI and GAP/MPI; the word MPI refers to Message Passing Interface, a well-known standard for parallelism. ParGAP is based on the MPI standard, and this distribution includes a subset implementation of MPI, to provide a portable layer with a high level interface to BSD sockets.&lt;br /&gt;
More information about our installation can be found here [[ParGAP]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;POPABC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PopABC is a computer package to estimate historical demographic parameters of closely related species/populations (e.g. population size, migration rate, mutation rate, recombination rate, splitting events) within an isolation-with-migration model.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The software performs coalescent simulation in the framework of approximate Bayesian computation (ABC, Beaumont et al, 2002). PopABC can also be used to perform Bayesian model choice to discriminate between different demographic scenarios. The program can be used either for research or for education and teaching purposes. Further details and a manual can be found at the POPABC website here [http://code.google.com/p/popabc]&lt;br /&gt;
More information about our installation can be found here [[POPABC]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;PHOENICS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PHOENICS is an integrated Computational Fluid Dynamics (CFD) package for the preparation, simulation, and visualization of&lt;br /&gt;
processes involving fluid flow, heat or mass transfer, chemical reaction, and/or combustion in engineering equipment, building&lt;br /&gt;
design, and the environment.  More detail is available at the CHAM website, here http://www.cham.co.uk. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Although we expect most users to pre- and post-process their jobs on office-local clients, the CUNY HPC Center has installed&lt;br /&gt;
the Unix version of the &#039;&#039;entire&#039;&#039; PHOENICS package on ANDY.   PHOENICS is installed in /share/apps/phoenics/default where all&lt;br /&gt;
the standard PHOENICS directories are located (d_allpro, d_earth, d_enviro, d_photo, d_priv1, d_satell, etc.).  Of particular interest&lt;br /&gt;
on ANDY is the MPI parallel version of the &#039;earth&#039; executable &#039;parexe&#039; which makes full use of the parallel processing power of the &lt;br /&gt;
ANDY cluster for larger individual jobs.  While the parallel scaling properties of PHOENICS jobs will vary depending on the job size,&lt;br /&gt;
processor type, and the cluster interconnect, larger workloads will generally scale and run efficiently on 8 to 32 processors,&lt;br /&gt;
while smaller problems will scale efficiently only up to about 4 processors.  More detail on parallel PHOENICS is available at&lt;br /&gt;
http://www.cham.co.uk/products/parallel.php.   Aside from the tightly coupled MPI parallelism of &#039;parexe&#039;, users can run multiple&lt;br /&gt;
instances of the non-parallel modules on ANDY (including the serial &#039;earexe&#039; module) when a parametric approach can be used&lt;br /&gt;
to solve their problems.&lt;br /&gt;
More information about our installation can be found here [[PHOENICS]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;PHRAP-PHRED&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PHRAP and PHRED are part of the DNA sequence analysis tool set that also includes the programs&lt;br /&gt;
CROSSMATCH and SWAT.  These tools are described in detail here [http://www.phrap.org/phredphrapconsed.html],&lt;br /&gt;
but a brief description of both, extracted from their manuals, follows.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
PHRED and PHRAP (along with CONSED) can be used for both small sequence assemblies and larger shotgun analyses. This makes the&lt;br /&gt;
tools a perhaps under-utilized set for smaller, non-genomic groups.  Some variables may need to be adjusted,&lt;br /&gt;
particularly in CONSED, but researchers who have multiple sequences from a small locus can use the &lt;br /&gt;
suite, starting from their chromatogram files.  &lt;br /&gt;
More information about our installation can be found here [[PHRAP-PHRED]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;PyRAD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Reduced-representation genomic sequence data (e.g., RADseq, GBS, ddRAD) are commonly used to study population-level research questions and consequently most software packages for assembling or analyzing such data are designed for sequences with little variation across samples.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Phylogenetic analyses typically include species with deeper divergence times (more variable loci across samples) and thus a different approach to clustering and identifying orthologs will perform better. pyRAD is intended for use with any type of restriction-site associated DNA. It currently supports RAD, ddRAD, PE-ddRAD, GBS, PE-GBS, EzRAD, PE-EzRAD, 2B-RAD, nextRAD, and can be extended to other types.&lt;br /&gt;
More information about our installation can be found here [[PyRAD]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Python&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Python is a programming language that lets you work more quickly and integrate your systems more effectively. You can learn to use Python and see almost immediate gains in productivity and lower maintenance costs. [http://www.python.org/]&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
There are two supported versions installed on the ANDY system: &lt;br /&gt;
&lt;br /&gt;
* Python 3.1.3, located under /share/apps/python/3.1.3/bin&lt;br /&gt;
* Python 2.7.3, located under /share/apps/epd/7.3-2/bin&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[PYTHON]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Installing Python packages&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Users may install Python packages/modules in their own space.  Many packages available in Python repositories can be installed easily with the pip package manager, which is available in all Anaconda and Miniconda builds.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Users must remember that running pip without first loading a Python module will install packages against the system Python on the login node only. However, the Python interpreter available on all nodes (after loading the module) is installed under /share/usr/compilers/python. Thus, when installing packages in user space, it is very important to follow the procedure outlined below. The following example demonstrates how users can install the package &amp;quot;guppy&amp;quot; in their own space:&lt;br /&gt;
&lt;br /&gt;
For Python 2.7.13 in Anaconda build:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load python/2.7.13_anaconda&lt;br /&gt;
pip install guppy --user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For Python 3.6.0 in Anaconda build:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load python/3.6.0_anaconda&lt;br /&gt;
pip install guppy --user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For Python 2.7.13 in Miniconda:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load python/miniconda2&lt;br /&gt;
pip install guppy --user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For Python 3.6.0 in Miniconda 3:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load python/miniconda3&lt;br /&gt;
pip install guppy --user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check whether the package is properly installed, type:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
pip list | grep guppy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
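The reason a matching Python module must be loaded first is that each interpreter version has its own per-user site directory, which is where pip install --user places packages. A quick way to see that location (a generic Python sketch, not specific to the HPCC installation):

```python
import site

# `pip install --user` installs into the per-user site directory of the
# interpreter that runs pip; different Python versions use different paths.
user_site = site.getusersitepackages()
print(user_site)   # e.g. ~/.local/lib/pythonX.Y/site-packages on Linux
```

Running this under each loaded module shows a different path per Python version, which is why packages installed under one module are invisible to another.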
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==Q== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;QIIME&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
QIIME (pronounced &amp;quot;chime&amp;quot;) stands for Quantitative Insights Into Microbial Ecology. QIIME is a pipeline application that uses numerous third-party applications.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
QIIME takes users from their raw sequencing output through initial analyses such as OTU picking, taxonomic assignment, and construction of phylogenetic trees from representative sequences of OTUs, and through downstream statistical analysis, visualization, and production of publication-quality graphics.&lt;br /&gt;
More information about our installation can be found here [[QIIME]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==R== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;R&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
R is a free software environment for statistical computing and graphics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:15px;&amp;quot;&amp;gt;General Notes&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The R language has become a de facto standard among statisticians for the development of statistical software, and is widely used for data analysis. R is available on the following HPCC servers: Karle, Penzias, Appel, and Andy. Karle is the only machine where R can be used without submitting jobs to the SLURM manager; on all other systems users must submit their R jobs via the SLURM batch scheduler.&lt;br /&gt;
More information about our installation can be found here [[R]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;RAXML&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Randomized Axelerated Maximum Likelihood (RAxML) is a program for sequential and parallel&lt;br /&gt;
maximum-likelihood-based inference of large phylogenetic trees.  It is a descendant of fastDNAml,&lt;br /&gt;
which in turn was derived from Joe Felsenstein’s DNAml, which is part of the PHYLIP package.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
RAxML is installed at the CUNY HPC Center on ANDY, where multiple versions are available in both serial and MPI-parallel builds.  The MPI-parallel version should be run on four or more cores and is also installed on Penzias. &lt;br /&gt;
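A sketch of an MPI submission script for the parallel version follows (the module name and input file are placeholders; the flags shown are standard RAxML options for the alignment, run name, model, and random seed):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=raxml&lt;br /&gt;
#SBATCH --ntasks=4&lt;br /&gt;
module load raxml&lt;br /&gt;
srun raxmlHPC-MPI -s alignment.phy -n test_run -m GTRGAMMA -p 12345&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;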
More information about our installation can be found here [[RAXML]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==S== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;SAGE&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Sage can be used to study elementary and advanced, pure and applied mathematics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
This spans a huge range of mathematics: basic algebra, calculus, elementary to very&lt;br /&gt;
advanced number theory, cryptography, numerical computation, commutative algebra, group&lt;br /&gt;
theory, combinatorics, graph theory, exact linear algebra, and much more.&lt;br /&gt;
More information about our installation can be found here [[SAGE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;SAMTOOLS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
SAMTOOLS provides various utilities for manipulating alignments in the SAM format, including sorting,&lt;br /&gt;
merging, indexing, and generating alignments in a per-position format.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
SAM (Sequence Alignment/Map) is a generic format for storing large nucleotide sequence alignments.  It is a compact format&lt;br /&gt;
that:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Is flexible enough to store all the alignment information generated by various alignment programs;&lt;br /&gt;
&lt;br /&gt;
Is simple enough to be easily generated by alignment programs or converted from existing formats;&lt;br /&gt;
&lt;br /&gt;
Allows most of operations on the alignment to work without loading the whole alignment into memory;&lt;br /&gt;
&lt;br /&gt;
Allows the file to be indexed by genomic position to efficiently retrieve all reads aligning to a locus.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
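A typical SAMTOOLS workflow converts a SAM file to BAM, sorts it, and indexes it (the file names below are placeholders):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
samtools view -bS aln.sam &amp;gt; aln.bam&lt;br /&gt;
samtools sort aln.bam -o aln.sorted.bam&lt;br /&gt;
samtools index aln.sorted.bam&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;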
More information about our installation can be found here [[SAMTOOLS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;SAS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
SAS (pronounced &amp;quot;sass&amp;quot;, originally Statistical Analysis System) is an integrated system of software products provided by SAS Institute Inc.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It enables the programmer to perform:&lt;br /&gt;
:*data entry, retrieval, management, and mining&lt;br /&gt;
:*report writing and graphics&lt;br /&gt;
:*statistical analysis&lt;br /&gt;
:*business planning, forecasting, and decision support&lt;br /&gt;
:*operations research and project management&lt;br /&gt;
:*quality improvement&lt;br /&gt;
:*applications development&lt;br /&gt;
:* data warehousing (extract, transform, load)&lt;br /&gt;
:* platform independent and remote computing&lt;br /&gt;
More information about our installation can be found here [[SAS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Stata/MP&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Stata is a complete, integrated statistical package that provides tools for data analysis, data management, and graphics. Stata/MP takes advantage of multiprocessor computers. The CUNY HPC Center is licensed to use Stata on up to 8 cores. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Currently Stata/MP is available for users on Karle (karle.csi.cuny.edu). &lt;br /&gt;
More information about our installation can be found here [[STATA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Structurama&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Structurama is a program for inferring population structure from genetic data. The program assumes that the sampled loci&lt;br /&gt;
are in linkage equilibrium and that the allele frequencies for each population are drawn from a Dirichlet probability distribution. Two different models for population structure are implemented.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
First, Structurama offers the method of Pritchard et al. (2000), in which the number of populations is considered fixed. The program also allows the number of populations to be a random variable following a Dirichlet process prior (Pella and Masuda, 2006; Huelsenbeck and Andolfatto, 2007).&lt;br /&gt;
More information about our installation can be found here [[STRUCTURAMA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Structure&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The program Structure is a free software package for using multi-locus genotype data to investigate&lt;br /&gt;
population structure.  Its uses include inferring the presence of distinct populations, assigning individuals&lt;br /&gt;
to populations, studying hybrid zones, identifying migrants and admixed individuals, and estimating&lt;br /&gt;
population allele frequencies in situations where many individuals are migrants or admixed.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;It can be applied to most of the commonly used genetic markers, including SNPs, microsatellites, RFLPs, and AFLPs. More detailed information about Structure can be found at the web site here [http://pritch.bsd.uchicago.edu/structure.html]. Structure is installed on ANDY at the CUNY HPC Center.  Structure is a serial program. &lt;br /&gt;
More information about our installation can be found here [[STRUCTURE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==T== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Thrust Library (CUDA)&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Thrust is a C++ template library for CUDA based on the Standard Template Library (STL). Thrust allows you&lt;br /&gt;
to implement high performance parallel applications with minimal programming effort through a high-level&lt;br /&gt;
interface that is fully interoperable with CUDA C.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Thrust is now integrated into the default&lt;br /&gt;
CUDA distribution. The HPC Center currently runs CUDA as the default on PENZIAS, which includes the &lt;br /&gt;
Thrust library. More information about our installation can be found here [[THRUST]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;TOPHAT&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
TOPHAT is a fast splice junction mapper for RNA-Seq reads. It aligns RNA-Seq reads to mammalian-sized&lt;br /&gt;
genomes using the ultra high-throughput short read aligner Bowtie, and then analyzes the mapping results&lt;br /&gt;
to identify splice junctions between exons.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
TOPHAT is part of a sequence alignment and analysis tool chain developed at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics and Computational Biology.&lt;br /&gt;
More information about our installation can be found here [[TOPHAT]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Trinity&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Trinity, developed at the Broad Institute and the Hebrew University of Jerusalem, represents a novel method for the efficient and robust de novo reconstruction of transcriptomes from RNA-seq data.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Trinity combines three independent software modules: Inchworm, Chrysalis, and Butterfly, applied sequentially to process large volumes of RNA-seq reads. Trinity partitions the sequence data into many individual de Bruijn graphs, each representing the transcriptional complexity at a given gene or locus, and then processes each graph independently to extract full-length splicing isoforms and to tease apart transcripts derived from paralogous genes.&lt;br /&gt;
More information about our installation can be found here [[TRINITY]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==U== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;USEARCH&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
USEARCH is a unique sequence analysis tool with thousands of users world-wide.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
USEARCH offers search and clustering algorithms that are often orders of magnitude faster than BLAST. &lt;br /&gt;
More information about our installation can be found here [[USEARCH]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==V== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;VELVET&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Velvet is a set of algorithms for &#039;&#039;de novo&#039;&#039; short read assembly using de Bruijn graphs. It was developed at the European Bioinformatics Institute, Cambridge, UK. More information about our installation can be found here [[VELVET]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;VSEARCH&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
VSEARCH is an open-source alternative to USEARCH.  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
VSEARCH stands for vectorized search, as the tool takes advantage of parallelism in the form of SIMD vectorization as well as multiple threads to perform accurate alignments at high speed. VSEARCH uses an optimal global aligner (full dynamic programming Needleman-Wunsch), in contrast to USEARCH which by default uses a heuristic seed and extend aligner. This usually results in more accurate alignments and overall improved sensitivity (recall) with VSEARCH, especially for alignments with gaps. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Additional details on VSEARCH can be found at: [https://github.com/torognes/vsearch this link]&lt;br /&gt;
&lt;br /&gt;
VSEARCH is installed on the Penzias HPC cluster. To start using VSEARCH, load the corresponding module first:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load vsearch  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
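Once the module is loaded, a typical clustering run looks like the following sketch (the input and output file names are placeholders; the options shown are standard VSEARCH clustering flags):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vsearch --cluster_fast reads.fasta --id 0.97 --centroids centroids.fasta&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;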
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;VMD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
VMD is a molecular visualization program for displaying, animating, and analyzing large biomolecular systems using 3-D graphics and built-in scripting.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It was developed by The Theoretical and Computational Biophysics Group at the University of Illinois. It is documented on the [http://www.ks.uiuc.edu/Research/vmd/ TCB&#039;s homepage].&lt;br /&gt;
&lt;br /&gt;
VMD is installed on Karle. To use it from the command line, login to Karle as usual and start VMD by typing &amp;quot;vmd&amp;quot; followed by return, or alternatively use the full path: &lt;br /&gt;
&amp;quot;/share/apps/vmd/default/bin/vmd&amp;quot;&lt;br /&gt;
&lt;br /&gt;
In order to use VMD in GUI mode, log in to Karle with the -X option (see [http://wiki.csi.cuny.edu/cunyhpc/index.php/Main_Page#X11_Forwarding_or_Tunneling this article] &lt;br /&gt;
for details) and start VMD as described above.&lt;br /&gt;
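For example, an X11-forwarded session might look like this (the username is a placeholder):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -X your_user@karle.csi.cuny.edu&lt;br /&gt;
vmd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;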
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==W== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;WRF&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The Weather Research and Forecasting (WRF) model is a specific computer program with dual use for both weather&lt;br /&gt;
forecasting and weather research.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It was created through a partnership that includes the National Oceanic and Atmospheric&lt;br /&gt;
Administration (NOAA), the National Center for Atmospheric Research (NCAR), and more than 150 other organizations&lt;br /&gt;
and universities in the United States and abroad. WRF is the latest numerical model and application to be adopted by NOAA&#039;s&lt;br /&gt;
National Weather Service as well as the U.S. military and private meteorological services. It is also being adopted by&lt;br /&gt;
government and private meteorological services worldwide. More information about our installation can be found here [[WRF]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==X== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Xmgrace&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Grace is a WYSIWYG 2D plotting tool for the X Window System and M*tif. Xmgrace is developed at the Plasma Laboratory, Weizmann Institute of Science. More information about its capabilities can be found at the web page http://plasma-gate.weizmann.ac.il/Grace/&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Grace is installed on Karle. To use it from the command line, login to Karle as usual and start Grace by typing &amp;quot;xmgrace&amp;quot; followed by return, or alternatively use the full path: &amp;quot;/share/apps/xmgrace/default/grace/bin/xmgrace&amp;quot;&lt;br /&gt;
In order to use Grace in GUI mode, log in to Karle with the -X option (see [http://wiki.csi.cuny.edu/cunyhpc/index.php/Main_Page#X11_Forwarding_or_Tunneling this article] &lt;br /&gt;
for details) and start Xmgrace as described above.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Applications&amp;diff=164</id>
		<title>Applications</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Applications&amp;diff=164"/>
		<updated>2022-11-07T18:41:53Z</updated>

		<summary type="html">&lt;p&gt;James: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div class=&amp;quot;noautonum&amp;quot;&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Application&lt;br /&gt;
!Installed Version&lt;br /&gt;
!Current Version&lt;br /&gt;
!Dependencies&lt;br /&gt;
|-&lt;br /&gt;
|ABINIT&lt;br /&gt;
|8.2.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ASE&lt;br /&gt;
|3.18.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|G-PhoCS&lt;br /&gt;
|1.3&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|GMP&lt;br /&gt;
|6.1.2-GCCcore-6.4.0/ 7.3.0/ 8.2.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|GPAW&lt;br /&gt;
|19.8.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|Gerris&lt;br /&gt;
|20131206&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|HDF5&lt;br /&gt;
|1.8.17/1.10.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|LAME&lt;br /&gt;
|3.100&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|XML-Parser&lt;br /&gt;
|2.44_01&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|abyss&lt;br /&gt;
|1.3.7 / 1.5.7&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|adcirc&lt;br /&gt;
|50_99_07&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|adda&lt;br /&gt;
|1.2.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|anvio&lt;br /&gt;
|2.0.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|armadillo&lt;br /&gt;
|9.2.7 / 9.200.7&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|arpack&lt;br /&gt;
|3.1.5&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|augustus&lt;br /&gt;
|3.2.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|autodock&lt;br /&gt;
|4.2.6&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|autodock_vina&lt;br /&gt;
|1.1.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bamm&lt;br /&gt;
|2.3.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bamova&lt;br /&gt;
|1.02&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bamtools&lt;br /&gt;
|2.30 / 2.5.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|basilisk&lt;br /&gt;
|v2019&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bayescan&lt;br /&gt;
|2.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|beast&lt;br /&gt;
|1.8.4 / 2.4.6&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|beast2&lt;br /&gt;
|2.6.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bedops&lt;br /&gt;
|2.4.40&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bedtools&lt;br /&gt;
|2.30.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bigwig&lt;br /&gt;
|011921&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|biobwa&lt;br /&gt;
|0.7.17&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bioperl&lt;br /&gt;
|1.6.923&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|blast&lt;br /&gt;
|2.3.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bowtie2&lt;br /&gt;
|2.2.6&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bpp&lt;br /&gt;
|4.4.0 / 4.4.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cblas&lt;br /&gt;
|1.20.11&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cmaq&lt;br /&gt;
|5.3.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cmdstan&lt;br /&gt;
|2.21.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cp2k&lt;br /&gt;
|2.5.1 / 3.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cryoSPARC&lt;br /&gt;
|2.11&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|diamond&lt;br /&gt;
|0.7.9&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|doxygen&lt;br /&gt;
|2014&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|dualSP&lt;br /&gt;
|4.2 / 4.3_beta&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|eautils&lt;br /&gt;
|02072017&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|eclipse_ptp&lt;br /&gt;
|8.1.2 / 9.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|eigen&lt;br /&gt;
|3.2.8&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|emacs&lt;br /&gt;
|25.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|exabayes&lt;br /&gt;
|1.5&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|examl&lt;br /&gt;
|3.0.17&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|fdppdiv&lt;br /&gt;
|20140728&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|fds_smv&lt;br /&gt;
|6.1.11&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ferret&lt;br /&gt;
|6.96&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|freetype&lt;br /&gt;
|2.5.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|fsplit&lt;br /&gt;
|092214&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ga&lt;br /&gt;
|5.3&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|gamess-us&lt;br /&gt;
|4.14.14&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|gamma&lt;br /&gt;
|20111212&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|gap&lt;br /&gt;
|4.6.5 / 4.7.5&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
__NOTOC__&amp;lt;/div&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;This is an index of available applications, sorted both by academic relevance and alphabetically.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
For information about using modules to run your applications go to [[Using Modules To Run Your Applications]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==Computational Physics and Computational Chemistry== &lt;br /&gt;
Applications in this section use classical mechanics, quantum mechanics and thermodynamics and are applied in simulation studies of fundamental properties of atoms, molecules, and chemical reactions.  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;AMBER (Assisted Model Building with Energy Refinement)&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Amber is the collective name for a suite of programs for classical bio-molecular simulations. &lt;br /&gt;
The name &amp;quot;Amber&amp;quot; also denotes the family of potentials (force fields) used with Amber &lt;br /&gt;
software. Here we discuss only the simulation packages, not the force fields or the free tools&lt;br /&gt;
available via the AmberTools package. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/amber&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;AUTODOCK&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
AutoDock is a suite of automated docking tools.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt; It is designed to predict how small molecules, such as substrates or drug candidates, bind to a receptor of known 3D structure.  AutoDock actually consists of two main programs: &#039;&#039;autodock&#039;&#039; itself performs the docking of the ligand to a set of grids describing the target protein; &#039;&#039;autogrid&#039;&#039; pre-calculates these grids. More information about the software may be found at the AutoDock web page [http://autodock.scripps.edu/]. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/autodock&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;CP2K&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CP2K is a program to perform atomistic and molecular simulations of solid state, liquid, molecular, and biological&lt;br /&gt;
systems.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It provides a general framework for different methods, such as density functional theory (DFT) using&lt;br /&gt;
a mixed Gaussian and plane waves approach (GPW) and classical pair and many-body potentials. CP2K provides&lt;br /&gt;
state-of-the-art methods for efficient and accurate atomistic simulations. More information about our installation &lt;br /&gt;
can be found here [[CP2K]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;DL_POLY&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
DL_POLY is a general purpose molecular dynamics simulation package developed at Daresbury Laboratory by W. Smith, T.R. Forester and I.T. Todorov. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Both serial and parallel versions are available. The original package was developed by the Molecular Simulation Group (now part of the Computational Chemistry Group, MSG) at Daresbury Laboratory under the auspices of the Engineering and Physical Sciences Research Council (EPSRC) for the EPSRC&#039;s Collaborative Computational Project for the Computer Simulation of Condensed Phases (CCP5). Later developments were also supported by the Natural Environment Research Council through the eMinerals project. The package is the property of the Central Laboratory of the Research Councils, UK. More information about our installation and use can be found here [[DL_POLY]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GAMESS-US&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GAMESS is a program for ab initio molecular quantum chemistry.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Briefly, GAMESS can compute SCF wavefunctions ranging from RHF, ROHF, UHF, and GVB to MCSCF. Correlation corrections to these SCF wavefunctions include Configuration Interaction, second-order perturbation theory, and Coupled-Cluster approaches, as well as the Density Functional Theory approximation. Excited states can be computed by CI, EOM, or TD-DFT procedures. Nuclear gradients are available for automatic geometry optimization, transition state searches, or reaction path following. Computation of the energy hessian permits prediction of vibrational frequencies, with IR or Raman intensities. Solvent effects may be modeled by the discrete Effective Fragment potentials, or continuum models such as the Polarizable Continuum Model. Numerous relativistic computations are available, including infinite order two component scalar corrections, with various spin-orbit coupling options. The Fragment Molecular Orbital method permits many of these sophisticated treatments to be used on very large systems, by dividing the computation into small fragments. Nuclear wavefunctions can also be computed, in VSCF, or with explicit treatment of nuclear orbitals by the NEO code. More information, including code, can be found here [[GAMESS-US]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Gaussian09&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Gaussian09 is third-party, commercially licensed software from Gaussian, Inc. It is a set of programs for calculating electronic structure.  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Gaussian09 is available for general use only on ANDY. The Gaussian User Guide can be found at [http://www.gaussian.com]. More information about our installation can be found here [[GAUSSIAN09]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GPAW&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It uses real-space uniform grids and multigrid methods, atom-centered basis functions, or&lt;br /&gt;
plane waves. GPAW calculations are controlled through scripts written in the programming language &lt;br /&gt;
Python. GPAW relies on the Atomic Simulation Environment (ASE), which is a Python package&lt;br /&gt;
that helps to describe atoms. The ASE package also handles molecular dynamics, analysis, &lt;br /&gt;
visualization, geometry optimization and more. More information about our installation can &lt;br /&gt;
be found here [[GPAW]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GROMACS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GROMACS (Groningen Machine for Chemical Simulations)&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GROMACS is a full-featured suite of free software, licensed under the GNU&lt;br /&gt;
General Public License to perform molecular dynamics simulations -- in other words, to simulate the behavior of molecular&lt;br /&gt;
systems with hundreds to millions of particles using Newton&#039;s equations of motion.  It is primarily used for research on&lt;br /&gt;
proteins, lipids, and polymers, but can be applied to a wide variety of chemical and biological research questions.&lt;br /&gt;
&lt;br /&gt;
Details and submission scripts for production runs can be found at:&lt;br /&gt;
http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/gromacs&lt;br /&gt;
Please note that preparing a molecular system for simulation with the GROMACS tools cannot be done on the login node. Instead, users must either use their own workstations or the interactive or development queues.&lt;br /&gt;
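For illustration, the production stage might then be submitted with a batch script like the hypothetical fragment below; the queue name, resource line, and binary names are placeholders, not the actual CUNY HPC Center settings, so check the submission-script page linked above for the real ones:&lt;br /&gt;

```
#!/bin/bash
# Hypothetical PBS job script -- queue, resources, and binary names are
# placeholders, not the actual CUNY HPC Center settings.
#PBS -q production
#PBS -l select=8:ncpus=1
#PBS -N gromacs_md
cd $PBS_O_WORKDIR
# The .tpr input must have been prepared beforehand (grompp) on an
# interactive or development node, not on the login node.
mpirun -np 8 mdrun_mpi -s topol.tpr -deffnm md_out
```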
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;HONDO PLUS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Hondo Plus is a versatile electronic structure code that combines work from&lt;br /&gt;
the original Hondo application developed by Harry King in the lab of Michel Dupuis&lt;br /&gt;
and John Rys, and that of numerous subsequent contributors. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It is currently distributed from the research lab of Dr. Donald Truhlar at the University &lt;br /&gt;
of Minnesota.  Part of the advantage of Hondo Plus is the availability of source&lt;br /&gt;
implementations of a wide variety of model chemistries developed over its lifetime&lt;br /&gt;
that researchers can adapt to their particular needs.  The license to use the code requires&lt;br /&gt;
a literature citation which is documented in the Hondo Plus 5.1 manual found&lt;br /&gt;
at:&lt;br /&gt;
&lt;br /&gt;
http://comp.chem.umn.edu/hondoplus/HONDOPLUS_Manual_v5.1.2007.2.17.pdf &lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[HONDO PLUS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;HOOMD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
HOOMD performs general-purpose particle dynamics simulations, taking advantage of NVIDIA GPUs to attain a level of performance&lt;br /&gt;
equivalent to many processor cores on a fast cluster.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Unlike some other applications in the particle and molecular dynamics space, HOOMD developers have worked to implement &lt;br /&gt;
all of the code&#039;s computationally intensive kernels on the GPU, although currently only single-node, single-GPU or &lt;br /&gt;
OpenMP-GPU runs are possible. There is no MPI-GPU or distributed parallel GPU version available at this time.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;LAMMPS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
LAMMPS is a classical molecular dynamics code that models an ensemble of particles in a liquid, solid, or gaseous state.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It can model atomic, polymeric, biological, metallic, granular, and coarse-grained systems using a variety of force fields and boundary conditions.  &lt;br /&gt;
LAMMPS runs efficiently on single-processor desktop or laptop machines, but is also designed for parallel computers, including clusters with and without GPUs. &lt;br /&gt;
It will run on any parallel machine that compiles C++ and supports the MPI message-passing library. This includes distributed- or shared-memory parallel &lt;br /&gt;
machines and Beowulf-style clusters. LAMMPS can model systems with only a few particles up to millions or billions. LAMMPS is a freely-available open-source &lt;br /&gt;
code, distributed under the terms of the GNU General Public License, which means you can use or modify the code however you wish.  LAMMPS is designed to be easy to &lt;br /&gt;
modify or extend with new capabilities, such as new force fields, atom types, boundary conditions, or diagnostics. A complete description of LAMMPS can be found &lt;br /&gt;
in its on-line manual here [http://lammps.sandia.gov/doc/Manual.html] or from the full PDF manual here [http://lammps.sandia.gov/doc/Manual.pdf]. Information&lt;br /&gt;
about our installation can be found here [[LAMMPS]].&lt;br /&gt;
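To give a flavor of what a LAMMPS input deck looks like, below is the classic 3d Lennard-Jones melt example adapted from the LAMMPS manual; the parameter values are illustrative, not a recommendation for production runs:&lt;br /&gt;

```
# 3d Lennard-Jones melt (classic example from the LAMMPS manual)
units           lj
atom_style      atomic
lattice         fcc 0.8442
region          box block 0 10 0 10 0 10
create_box      1 box
create_atoms    1 box
mass            1 1.0
pair_style      lj/cut 2.5
pair_coeff      1 1 1.0 1.0 2.5
velocity        all create 1.44 87287
fix             1 all nve
run             100
```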
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;NAMD&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NAMD is a parallel molecular dynamics code designed for high-performance simulation&lt;br /&gt;
of large biomolecular systems. [http://www.ks.uiuc.edu/Research/namd].&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The main server for molecular dynamics calculations is PENZIAS, which supports both GPU and non-GPU versions of NAMD.&lt;br /&gt;
However, MPI-only (no GPU support) parallel versions of NAMD are also installed on SALK and ANDY. &lt;br /&gt;
More information about our installation can be found here [[NAMD]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;NWChem&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NWChem is an ab initio computational chemistry software package which also includes molecular dynamics (MM, MD) and coupled quantum-mechanical and molecular-dynamics functionality (QM-MD).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
NWChem has been developed by the Molecular Sciences Software group at the Department of Energy&#039;s EMSL. The software is available on PENZIAS and ANDY.&lt;br /&gt;
More information about our installation can be found here [[NWChem]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Octopus&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Octopus is a pseudopotential real-space package aimed at the simulation of the electron-ion dynamics of one-, two-, and three-dimensional finite systems subject to time-dependent electromagnetic fields.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The program is based on time-dependent density-functional theory (TDDFT) in the Kohn-Sham scheme. All quantities are expanded in a regular mesh in real space, and the simulations are performed in real time. The program has been successfully used to calculate linear and non-linear absorption spectra, harmonic spectra, laser induced fragmentation, etc. of a variety of systems.&lt;br /&gt;
More information about our installation can be found here [[OCTOPUS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;OpenMM&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenMM is both a library and a stand-alone application which provides tools for modern molecular modeling simulation. As a library it can be hooked into any code, allowing that code to do molecular modeling with minimal extra coding.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Moreover, OpenMM has a strong emphasis on hardware acceleration via GPU, thus providing not just a consistent API but much greater performance than what one could get from just about any other code available. OpenMM was developed as part of the Physics-Based Simulation project under project leader Prof. Pande.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;ORCA&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ORCA is an electronic structure program capable of carrying out geometry optimizations and of predicting a large number of spectroscopic parameters at different levels of theory.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Besides the use of Hartree-Fock theory, density functional theory (DFT), and semiempirical methods, high-level ab initio quantum chemical methods, based on the configuration interaction and coupled-cluster methods, are included in ORCA to an increasing degree.&lt;br /&gt;
More information about our installation can be found here [[ORCA]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;VMD&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
VMD is a molecular visualization program for displaying, animating, and analyzing large biomolecular systems using 3-D graphics and built-in scripting.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It was developed by The Theoretical and Computational Biophysics Group at the University of Illinois. It is documented on the [http://www.ks.uiuc.edu/Research/vmd/ TCB&#039;s homepage].&lt;br /&gt;
&lt;br /&gt;
VMD is installed on Karle. To use it from the command-line interface, log in to Karle as usual and start VMD by typing &amp;quot;vmd&amp;quot; followed by return, or alternatively use the full path: &lt;br /&gt;
&amp;quot;/share/apps/vmd/default/bin/vmd&amp;quot;&lt;br /&gt;
&lt;br /&gt;
In order to use VMD in GUI mode, log in to Karle with the -X option (see [http://wiki.csi.cuny.edu/cunyhpc/index.php/Main_Page#X11_Forwarding_or_Tunneling this article] &lt;br /&gt;
for details) and start VMD as described above.&lt;br /&gt;
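The steps above can be sketched in shell form; the fallback path is the one quoted above, while the PATH lookup is an assumption about how the login environment is set up:&lt;br /&gt;

```shell
# Prefer a 'vmd' found on PATH; otherwise fall back to the full
# installation path quoted above.
VMD_BIN=$(command -v vmd || echo /share/apps/vmd/default/bin/vmd)
echo "Using VMD binary: $VMD_BIN"
# For GUI mode, make sure the session was opened with X11 forwarding
# (ssh -X) before launching $VMD_BIN.
```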
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Computational Biology== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;ANVIO&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Anvio is an analysis and visualization platform for ‘omics data. Anvio allows various types of workflows to be &lt;br /&gt;
established. More information about our installation can be found here [[ANVIO]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BAMOVA&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Bamova is a package used to perform genetic analysis of a wide range of organisms on the basis of &lt;br /&gt;
next-generation sequence data. The software implements Bayesian Analysis of Molecular Variance and &lt;br /&gt;
different likelihood models for three different types of molecular data &lt;br /&gt;
(including two models for high throughput sequence data). For more detail on BAMOVA please visit the BAMOVA web site [http://www.uwyo.edu/buerkle/software/bamova] and manual &lt;br /&gt;
here [http://www.uwyo.edu/buerkle/software/bamova/bamova_manual_1.0.pdf]. Further information can also be found here [[BAMOVA]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BAYESCAN&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BAYESCAN is a population genomics software package.  It identifies outlier loci and is applicable &lt;br /&gt;
to both dominant and codominant data. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;BayeScan aims to identify candidate loci under natural selection from &lt;br /&gt;
genetic data, using differences in allele frequencies between populations.  BayeScan is &lt;br /&gt;
based on the multinomial-Dirichlet model.  One of the scenarios covered consists of an&lt;br /&gt;
island model in which subpopulation allele frequencies are correlated through a common &lt;br /&gt;
migrant gene pool from which they differ in varying degrees.  The difference in allele frequency &lt;br /&gt;
between this common gene pool and each subpopulation is measured by a subpopulation-specific &lt;br /&gt;
FST coefficient.  Therefore, this formulation can consider realistic ecological scenarios &lt;br /&gt;
where the effective size and the immigration rate may differ among subpopulations.&lt;br /&gt;
&lt;br /&gt;
More detailed information on Bayescan can be found at the web site here [http://cmpg.unibe.ch/software/bayescan/index.html]&lt;br /&gt;
and in the manual here [http://cmpg.unibe.ch/software/bayescan/files/BayeScan2.1_manual.pdf]. More information about our &lt;br /&gt;
installation can be found here [[BAYESCAN]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BEST&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BEST is an application aimed at estimating gene trees and the species tree from multilocus sequences.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The program uses information from multiple gene trees and performs a Bayesian analysis to estimate the &lt;br /&gt;
topology of the species tree, divergence times and population sizes.  &lt;br /&gt;
&lt;br /&gt;
It provides a new approach for estimating the mutation-rate-based&lt;br /&gt;
phylogenetic relationships among species.  Its method accounts for deep coalescence,&lt;br /&gt;
but not for other complicating issues such as horizontal transfer or gene duplication. The&lt;br /&gt;
program works in conjunction with the popular Bayesian phylogenetics package, MrBayes&lt;br /&gt;
(Ronquist and Huelsenbeck, Bioinformatics, 2003).  BEST&#039;s parameters are defined using&lt;br /&gt;
the &#039;prset&#039; command from MrBayes.  Details on BEST&#039;s capabilities and options are available&lt;br /&gt;
at the BEST web site here [http://www.stat.osu.edu/~dkp/BEST/introduction]. More information&lt;br /&gt;
about our installation is available here [[BEST]].&lt;br /&gt;
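Since BEST is driven through MrBayes, its settings appear as extra &#039;prset&#039; options inside a MrBayes command block. The fragment below is a hypothetical sketch: the option names (thetapr, GeneMuPr) and values are assumptions to illustrate the style, not verified BEST settings; consult the BEST site above for the real ones:&lt;br /&gt;

```
begin mrbayes;
  [ Hypothetical BEST-style priors; names and values are illustrative ]
  prset thetapr=invgamma(3,0.003) GeneMuPr=uniform(0.5,1.5);
  mcmc ngen=1000000 samplefreq=100;
end;
```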
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BEAST&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BEAST is a powerful and flexible evolutionary analysis package for molecular sequence variation. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The package implements a family of Markov chain Monte Carlo (MCMC) algorithms for Bayesian phylogenetic inference, divergence time dating, coalescent analysis, phylogeography and related molecular evolutionary analyses. It is a cross-platform Java program for Bayesian MCMC analysis of molecular sequences. It is entirely oriented towards rooted, time-measured phylogenies inferred using strict or relaxed molecular clock models. It can be used as a method of reconstructing phylogenies, but is also a framework for testing evolutionary hypotheses without conditioning on a single tree topology.  BEAST uses MCMC to average over tree space, so that each tree is weighted proportionally to its posterior probability. The distribution includes a simple-to-use interface program called &#039;BEAUti&#039; for setting up standard analyses and a suite of programs for analysing the results. For more detail on BEAST (and BEAUti) please visit the BEAST web site [http://beast.bio.ed.ac.uk/Main_Page]. More information about our installation can be found here [http://wiki.csi.cuny.edu/cunyhpc/index.php/Template:BEAST BEAST].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BOWTIE2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BOWTIE2 is an ultrafast and memory-efficient tool for aligning sequencing reads to long reference sequences. It is particularly good at aligning reads of about 50 up to 100s or 1,000s of characters, and particularly good at aligning to relatively long (e.g. mammalian) genomes.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
BOWTIE2 indexes the genome with an FM Index to keep its memory&lt;br /&gt;
footprint small: for the human genome, its memory footprint is typically around 3.2 GB. BOWTIE2 supports gapped,&lt;br /&gt;
local, and paired-end alignment modes. BOWTIE2 is part of a sequence alignment and analysis tool chain developed&lt;br /&gt;
at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics&lt;br /&gt;
and Computational Biology.  The other tools in this collection, CUFFLINKS, SAMTOOLS, and TOPHAT are also installed at&lt;br /&gt;
the CUNY HPC Center. Additional information can be found at the BOWTIE2 home page here [http://bowtie-bio.sourceforge.net/bowtie2/index.shtml].&lt;br /&gt;
Information about our installation can be found here [[BOWTIE2]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BPP2&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BPP2 uses a Bayesian modeling approach to generate the posterior probabilities of species assignments taking into account uncertainties due to unknown gene trees and the ancestral coalescent process. For tractability, it relies on a user-specified guide tree to avoid integrating over all possible species delimitations.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Additional information can be found at the download site here [http://abacus.gene.ucl.ac.uk/software.html]. More information about our installation can be found here [[BPP2]].&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BROWNIE&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
BROWNIE is a program for analyzing rates of continuous character evolution and looking for substantial rate differences in different parts of a tree using likelihood&lt;br /&gt;
ratio tests and Akaike Information Criterion (AIC) statistics. It now also implements many other methods for examining trait evolution and methods for doing species&lt;br /&gt;
delimitation. More information about our installation can be found here [[BROWNIE]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;CUFFLINKS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CUFFLINKS assembles transcripts, estimates their abundances, and tests for differential expression and regulation in&lt;br /&gt;
RNA-Seq samples.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It accepts aligned RNA-Seq reads and assembles the alignments into a parsimonious set of transcripts.&lt;br /&gt;
CUFFLINKS then estimates the relative abundances of these transcripts based on how many reads support each one, taking&lt;br /&gt;
into account biases in library preparation protocols.  CUFFLINKS is part of a sequence alignment and analysis tool chain developed&lt;br /&gt;
at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics&lt;br /&gt;
and Computational Biology.  The other tools in this collection, BOWTIE, SAMTOOLS, and TOPHAT, are also installed at&lt;br /&gt;
the CUNY HPC Center. Additional information can be found at the CUFFLINKS home page here [http://abacus.gene.ucl.ac.uk/software.html].&lt;br /&gt;
More information about our installation can be found here [[CUFFLINKS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GARLI&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GARLI is a program that performs phylogenetic inference using the maximum-likelihood criterion.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Several sequence types are supported, including nucleotide, amino acid and codon. Version 2.0 adds support for&lt;br /&gt;
partitioned models and morphology-like data types. It is usable on all operating systems, and is written and&lt;br /&gt;
maintained by Derrick Zwickl at the University of Texas at Austin.  Additional information can be found&lt;br /&gt;
on the GARLI Wiki here [https://www.nescent.org/wg_garli/Main_Page]. More information about our installation &lt;br /&gt;
can be found here [[GARLI]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MPFR&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The MPFR library is a C library for multiple-precision floating-point computations with correct rounding. MPFR has continuously been supported by &lt;br /&gt;
INRIA, and the current main authors come from the Caramel and AriC project-teams at Loria (Nancy, France) and LIP (Lyon, France), respectively; see &lt;br /&gt;
more on the credit page.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
MPFR is based on the GMP multiple-precision library. The main goal of MPFR is to provide a library for multiple-precision &lt;br /&gt;
floating-point computation which is both efficient and has well-defined semantics. It copies the good ideas from the ANSI/IEEE-754 standard for &lt;br /&gt;
double-precision floating-point arithmetic (53-bit significand). The library is installed on PENZIAS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MRBAYES&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MrBayes is a program for the Bayesian estimation of phylogeny.  Bayesian inference of&lt;br /&gt;
phylogeny is based upon a quantity called the posterior probability distribution of trees,&lt;br /&gt;
which is the probability of a tree conditioned on certain observations.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The conditioning is&lt;br /&gt;
accomplished using Bayes&#039;s theorem. The posterior probability distribution of trees is&lt;br /&gt;
impossible to calculate analytically; instead, MrBayes uses a simulation technique called&lt;br /&gt;
Markov chain Monte Carlo (or MCMC) to approximate the posterior probabilities of trees.&lt;br /&gt;
More information about our installation can be found here [[MRBAYES]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;msABC&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
msABC is a program for simulating various neutral evolutionary demographic scenarios&lt;br /&gt;
based on the software ms (Hudson 2002). msABC extends ms, calculating a multitude of&lt;br /&gt;
summary statistics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Therefore, msABC is suitable for performing the sampling step of an&lt;br /&gt;
Approximate Bayesian Computation analysis (ABC), under various neutral demographic&lt;br /&gt;
models. The main advantages of msABC are (i) use of various prior distributions, such as&lt;br /&gt;
uniform, Gaussian, log-normal, and gamma, (ii) implementation of a multitude of summary statistics&lt;br /&gt;
for one or more populations, (iii) efficient implementation, which allows the analysis of&lt;br /&gt;
hundreds of loci and chromosomes even on a single computer, (iv) extended flexibility, such&lt;br /&gt;
as simulation of loci of variable size and simulation of missing data.&lt;br /&gt;
More information about our installation can be found here [[msABC]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MSMS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MSMS is a tool to generate sequence samples under both neutral models and single locus selection models.&lt;br /&gt;
MSMS permits the full range of demographic models provided by its relative MS (Hudson, 2002).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
In particular, it allows for multiple demes with arbitrary migration patterns, population growth and decay in each deme, and&lt;br /&gt;
for population splits and mergers. Selection (including dominance) can depend on the deme and also change&lt;br /&gt;
with time.&lt;br /&gt;
More information about our installation can be found here [[MSMS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;POPABC&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PopABC is a computer package to estimate historical demographic parameters of closely related species/populations (e.g. population size, migration rate, mutation rate, recombination rate, splitting events) within an isolation-with-migration model.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The software performs coalescent simulation in the framework of approximate Bayesian computation (ABC, Beaumont et al, 2002). PopABC can also be used to perform Bayesian model choice to discriminate between different demographic scenarios. The program can be used either for research or for education and teaching purposes. Further details and a manual can be found at the POPABC website here [http://code.google.com/p/popabc]&lt;br /&gt;
More information about our installation can be found here [[POPABC]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;PHOENICS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PHOENICS is an integrated Computational Fluid Dynamics (CFD) package for the preparation, simulation, and visualization of&lt;br /&gt;
processes involving fluid flow, heat or mass transfer, chemical reaction, and/or combustion in engineering equipment, building&lt;br /&gt;
design, and the environment.  More detail is available at the CHAM website, here http://www.cham.co.uk. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Although we expect most users to pre- and post-process their jobs on office-local clients, the CUNY HPC Center has installed&lt;br /&gt;
the Unix version of the &#039;&#039;entire&#039;&#039; PHOENICS package on ANDY.   PHOENICS is installed in /share/apps/phoenics/default where all&lt;br /&gt;
the standard PHOENICS directories are located (d_allpro, d_earth, d_enviro, d_photo, d_priv1, d_satell, etc.).  Of particular interest&lt;br /&gt;
on ANDY is the MPI parallel version of the &#039;earth&#039; executable &#039;parexe&#039; which makes full use of the parallel processing power of the &lt;br /&gt;
ANDY cluster for larger individual jobs.  While the parallel scaling properties of PHOENICS jobs will vary depending on the job size,&lt;br /&gt;
processor type, and the cluster interconnect, larger workloads will generally scale and run efficiently on 8 to 32 processors,&lt;br /&gt;
while smaller problems will scale efficiently only up to about 4 processors.  More detail on parallel PHOENICS is available at&lt;br /&gt;
http://www.cham.co.uk/products/parallel.php.   Aside from the tightly coupled MPI parallelism of &#039;parexe&#039;, users can run multiple&lt;br /&gt;
instances of the non-parallel modules on ANDY (including the serial &#039;earexe&#039; module) when a parametric approach can be used&lt;br /&gt;
to solve their problems.&lt;br /&gt;
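As a sketch of how a parallel PHOENICS job might be submitted in batch, the SLURM script below follows the pattern used for other MPI applications on this page.  Note that the &#039;phoenics&#039; module name and the direct invocation of &#039;parexe&#039; via mpirun are assumptions; consult the installation under /share/apps/phoenics/default for the exact launch procedure.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name phoenics_run&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=8&lt;br /&gt;
&lt;br /&gt;
# Change to the directory from which the job was submitted&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
# Assumed module name; launch the MPI-parallel &#039;earth&#039; executable on 8 cores&lt;br /&gt;
module load phoenics&lt;br /&gt;
mpirun -np 8 parexe&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;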
More information about our installation can be found here [[PHOENICS]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;PHRAP-PHRED&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PHRAP and PHRED are part of the DNA sequence analysis tool set that also includes the programs&lt;br /&gt;
CROSSMATCH and SWAT.  These tools are described in detail here [http://www.phrap.org/phredphrapconsed.html],&lt;br /&gt;
but a brief description of both, extracted from their manuals, follows.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
PHRED and PHRAP (along with CONSED) can be used for both small sequence assemblies and larger shotgun analyses. This makes the&lt;br /&gt;
tools a perhaps under-utilized set for smaller non-genomic groups.  Some variables may need to be adjusted,&lt;br /&gt;
particularly in CONSED, but researchers who have multiple sequences from a small locus can use the &lt;br /&gt;
suite, starting from their chromatogram files.  &lt;br /&gt;
More information about our installation can be found here [[PHRAP-PHRED]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;PyRAD&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Reduced-representation genomic sequence data (e.g., RADseq, GBS, ddRAD) are commonly used to study population-level research questions and consequently most software packages for assembling or analyzing such data are designed for sequences with little variation across samples.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Phylogenetic analyses typically include species with deeper divergence times (more variable loci across samples) and thus a different approach to clustering and identifying orthologs will perform better. pyRAD is intended for use with any type of restriction-site associated DNA. It currently supports RAD, ddRAD, PE-ddRAD, GBS, PE-GBS, EzRAD, PE-EzRAD, 2B-RAD, nextRAD, and can be extended to other types.&lt;br /&gt;
More information about our installation can be found here [[PyRAD]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;RAXML&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Randomized Axelerated Maximum Likelihood (RAxML) is a program for sequential and parallel&lt;br /&gt;
maximum likelihood based inference of large phylogenetic trees.  It is a descendant of fastDNAml,&lt;br /&gt;
which in turn was derived from Joe Felsenstein’s DNAml, which is part of the PHYLIP package.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
RAxML is installed at the CUNY HPC Center on ANDY in multiple versions, in both serial and MPI-parallel builds.  The MPI-parallel version should be run on four or more cores and is also installed on PENZIAS. &lt;br /&gt;
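A minimal SLURM script for an MPI-parallel RAxML run, analogous to the other MPI job scripts on this page, is sketched below.  The &#039;raxml&#039; module name and the &#039;raxmlHPC-MPI&#039; executable name are assumptions, and the file names are placeholders; check the module files on the target system for the exact names.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name raxml_run&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=4&lt;br /&gt;
&lt;br /&gt;
# Change to the directory from which the job was submitted&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
# Assumed module and executable names; the MPI build parallelizes&lt;br /&gt;
# over the independent searches requested with -N&lt;br /&gt;
module load raxml&lt;br /&gt;
mpirun -np 4 raxmlHPC-MPI -s &amp;lt;alignment.phy&amp;gt; -n run1 -m GTRGAMMA -p 12345 -N 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;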
More information about our installation can be found here [[RAXML]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Structurama&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Structurama is a program for inferring population structure from genetic data. The program assumes that the sampled loci&lt;br /&gt;
are in linkage equilibrium and that the allele frequencies for each population are drawn from a Dirichlet probability distribution. Two different models for population structure are implemented.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
First, Structurama offers the method of Pritchard et al. (2000) in which the number of populations is considered fixed. The program also allows the number of populations to be a random variable following a Dirichlet process prior (Pella and Masuda, 2006; Huelsenbeck and Andolfatto, 2007).&lt;br /&gt;
More information about our installation can be found here [[STRUCTURAMA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Structure&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The program Structure is a free software package for using multi-locus genotype data to investigate&lt;br /&gt;
population structure.  Its uses include inferring the presence of distinct populations, assigning individuals&lt;br /&gt;
to populations, studying hybrid zones, identifying migrants and admixed individuals, and estimating&lt;br /&gt;
population allele frequencies in situations where many individuals are migrants or admixed.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;It can be applied to most of the commonly used genetic markers, including SNPs, microsatellites, RFLPs, and AFLPs. More detailed information about Structure can be found at the web site here [http://pritch.bsd.uchicago.edu/structure.html]. Structure is installed on ANDY at the CUNY HPC Center and is a serial program. &lt;br /&gt;
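Because Structure is serial, a one-core batch job suffices.  A sketch of such a SLURM script follows; the &#039;structure&#039; module name is an assumption, the input and output file names are placeholders, and the mainparams/extraparams files are the standard Structure parameter files you must prepare first.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name structure_run&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
&lt;br /&gt;
# Change to the directory from which the job was submitted&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
# Assumed module name; run serially with K=3 assumed populations&lt;br /&gt;
module load structure&lt;br /&gt;
structure -m mainparams -e extraparams -K 3 -i &amp;lt;input_file&amp;gt; -o &amp;lt;output_file&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;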
More information about our installation can be found here [[STRUCTURE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;TOPHAT&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
TOPHAT is a fast splice junction mapper for RNA-Seq reads. It aligns RNA-Seq reads to mammalian-sized&lt;br /&gt;
genomes using the ultra high-throughput short read aligner Bowtie, and then analyzes the mapping results&lt;br /&gt;
to identify splice junctions between exons.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
TOPHAT is part of a sequence alignment and analysis tool chain developed at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics and Computational Biology.&lt;br /&gt;
More information about our installation can be found here [[TOPHAT]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Trinity&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Trinity, developed at the Broad Institute and the Hebrew University of Jerusalem, represents a novel method for the efficient and robust de novo reconstruction of transcriptomes from RNA-seq data.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Trinity combines three independent software modules: Inchworm, Chrysalis, and Butterfly, applied sequentially to process large volumes of RNA-seq reads. Trinity partitions the sequence data into many individual de Bruijn graphs, each representing the transcriptional complexity at a given gene or locus, and then processes each graph independently to extract full-length splicing isoforms and to tease apart transcripts derived from paralogous genes.&lt;br /&gt;
More information about our installation can be found here [[TRINITY]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;VELVET&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Velvet is a set of algorithms for &#039;&#039;de novo&#039;&#039; short read assembly using de Bruijn graphs. It was developed at the &lt;br /&gt;
European Bioinformatics Institute, Cambridge, UK.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
More information about our installation can be found here [[VELVET]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Computational Genomics, Proteomics, Microbiomics, Genetics ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;AUGUSTUS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
AUGUSTUS is a program that predicts genes in eukaryotic genomic sequences. It is gene-finding software based on Hidden Markov Models (HMMs), &lt;br /&gt;
described in papers by Stanke and Waack (2003), Stanke et al. (2006a, 2006b), and Stanke et al. (2008).  The local version of the program is installed on &lt;br /&gt;
Penzias. More information can be found here: [[AUGUSTUS]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;CONSED&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CONSED is a DNA sequence analysis finishing tool that provides sequence viewing, editing, alignment, and&lt;br /&gt;
assembly capabilities from an X Windows graphical user interface (GUI).  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It makes extensive use of other non-graphical&lt;br /&gt;
and underlying sequence analysis tools including PHRED, PHRAP, and CROSSMATCH that may also be used separately&lt;br /&gt;
and are described else where in this document.  It also includes a viewer called BAMVIEW.  The CONSED tool chain is&lt;br /&gt;
developed and maintained at the University of Washington and is described&lt;br /&gt;
more completely here [http://bozeman.mbt.washington.edu/consed/consed.html]&lt;br /&gt;
CONSED is provided at the CUNY HPC Center under an academic license that allows use, but not the copying or&lt;br /&gt;
outbound transfer of any of the executables or files distributed under this academic license.  The license is not &lt;br /&gt;
transferable in any way and users wishing to run the application at their own site must acquire a license directly&lt;br /&gt;
from the authors.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center supports CONSED version 23.0 for interactive use on KARLE.  CONSED 23.0 and the tool&lt;br /&gt;
chain described above are also installed on ANDY to allow for batch use of the underlying support tools mentioned above&lt;br /&gt;
and described in detail below.  In general, running GUI-based applications on ANDY&#039;s login node is discouraged.  There&lt;br /&gt;
should be little need to do this, as KARLE is on the periphery of the CUNY HPC network, making login there direct, and&lt;br /&gt;
KARLE shares its HOME directory file system with ANDY, so files created on either system are immediately available on&lt;br /&gt;
the other.&lt;br /&gt;
&lt;br /&gt;
Rather than rewrite portions of the CONSED manual here, users are directed to the manual&#039;s &amp;quot;Quick Tour&amp;quot; section&lt;br /&gt;
here [http://bozeman.mbt.washington.edu/consed/distributions/README.23.0.txt] and asked to walk through some&lt;br /&gt;
of the exercises after logging into KARLE.  If problems or questions come up, please post them to &amp;quot;hpchelp@csi.cuny.edu&amp;quot;.&lt;br /&gt;
The CONSED 23.0 distribution is installed on KARLE in the following directory:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/share/apps/consed/default&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
All the files in the distribution can be found there.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;ExaML&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ExaML (Exascale Maximum Likelihood) is a code for phylogenetic inference using MPI. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The code is installed only on Penzias and implements the popular RAxML search algorithm for maximum likelihood based inference of phylogenetic trees. &lt;br /&gt;
&lt;br /&gt;
It uses a radically new MPI parallelization approach that yields improved parallel efficiency, in particular on partitioned multi-gene or whole-genome datasets.&lt;br /&gt;
&lt;br /&gt;
When using ExaML please cite the following paper:&lt;br /&gt;
&lt;br /&gt;
Alexey M. Kozlov, Andre J. Aberer, Alexandros Stamatakis: &amp;quot;ExaML Version 3: A Tool for Phylogenomic Analyses on Supercomputers.&amp;quot; Bioinformatics (2015) 31 (15): 2577-2579.&lt;br /&gt;
&lt;br /&gt;
It is up to 4 times faster than RAxML-Light [1].&lt;br /&gt;
&lt;br /&gt;
Like RAxML-Light, ExaML implements checkpointing, SSE3 and AVX vectorization, and memory-saving techniques.&lt;br /&gt;
&lt;br /&gt;
[1] A. Stamatakis, A.J. Aberer, C. Goll, S.A. Smith, S.A. Berger, F. Izquierdo-Carrasco: &amp;quot;RAxML-Light: A Tool for computing TeraByte Phylogenies&amp;quot;, Bioinformatics 2012; doi: 10.1093/bioinformatics/bts309.&lt;br /&gt;
&lt;br /&gt;
The run script for a parallel ExaML job is analogous to the one for running RAxML on PENZIAS and ANDY.&lt;br /&gt;
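For concreteness, such a script might look like the sketch below.  The &#039;examl&#039; module name is an assumption, the file names are placeholders, and the alignment must first be converted to ExaML&#039;s binary format (the distribution provides a parser tool for this step).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name examl_run&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=4&lt;br /&gt;
&lt;br /&gt;
# Change to the directory from which the job was submitted&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
# Assumed module name; -s takes the binary alignment, -t a starting tree&lt;br /&gt;
module load examl&lt;br /&gt;
mpirun -np 4 examl -s &amp;lt;alignment.binary&amp;gt; -t &amp;lt;starting_tree&amp;gt; -m GAMMA -n run1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;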
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;ExaBayes&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ExaBayes is a software package for Bayesian tree inference. It is particularly suitable for large-scale analyses on computer clusters. It is installed on the PENZIAS server at the CUNY HPC Center. &lt;br /&gt;
The installed package is the MPI-parallel version. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Availability:&#039;&#039;&#039; PENZIAS&lt;br /&gt;
&#039;&#039;&#039;Module file:&#039;&#039;&#039; exabayes&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Citation&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
Fredrik Ronquist, Maxim Teslenko, Paul van der Mark, Daniel L Ayres, Aaron Darling, Sebastian Höhna, Bret Larget, Liang Liu, Marc a Suchard, and John P Huelsenbeck. MrBayes 3.2: efficient Bayesian phylogenetic inference and model choice across a large model space. Systematic biology, 61(3):539--42, May 2012.&lt;br /&gt;
&lt;br /&gt;
Alexei J Drummond, Marc a Suchard, Dong Xie, and Andrew Rambaut. Bayesian phylogenetics with BEAUti and the BEAST 1.7. Molecular biology and evolution, 29(8):1969--73, August 2012. &lt;br /&gt;
&lt;br /&gt;
Clemens Lakner, Paul van der Mark, John P Huelsenbeck, Bret Larget, and Fredrik Ronquist. Efficiency of Markov chain Monte Carlo tree proposals in Bayesian phylogenetics. Systematic biology, 57(1):86--103, February 2008. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Use:&#039;&#039;&#039; An example SLURM script to run ExaBayes on PENZIAS is given below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name=&amp;lt;name_of_job&amp;gt;&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=2&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to the working directory in SLURM&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
mpirun -np 2 exabayes &amp;lt;input_file&amp;gt; &amp;gt; output_file&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
More information about the application, along with sample workflows, is available on the ExaBayes web site:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://sco.h-its.org/exelixis/web/software/exabayes/manual/index.html#sec-11&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GENOMEPOP2&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GenomePop2 is a newer and specialized version of the older program GenomePop. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GenomePop2 is designed to manage SNPs under more flexible and useful settings that are controlled by the user.  &lt;br /&gt;
If you need models with more than two alleles, you should use the older GenomePop program.  &lt;br /&gt;
&lt;br /&gt;
GenomePop2 allows the forward simulation of sequences of biallelic positions. As in the previous version, a number of evolutionary&lt;br /&gt;
and demographic settings are allowed. Several populations under any migration model can be implemented. Each population consists&lt;br /&gt;
of a number N of individuals.  Each individual is represented by one (haploids) or two (diploids) chromosomes with constant or variable&lt;br /&gt;
(hotspots) recombination between binary sites. The fitness model is multiplicative, with each derived allele having a multiplicative effect&lt;br /&gt;
of (1-s * h-E) onto the global fitness value. By default E=0 and h=0.5 in diploids, but 1 in homozygotes or in haploids. Selective nucleotide&lt;br /&gt;
sites undergoing directional selection (positive or negative) in different populations can be defined. In addition, bottlenecks and/or&lt;br /&gt;
population expansion scenarios can be set up by the user for a desired number of generations. Several runs can be executed, and&lt;br /&gt;
a sample of user-defined size is obtained for each run and population.  For more detail on how to use GenomePop2, please visit the&lt;br /&gt;
web site here [http://webs.uvigo.es/acraaj/GenomePop2.htm]. More information about our installation can be found here [[GENOMEPOP2]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;HUMAnN2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
HUMAnN is a pipeline for efficiently and accurately profiling the presence/absence and abundance of microbial pathways in a community from metagenomic or metatranscriptomic sequencing data (typically millions of short DNA/RNA reads). HUMAnN2 is the next generation of HUMAnN (HMP Unified Metabolic Analysis Network). Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/humann2&lt;br /&gt;
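As a sketch, a typical HUMAnN2 run takes a FASTQ file of sequencing reads and writes its pathway profiles to an output directory.  The &#039;humann2&#039; module name is an assumption, and the file names are placeholders:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Assumed module name; profile one metagenomic sample on 4 threads&lt;br /&gt;
module load humann2&lt;br /&gt;
humann2 --input &amp;lt;sample.fastq&amp;gt; --output &amp;lt;output_dir&amp;gt; --threads 4&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;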
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;IMa2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The IMa2 application performs calculations under the ‘Isolation with Migration’ model using Bayesian inference and Markov &lt;br /&gt;
chain Monte Carlo methods. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The major conceptual addition that distinguishes IMa2 from the&lt;br /&gt;
original IMa program is that it can handle data from multiple populations. This requires that the user &lt;br /&gt;
specify a phylogenetic tree. Importantly, the tree must be rooted, and the sequence in time of internal&lt;br /&gt;
nodes must be known and specified. More information on the IMa2 and IMa can be found in the user&lt;br /&gt;
manual here [http://lifesci.rutgers.edu/%7Eheylab/ProgramsandData/Programs/IMa2/Using_IMa2_8_24_2011.pdf].&lt;br /&gt;
Information about our installation can be found here [[IMA2]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;I-TASSER&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
I-TASSER is a platform for protein structure and function predictions. 3D models are built based on multiple-threading alignments by LOMETS and iterative template fragment assembly simulations; function insights are derived by matching the 3D models with the BioLiP protein function database. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/itasser&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;LAMARC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
LAMARC is a program which estimates population-genetic parameters such as population size, population growth rate,&lt;br /&gt;
recombination rate, and migration rates.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It approximates a summation over all possible genealogies that could explain&lt;br /&gt;
the observed sample, which may be sequence, SNP, microsatellite, or electrophoretic data.  LAMARC and its sister program&lt;br /&gt;
MIGRATE are successor programs to the older programs Coalesce, Fluctuate, and Recombine, which are no longer being&lt;br /&gt;
supported.  These programs are memory-intensive, but can run effectively on workstations. They are supported on a variety&lt;br /&gt;
of operating systems.  For more detail on LAMARC please visit the website here [http://evolution.genetics.washington.edu/lamarc/index.html],&lt;br /&gt;
read this paper [http://evolution.genetics.washington.edu/lamarc/download/bioinformatics2006-lamarc2.0.pdf], and look&lt;br /&gt;
at the documentation here [http://evolution.genetics.washington.edu/lamarc/documentation/index.html]. More information&lt;br /&gt;
about our installation can be found here [[LAMARC]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;QIIME&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
QIIME (pronounced &amp;quot;chime&amp;quot;) stands for Quantitative Insights Into Microbial Ecology. QIIME is a pipeline application that uses numerous third-party applications.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
QIIME takes users from their raw sequencing output through initial analyses such as OTU picking, taxonomic assignment, and construction of phylogenetic trees from representative sequences of OTUs, and through downstream statistical analysis, visualization, and production of publication-quality graphics.&lt;br /&gt;
More information about our installation can be found here [[QIIME]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;USEARCH&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
USEARCH is a unique sequence analysis tool with thousands of users world-wide.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
USEARCH offers search and clustering algorithms that are often orders of magnitude faster than BLAST. &lt;br /&gt;
More information about our installation can be found here [[USEARCH]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;VELVET&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Velvet is a set of algorithms for &#039;&#039;de novo&#039;&#039; short read assembly using de Bruijn graphs. It was developed at the European Bioinformatics Institute, Cambridge, UK. More information about our installation can be found here [[VELVET]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;VSEARCH&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
VSEARCH is an open-source alternative to USEARCH.  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
VSEARCH stands for vectorized search, as the tool takes advantage of parallelism in the form of SIMD vectorization as well as multiple threads to perform accurate alignments at high speed. VSEARCH uses an optimal global aligner (full dynamic programming Needleman-Wunsch), in contrast to USEARCH which by default uses a heuristic seed and extend aligner. This usually results in more accurate alignments and overall improved sensitivity (recall) with VSEARCH, especially for alignments with gaps. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Additional details on VSEARCH can be found at: [https://github.com/torognes/vsearch this link]&lt;br /&gt;
&lt;br /&gt;
VSEARCH is installed on the PENZIAS HPC cluster. To start using VSEARCH, load the corresponding module first:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load vsearch  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
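&lt;br /&gt;
With the module loaded, a typical clustering run can be sketched as follows.  The file names are placeholders; --cluster_fast, --id, --centroids, and --threads are standard VSEARCH options:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Cluster reads at 97% identity, writing one centroid per cluster&lt;br /&gt;
vsearch --cluster_fast &amp;lt;reads.fasta&amp;gt; --id 0.97 --centroids &amp;lt;otus.fasta&amp;gt; --threads 4&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;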
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Math, Engineering, Computer Science== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;ADCIRC&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ADCIRC is a system of programs for solving time-dependent, free-surface, circulation and transport problems in two and three dimensions.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;  These programs utilize the finite element method in space allowing the use of highly flexible, unstructured grids. The ADCIRC distribution includes and integrates the METIS tool for unstructured grid generation. In addition, ADCIRC includes a distribution of SWAN to which it can be coupled to add a shore wave simulation model. Typical ADCIRC applications have included: (i) modeling tides and wind driven circulation, (ii) analysis of hurricane storm surge and flooding, (iii) dredging feasibility and material disposal studies, (iv) larval transport studies, (v) near shore marine operations. For more detail on using ADCIRC, please visit the ADCIRC website here [http://adcirc.org/index.html] and read the ADCIRC manual [http://adcirc.org/documentv49/ADCIRC_title_page.html]. Details on using SWAN with ADCIRC can be found here [http://www.caseydietrich.com/swanadcirc] and at the SWAN web site [http://swanmodel.sourceforge.net]. More information about use and set-up can be found here [[ADCIRC]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;FDPPDIV&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
FDPPDiv is a program for estimating divergence times on a fixed, rooted tree topology. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
FDPPDiv offers two alternative approaches to divergence time estimation. &lt;br /&gt;
The DPPDiv part refers to the Dirichlet Process Prior (DPP) model for divergence &lt;br /&gt;
time estimation, and the F prefix (for Fossil) refers to the new Fossil Birth-Death approach. &lt;br /&gt;
More information about our installation can be found here [[FDPPDIV]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GAUSS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GAUSS is an easy-to-use data analysis, mathematical, and statistical environment based on the powerful, fast, and efficient GAUSS Matrix Programming Language.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GAUSS is used to solve real-world mathematical and data analysis problems of exceptionally large &lt;br /&gt;
scale. GAUSS is currently available on ANDY. At the CUNY HPC Center&lt;br /&gt;
GAUSS is typically run in serial mode. (Note:  GAUSS should not be confused with the&lt;br /&gt;
computational chemistry application Gaussian.) More information about our installation can &lt;br /&gt;
be found here [[GAUSS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Hapsembler&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Hapsembler is a haplotype-specific genome assembly toolkit that is designed for genomes that are rich in SNPs and other types of polymorphism. Hapsembler can be used to assemble reads from a variety of platforms including Illumina and Roche/454.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;  Hapsembler is currently installed on the APPEL system. To access the Hapsembler binaries, load the hapsembler module with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load hapsembler&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;HOPSPACK&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
HOPSPACK stands for Hybrid Optimization Parallel Search Package, a framework designed to help users solve a wide range of derivative-free optimization problems.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;The latter can be noisy, non-convex, or non-smooth. The basic optimization problem addressed is to minimize an objective function f(x) of n unknowns subject to the constraints:&lt;br /&gt;
A_I x ≥ b_I,   A_E x = b_E,   c_I(x) ≥ 0,   c_E(x) = 0,   l ≤ x ≤ u&lt;br /&gt;
The first two constraints specify linear inequalities and equalities with coefficient matrices A_I and A_E. The next two constraints describe nonlinear inequalities and equalities captured in the functions c_I(x) and c_E(x). The final constraints denote lower and upper bounds on the variables. HOPSPACK allows variables to be continuous or integer-valued and has provisions for multi-objective optimization problems. In general, the functions f(x), c_I(x), and c_E(x) can be noisy and nonsmooth, although most algorithms perform best on deterministic functions with continuous derivatives.&lt;br /&gt;
&lt;br /&gt;
Users may design and implement their own solver, either by writing their own code or by building on existing solvers already in the framework. Because all solvers (called citizens) are members of the same global class, they can share assigned resources.&lt;br /&gt;
The main features of the package are:&lt;br /&gt;
&lt;br /&gt;
*Only function values are required for the optimization.&lt;br /&gt;
*The user must provide a separate program that can evaluate the objective and nonlinear constraint functions at a given point.&lt;br /&gt;
*A robust implementation of the Generating Set Search (GSS) solver is supplied, including the capability to handle linear constraints.&lt;br /&gt;
*Multiple solvers can run simultaneously and are easily configured to share information.&lt;br /&gt;
*Solvers may share a cache of computed function and constraint evaluations to eliminate duplicate work.&lt;br /&gt;
*Solvers can initiate and control sub-problems.&lt;br /&gt;
More information about our installation can be found here [[HOPSPACK]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;LS-DYNA&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From its early development in the 1970s, LS-DYNA has evolved into a general purpose material&lt;br /&gt;
stress, collision, and crash analysis program with many built-in material and structural element&lt;br /&gt;
models. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In recent years, the code has also been adapted for both OpenMP and MPI parallel execution&lt;br /&gt;
on a variety of platforms.  The most recent version, LS-DYNA 7.1.2, is installed on &lt;br /&gt;
ANDY at the CUNY HPC Center under an academic license held by the City College of New York.&lt;br /&gt;
The use of this license to do work that is commercial in any way is prohibited.&lt;br /&gt;
&lt;br /&gt;
Details on LS-DYNA&#039;s use, input deck construction, and execution options can be found in the LS-DYNA&lt;br /&gt;
manual here [http://ftp.lstc.com/user/manuals/ls-dyna_971_manual_k_rev1.pdf]. All files related&lt;br /&gt;
to the HPC Center installation of version 971 (executables and example inputs) are located in:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
/share/apps/lsdyna/default/[bin,examples]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[LSDYNA]].&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Network Simulator-2 (NS2)&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NS2 is a discrete event simulator targeted at networking research. NS2 provides&lt;br /&gt;
substantial support for simulation of TCP, routing, and multicast protocols over&lt;br /&gt;
wired and wireless (local and satellite) networks.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It is installed on BOB at the CUNY HPC Center. For more detailed information, look [http://www.isi.edu/nsnam/ns/ here].&lt;br /&gt;
More information about our installation can be found here [[NS2]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;OpenFOAM&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenFOAM is, first and foremost, a library that users may incorporate into their own code(s). OpenFOAM is installed on PENZIAS.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
More information about our installation can be found here [[OpenFOAM]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;OpenSEES&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenSEES, the Open System for Earthquake Engineering Simulation, is an object-oriented, open source software framework.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It allows users to create both serial and parallel finite element computer applications for simulating the response of structural and geotechnical systems subjected to earthquakes and other hazards. OpenSEES is primarily written in C++ and uses several Fortran and C numerical libraries for linear equation solving, and material and element routines. The software is installed on PENZIAS.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;ParGAP&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ParGAP is built on top of the GAP system. The latter is a system for computational discrete algebra, with particular emphasis on Computational Group Theory. GAP provides a programming language, a library of thousands of functions implementing algebraic algorithms written in the GAP language, as well as large data libraries of algebraic objects.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The ParGAP (Parallel GAP) package itself provides a way of writing parallel programs using the GAP language. Former names of the package were ParGAP/MPI and GAP/MPI; the word MPI refers to Message Passing Interface, a well-known standard for parallelism. ParGAP is based on the MPI standard, and this distribution includes a subset implementation of MPI, to provide a portable layer with a high level interface to BSD sockets.&lt;br /&gt;
More information about our installation can be found here [[ParGAP]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;SAGE&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Sage can be used to study elementary and advanced, pure and applied mathematics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
This covers a huge range of mathematics, including basic algebra, calculus, elementary to very&lt;br /&gt;
advanced number theory, cryptography, numerical computation, commutative algebra, group&lt;br /&gt;
theory, combinatorics, graph theory, exact linear algebra and much more.&lt;br /&gt;
More information about our installation can be found here [[SAGE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;WRF&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The Weather Research and Forecasting (WRF) model is a specific computer program with dual use for both weather&lt;br /&gt;
forecasting and weather research.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It was created through a partnership that includes the National Oceanic and Atmospheric&lt;br /&gt;
Administration (NOAA), the National Center for Atmospheric Research (NCAR), and more than 150 other organizations&lt;br /&gt;
and universities in the United States and abroad. WRF is the latest numerical model and application to be adopted by NOAA&#039;s&lt;br /&gt;
National Weather Service as well as the U.S. military and private meteorological services. It is also being adopted by&lt;br /&gt;
government and private meteorological services worldwide. More information about our installation can be found here [[WRF]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Economics, Business, Statistics, Analytics==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;R&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
R is a free software environment for statistical computing and graphics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:15px;&amp;quot;&amp;gt;General Notes&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The R language has become a de facto standard among statisticians for the development of statistical software, and is widely used for data analysis. R is available on the following HPCC servers: Karle, Penzias, Appel, and Andy. Karle is the only machine where R can be used without submitting jobs to the SLURM manager. On all other systems users must submit their R jobs via the SLURM batch scheduler.&lt;br /&gt;
More information about our installation can be found here [[R]]&lt;br /&gt;
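The batch workflow described above can be sketched as a minimal SLURM script. This is only an illustrative sketch: the module name, script name, and resource requests below are assumptions and may differ per system.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=r_job&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
&lt;br /&gt;
# Load the R environment (module name may vary by system)&lt;br /&gt;
module load r&lt;br /&gt;
&lt;br /&gt;
# Run the analysis script non-interactively&lt;br /&gt;
Rscript myscript.R&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Submit the script with &amp;quot;sbatch&amp;quot; and monitor it with &amp;quot;squeue&amp;quot;.&lt;br /&gt;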
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;R-devel&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
R is a language and environment for statistical computing and graphics. R-devel provides both core R userspace and all R development components.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Stata/MP&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Stata is a complete, integrated statistical package that provides tools for data analysis, data management, and graphics. Stata/MP takes advantage of multiprocessor computers. The CUNY HPC Center is licensed to run Stata on up to 8 cores. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Currently Stata/MP is available for users on Karle (karle.csi.cuny.edu). &lt;br /&gt;
More information about our installation can be found here [[STATA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;SAS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
SAS (pronounced &amp;quot;sass&amp;quot;, originally Statistical Analysis System) is an integrated system of software products provided by SAS Institute Inc.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It enables the programmer to perform:&lt;br /&gt;
:*data entry, retrieval, management, and mining&lt;br /&gt;
:*report writing and graphics&lt;br /&gt;
:*statistical analysis&lt;br /&gt;
:*business planning, forecasting, and decision support&lt;br /&gt;
:*operations research and project management&lt;br /&gt;
:*quality improvement&lt;br /&gt;
:*applications development&lt;br /&gt;
:*data warehousing (extract, transform, load)&lt;br /&gt;
:*platform independent and remote computing&lt;br /&gt;
More information about our installation can be found here [[SAS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==General Development Systems==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Coming soon.&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==Tools, Libraries, Compilers==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;CGAL&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The Computational Geometry Algorithms Library (CGAL), offers data structures and algorithms.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt; &lt;br /&gt;
Examples of these are triangulations (2D constrained triangulations, and Delaunay triangulations and periodic triangulations in &lt;br /&gt;
2D and 3D), Voronoi diagrams (for 2D and 3D points, 2D additively weighted Voronoi diagrams, and segment Voronoi diagrams), polygons &lt;br /&gt;
(Boolean operations, offsets, straight skeleton), polyhedra (Boolean operations), arrangements of curves and their applications &lt;br /&gt;
(2D and 3D envelopes, Minkowski sums), mesh generation (2D Delaunay mesh generation and 3D surface and volume mesh &lt;br /&gt;
generation, skin surfaces), geometry processing (surface mesh simplification, subdivision and parameterization, as well as &lt;br /&gt;
estimation of local differential properties, and approximation of ridges and umbilics), alpha shapes, convex hull &lt;br /&gt;
algorithms (in 2D, 3D and dD), search structures (kd trees for nearest neighbor search, and range and segment trees), &lt;br /&gt;
interpolation (natural neighbor interpolation and placement of streamlines), shape analysis, fitting, and distances &lt;br /&gt;
(smallest enclosing sphere of points or spheres, smallest enclosing ellipsoid of points, principal component analysis), and &lt;br /&gt;
kinetic data structures.&lt;br /&gt;
&lt;br /&gt;
The library is installed on PENZIAS.&lt;br /&gt;
&lt;br /&gt;
More information can be found here http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/CGAL. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GMP&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
GMP is a library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and &lt;br /&gt;
floating-point numbers. There is no practical limit to the precision except the ones implied by the &lt;br /&gt;
available memory in the machine GMP runs on. GMP has a rich set of functions, and the functions have a &lt;br /&gt;
regular interface. The library is installed on PENZIAS.&lt;br /&gt;
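&lt;br /&gt;
As a minimal sketch, a C program can be compiled against GMP by linking the library. The source file name here is illustrative, and include/library paths for the PENZIAS installation may need to be added.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Compile and link a C program against GMP&lt;br /&gt;
# (add -I and -L flags if GMP is in a non-default location)&lt;br /&gt;
gcc myprog.c -lgmp -o myprog&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;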
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Gnuplot&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Gnuplot is a portable command-line driven graphing utility. It is installed on the following systems:&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
:*Karle under /usr/bin/gnuplot&lt;br /&gt;
:*Andy under /share/apps/gnuplot/default/bin/gnuplot&lt;br /&gt;
&lt;br /&gt;
Extensive documentation of gnuplot is available at the [http://www.gnuplot.info/ gnuplot&#039;s homepage].&lt;br /&gt;
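As a quick illustration, gnuplot can render a data file to an image from a short script; the file names and column choices below are illustrative.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# plot.gp -- run with: gnuplot plot.gp&lt;br /&gt;
set terminal png size 800,600&lt;br /&gt;
set output &amp;quot;energy.png&amp;quot;&lt;br /&gt;
# Plot column 2 against column 1 of a whitespace-separated data file&lt;br /&gt;
plot &amp;quot;data.dat&amp;quot; using 1:2 with lines title &amp;quot;energy&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;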
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;JULIA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Julia is a high-level, high-performance dynamic programming language for technical computing, with syntax that is familiar to users of other technical computing environments. Julia is installed on Penzias.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MAGMA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
MAGMA is a library similar to LAPACK but for hybrid architectures. MAGMA provides implementations for CUDA, Intel Xeon Phi, and OpenCL. &lt;br /&gt;
On CUNY HPCC systems, MAGMA is installed in its CUDA variant only on Penzias.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MATHEMATICA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
“Mathematica” is a fully integrated technical computing system that combines fast, high-precision numerical and symbolic computation with data visualization and programming capabilities. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Mathematica is currently installed on the CUNY HPC Center&#039;s ANDY cluster (andy.csi.cuny.edu) and the KARLE standalone server (karle.csi.cuny.edu). The basics of running Mathematica on CUNY HPC systems are presented here. Additional information on how to use Mathematica can be found at http://www.wolfram.com/learningcenter/&lt;br /&gt;
&lt;br /&gt;
More information is available in this wiki, find it here [[MATHEMATICA]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MATLAB&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The MATLAB high-performance language for technical computing&lt;br /&gt;
integrates computation, visualization, and programming in an&lt;br /&gt;
easy-to-use environment where problems and solutions are expressed in&lt;br /&gt;
familiar mathematical notation.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Typical uses include:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Math and computation&lt;br /&gt;
&lt;br /&gt;
Algorithm development&lt;br /&gt;
&lt;br /&gt;
Data acquisition&lt;br /&gt;
&lt;br /&gt;
Modeling, simulation, and prototyping&lt;br /&gt;
&lt;br /&gt;
Data analysis, exploration, and visualization&lt;br /&gt;
&lt;br /&gt;
Scientific and engineering graphics&lt;br /&gt;
&lt;br /&gt;
Application development, including graphical user interface building&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[MATLAB]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MET (Model Evaluation Tools)&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MET was developed by the National Center for Atmospheric Research (NCAR) Developmental Testbed Center (DTC) through the generous support of the U.S. Air Force Weather Agency (AFWA) and the National Oceanic and Atmospheric Administration (NOAA).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;MET is designed to be a highly-configurable, state-of-the-art suite of verification tools. It was developed using output from the Weather Research and Forecasting (WRF) modeling system but may be applied to the output of other modeling systems as well.&lt;br /&gt;
&lt;br /&gt;
MET provides a variety of verification techniques, including:&lt;br /&gt;
&lt;br /&gt;
*Standard verification scores comparing gridded model data to point-based observations&lt;br /&gt;
*Standard verification scores comparing gridded model data to gridded observations&lt;br /&gt;
*Spatial verification methods comparing gridded model data to gridded observations using neighborhood, object-based, and intensity-scale decomposition approaches&lt;br /&gt;
*Probabilistic verification methods comparing gridded model data to point-based or gridded observations&lt;br /&gt;
&lt;br /&gt;
More information about use and set-up can be found here [[MET]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Migrate&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Migrate estimates population parameters, effective population sizes and migration rates&lt;br /&gt;
of n populations, using genetic data.  It uses a coalescent theory approach taking into&lt;br /&gt;
account the history of mutations and the uncertainty of the genealogy.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;The estimates of the parameter values are obtained by either a maximum likelihood (ML) approach or Bayesian&lt;br /&gt;
inference (BI). Migrate&#039;s output is presented in a text file and in a PDF file. The PDF file&lt;br /&gt;
eventually will contain all possible analyses, including histograms of posterior distributions.&lt;br /&gt;
More information about our installation can be found here [[MIGRATE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Python&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Python is a programming language that lets you work more quickly and integrate your systems more effectively. You can learn to use Python and see almost immediate gains in productivity and lower maintenance costs. [http://www.python.org/]&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
There are two supported versions installed on Andy system: &lt;br /&gt;
&lt;br /&gt;
*Python 3.1.3 located under /share/apps/python/3.1.3/bin&lt;br /&gt;
*Python 2.7.3 located under /share/apps/epd/7.3-2/bin&lt;br /&gt;
&lt;br /&gt;
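Because two versions coexist, the safest way to get a specific interpreter is to call it by its full path. A minimal sketch follows; the exact binary names under these directories are assumptions and can be checked by listing the directories.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Run a script with Python 3.1.3&lt;br /&gt;
/share/apps/python/3.1.3/bin/python3 myscript.py&lt;br /&gt;
&lt;br /&gt;
# Run a script with Python 2.7.3 (EPD)&lt;br /&gt;
/share/apps/epd/7.3-2/bin/python myscript.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;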
More information about our installation can be found here [[PYTHON]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;SAMTOOLS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
SAMTOOLS provides various utilities for manipulating alignments in the SAM format, including sorting,&lt;br /&gt;
merging, indexing, and generating alignments in a per-position format.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
SAM (Sequence Alignment/Map) format is a generic format for storing large nucleotide sequence alignments. SAM is a compact format&lt;br /&gt;
that aims to:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Is flexible enough to store all the alignment information generated by various alignment programs;&lt;br /&gt;
&lt;br /&gt;
Is simple enough to be easily generated by alignment programs or converted from existing formats;&lt;br /&gt;
&lt;br /&gt;
Allows most operations on the alignment to work without loading the whole alignment into memory;&lt;br /&gt;
&lt;br /&gt;
Allows the file to be indexed by genomic position to efficiently retrieve all reads aligning to a locus.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[SAMTOOLS]]&lt;br /&gt;
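As an illustrative sketch, a typical SAM-to-sorted-BAM workflow with these utilities looks like the following. The file names are hypothetical, and option syntax varies slightly between SAMtools versions.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Convert SAM to the binary BAM format&lt;br /&gt;
samtools view -bS aln.sam &amp;gt; aln.bam&lt;br /&gt;
&lt;br /&gt;
# Sort by genomic position, then index for fast region queries&lt;br /&gt;
samtools sort aln.bam -o aln.sorted.bam&lt;br /&gt;
samtools index aln.sorted.bam&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;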
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Thrust Library (CUDA)&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Thrust is a C++ template library for CUDA based on the Standard Template Library (STL). Thrust allows you&lt;br /&gt;
to implement high performance parallel applications with minimal programming effort through a high-level&lt;br /&gt;
interface that is fully interoperable with CUDA C.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Thrust has been integrated into the default&lt;br /&gt;
CUDA distribution. The HPC Center currently runs CUDA as the default on PENZIAS, which includes the&lt;br /&gt;
Thrust library. More information about our installation can be found here [[THRUST]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Xmgrace&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Grace is a WYSIWYG 2D plotting tool for the X Window System and M*tif. Xmgrace is developed at the Plasma Laboratory, Weizmann Institute of Science. More information about its capabilities can be found at http://plasma-gate.weizmann.ac.il/Grace/&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Grace is installed on Karle. To use it from the command line, log in to Karle as usual and start Grace by typing &amp;quot;xmgrace&amp;quot; followed by return, or alternatively use the full path: &amp;quot;/share/apps/xmgrace/default/grace/bin/xmgrace&amp;quot;.&lt;br /&gt;
To use Grace in GUI mode, log in to Karle with the -X option (see [http://wiki.csi.cuny.edu/cunyhpc/index.php/Main_Page#X11_Forwarding_or_Tunneling this article] &lt;br /&gt;
for details) and start Xmgrace as described above.&lt;br /&gt;
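For example, a GUI session from a remote machine might look like the following; the user name is a placeholder to replace with your own.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Log in to Karle with X11 forwarding enabled&lt;br /&gt;
ssh -X your_user@karle.csi.cuny.edu&lt;br /&gt;
&lt;br /&gt;
# Start Grace (or use the full path given above)&lt;br /&gt;
xmgrace&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;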
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Alphabetical List ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==A== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;ADCIRC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ADCIRC is a system of programs for solving time-dependent, free-surface, circulation and transport problems in two and three dimensions.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;  These programs utilize the finite element method in space allowing the use of highly flexible, unstructured grids. The ADCIRC distribution includes and integrates the METIS tool for unstructured grid generation. In addition, ADCIRC includes a distribution of SWAN to which it can be coupled to add a shore wave simulation model. Typical ADCIRC applications have included: (i) modeling tides and wind driven circulation, (ii) analysis of hurricane storm surge and flooding, (iii) dredging feasibility and material disposal studies, (iv) larval transport studies, (v) near shore marine operations. For more detail on using ADCIRC, please visit the ADCIRC website here [http://adcirc.org/index.html] and read the ADCIRC manual [http://adcirc.org/documentv49/ADCIRC_title_page.html]. Details on using SWAN with ADCIRC can be found here [http://www.caseydietrich.com/swanadcirc] and at the SWAN web site [http://swanmodel.sourceforge.net]. More information about use and set-up can be found here [[ADCIRC]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;AMBER (Assisted Model Building with Energy Refinement)&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Amber is the collective name for a suite of programs for classical bio-molecular simulations. &lt;br /&gt;
The name &amp;quot;Amber&amp;quot; also denotes the family of potentials (force fields) used with Amber &lt;br /&gt;
software. Here we discuss only simulation packages, but not the force fields or free tools&lt;br /&gt;
available via AmberTools package. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/amber&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;ANVIO&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Anvio is an analysis and visualization platform for genomics data. Anvio allows various types of workflows to be &lt;br /&gt;
established. [[ANVIO]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;AUGUSTUS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
AUGUSTUS is a program that predicts genes in eukaryotic genomic sequences. It is gene-finding software based on Hidden Markov Models (HMMs), &lt;br /&gt;
described in papers by Stanke and Waack (2003), Stanke et al. (2006), Stanke et al. (2006b), and Stanke et al. (2008). The local version of the program is installed on &lt;br /&gt;
Penzias. More information can be found here: [[AUGUSTUS]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;AUTODOCK&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
AutoDock is a suite of automated docking tools.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt; It is designed to predict how small molecules, such as substrates or drug candidates, bind to a receptor of known 3D structure.  AutoDock actually consists of two main programs: &#039;&#039;autodock&#039;&#039; itself performs the docking of the ligand to a set of grids describing the target protein; &#039;&#039;autogrid&#039;&#039; pre-calculates these grids. More information about the software may be found at the autodock web-page [http://autodock.scripps.edu/]. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/autodock&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==B== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BAMOVA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Bamova is a package for performing genetic analysis of a wide range of organisms on the basis of &lt;br /&gt;
next-generation sequence data. The software implements Bayesian Analysis of Molecular Variance and &lt;br /&gt;
different likelihood models for three different types of molecular data &lt;br /&gt;
(including two models for high throughput sequence data). For more detail on BAMOVA please visit the BAMOVA web site [http://www.uwyo.edu/buerkle/software/bamova] and manual &lt;br /&gt;
here [http://www.uwyo.edu/buerkle/software/bamova/bamova_manual_1.0.pdf]. Further information can also be found here [[BAMOVA]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BAYESCAN&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BAYESCAN is a population genomics software package.  It identifies outlier loci and is applicable &lt;br /&gt;
to both dominant and codominant data. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;This program, BayeScan, aims at identifying candidate loci under natural selection from genetic data, using differences in allele frequencies between populations.  BayeScan is based on the multinomial-Dirichlet model.  One of the scenarios covered consists of an island model in which subpopulation allele frequencies are correlated through a common migrant gene pool from which they differ in varying degrees.  The difference in allele frequency between this common gene pool and each subpopulation is measured by a subpopulation-specific FST coefficient.  Therefore, this formulation can consider realistic ecological scenarios where the effective size and the immigration rate may differ among subpopulations.&lt;br /&gt;
More detailed information on Bayescan can be found at the web site here [http://cmpg.unibe.ch/software/bayescan/index.html] and in the manual here [http://cmpg.unibe.ch/software/bayescan/files/BayeScan2.1_manual.pdf]. More information about our installation can be found here [[BAYESCAN]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BEAST&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BEAST is a powerful and flexible evolutionary analysis package for molecular sequence variation. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The package implements a family of Markov chain Monte Carlo (MCMC) algorithms for Bayesian phylogenetic inference, divergence time dating, coalescent analysis, phylogeography and related molecular evolutionary analyses. It is a cross-platform Java program for Bayesian MCMC analysis of molecular sequences. It is entirely oriented towards rooted, time-measured phylogenies inferred using strict or relaxed molecular clock models. It can be used as a method of reconstructing phylogenies, but is also a framework for testing evolutionary hypotheses without conditioning on a single tree topology.  BEAST uses MCMC to average over tree space, so that each tree is weighted proportional to its posterior probability. The distribution includes a simple-to-use user-interface program called &#039;BEAUti&#039; for setting up standard analyses and a suite of programs for analysing the results. For more detail on BEAST (and BEAUti) please visit the BEAST web site [http://beast.bio.ed.ac.uk/Main_Page]. More information about our installation can be found here [http://wiki.csi.cuny.edu/cunyhpc/index.php/Template:BEAST BEAST].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BEST&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BEST is an application aimed to estimate gene trees and the species tree from multilocus sequences.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The program uses information from multiple gene trees and performs a Bayesian analysis to estimate the &lt;br /&gt;
topology of the species tree, divergence times and population sizes.  &lt;br /&gt;
&lt;br /&gt;
It provides a new approach for estimating the mutation-rate-based&lt;br /&gt;
phylogenetic relationships among species.  Its method accounts for deep coalescence,&lt;br /&gt;
but not for other complicating issues such as horizontal transfer or gene duplication. The&lt;br /&gt;
program works in conjunction with the popular Bayesian phylogenetics package, MrBayes&lt;br /&gt;
(Ronquist and Huelsenbeck, Bioinformatics, 2003).  BEST&#039;s parameters are defined using&lt;br /&gt;
the &#039;prset&#039; command from MrBayes.  Details on BEST&#039;s capabilities and options are available&lt;br /&gt;
at the BEST web site here [http://www.stat.osu.edu/~dkp/BEST/introduction]. More information&lt;br /&gt;
about our installation is available here [[BEST]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BOWTIE2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BOWTIE2 is an ultrafast and memory-efficient tool for aligning sequencing reads to long reference sequences. It is particularly good at aligning reads of about 50 up to 100s or 1,000s of characters, and particularly good at aligning to relatively long (e.g. mammalian) genomes.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
BOWTIE2 indexes the genome with an FM Index to keep its memory&lt;br /&gt;
footprint small: for the human genome, its memory footprint is typically around 3.2 GB. BOWTIE2 supports gapped,&lt;br /&gt;
local, and paired-end alignment modes. BOWTIE2 is part of a sequence alignment and analysis tool chain developed&lt;br /&gt;
at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics&lt;br /&gt;
and Computational Biology.  The other tools in this collection, CUFFLINKS, SAMTOOLS, and TOPHAT are also installed at&lt;br /&gt;
the CUNY HPC Center. Additional information can be found at the BOWTIE2 home page here [http://bowtie-bio.sourceforge.net/bowtie2/index.shtml].&lt;br /&gt;
Information about our installation can be found here [[BOWTIE2]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BPP2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BPP2 uses a Bayesian modeling approach to generate the posterior probabilities of species assignments taking into account uncertainties due to unknown gene trees and the ancestral coalescent process. For tractability, it relies on a user-specified guide tree to avoid integrating over all possible species delimitations.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Additional information can be found at the download site here [http://abacus.gene.ucl.ac.uk/software.html]. More information about our installation can be found here [[BPP2]].&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;BROWNIE&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
BROWNIE is a program for analyzing rates of continuous character evolution and looking for substantial rate differences in different parts of a tree using likelihood&lt;br /&gt;
ratio tests and Akaike Information Criterion (AIC) statistics. It now also implements many other methods for examining trait evolution and methods for doing species&lt;br /&gt;
delimitation. More information about our installation can be found here [[BROWNIE]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==C== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;CGAL&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The Computational Geometry Algorithms Library (CGAL) offers data structures and algorithms.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt; &lt;br /&gt;
Examples of these are triangulations (2D constrained triangulations, and Delaunay triangulations and periodic triangulations in &lt;br /&gt;
2D and 3D), Voronoi diagrams (for 2D and 3D points, 2D additively weighted Voronoi diagrams, and segment Voronoi diagrams), polygons &lt;br /&gt;
(Boolean operations, offsets, straight skeleton), polyhedra (Boolean operations), arrangements of curves and their applications &lt;br /&gt;
(2D and 3D envelopes, Minkowski sums), mesh generation (2D Delaunay mesh generation and 3D surface and volume mesh &lt;br /&gt;
generation, skin surfaces), geometry processing (surface mesh simplification, subdivision and parameterization, as well as &lt;br /&gt;
estimation of local differential properties, and approximation of ridges and umbilics), alpha shapes, convex hull &lt;br /&gt;
algorithms (in 2D, 3D and dD), search structures (kd trees for nearest neighbor search, and range and segment trees), &lt;br /&gt;
interpolation (natural neighbor interpolation and placement of streamlines), shape analysis, fitting, and distances &lt;br /&gt;
(smallest enclosing sphere of points or spheres, smallest enclosing ellipsoid of points, principal component analysis), and &lt;br /&gt;
kinetic data structures.&lt;br /&gt;
&lt;br /&gt;
The library is installed on PENZIAS.&lt;br /&gt;
&lt;br /&gt;
More information can be found here http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/CGAL. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;CONSED&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CONSED is a DNA sequence analysis finishing tool that provides sequence viewing, editing, alignment, and&lt;br /&gt;
assembly capabilities from an X Windows graphical user interface (GUI).  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It makes extensive use of other non-graphical&lt;br /&gt;
and underlying sequence analysis tools including PHRED, PHRAP, and CROSSMATCH that may also be used separately&lt;br /&gt;
and are described elsewhere in this document.  It also includes a viewer called BAMVIEW.  The CONSED tool chain is&lt;br /&gt;
developed and maintained at the University of Washington and is described&lt;br /&gt;
more completely here [http://bozeman.mbt.washington.edu/consed/consed.html]&lt;br /&gt;
CONSED is provided at the CUNY HPC Center under an academic license that allows use, but not the copying or&lt;br /&gt;
outbound transfer of any of the executables or files distributed under this academic license.  The license is not &lt;br /&gt;
transferable in any way and users wishing to run the application at their own site must acquire a license directly&lt;br /&gt;
from the authors.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center supports CONSED version 23.0 for interactive use on KARLE.  CONSED 23.0 and the tool&lt;br /&gt;
chain described above are also installed on ANDY to allow for the batch use of the underlying support tools mentioned above&lt;br /&gt;
and described in detail below.  In general, running GUI-based applications on ANDY&#039;s login node is discouraged.  There&lt;br /&gt;
should be little need to do this: KARLE is on the periphery of the CUNY HPC network, making login there direct, and&lt;br /&gt;
KARLE shares its HOME directory file system with ANDY, so files created on either system are immediately available on&lt;br /&gt;
the other.&lt;br /&gt;
&lt;br /&gt;
Rather than rewrite portions of the CONSED manual here, users are directed to the manual&#039;s &amp;quot;Quick Tour&amp;quot; section&lt;br /&gt;
here [http://bozeman.mbt.washington.edu/consed/distributions/README.23.0.txt] and asked to walk through some&lt;br /&gt;
of the exercises after logging into KARLE.  If problems or questions come up, please post them to &amp;quot;hpchelp@csi.cuny.edu&amp;quot;.&lt;br /&gt;
The CONSED 23.0 distribution is installed on KARLE in the following directory:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/share/apps/consed/default&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
All the files in the distribution can be found there.&lt;br /&gt;
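For interactive sessions on KARLE, the distribution's executables can be placed on the command search path. This is only a sketch: the bin subdirectory below is an assumption about the layout under /share/apps/consed/default, so verify the actual directory first.

```shell
# Prepend the assumed CONSED bin directory to the search path for this session
export PATH=/share/apps/consed/default/bin:$PATH

# Confirm the directory is now on the search path
echo "$PATH" | grep -q "consed" && echo "CONSED on PATH"
```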
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;CP2K&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CP2K is a program to perform atomistic and molecular simulations of solid state, liquid, molecular, and biological&lt;br /&gt;
systems.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It provides a general framework for different methods such as density functional theory (DFT) using&lt;br /&gt;
a mixed Gaussian and plane waves approach (GPW) and classical pair and many-body potentials. CP2K provides&lt;br /&gt;
state-of-the-art methods for efficient and accurate atomistic simulations. More information about our installation &lt;br /&gt;
can be found here [[CP2K]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;CUFFLINKS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CUFFLINKS assembles transcripts, estimates their abundances, and tests for differential expression and regulation in&lt;br /&gt;
RNA-Seq samples.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It accepts aligned RNA-Seq reads and assembles the alignments into a parsimonious set of transcripts.&lt;br /&gt;
CUFFLINKS then estimates the relative abundances of these transcripts based on how many reads support each one, taking&lt;br /&gt;
into account biases in library preparation protocols.  CUFFLINKS is part of a sequence alignment and analysis tool chain developed&lt;br /&gt;
at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics&lt;br /&gt;
and Computational Biology.  The other tools in this collection, BOWTIE, SAMTOOLS, and TOPHAT are also installed at&lt;br /&gt;
the CUNY HPC Center. Additional information can be found at the CUFFLINKS home page here [http://abacus.gene.ucl.ac.uk/software.html].&lt;br /&gt;
More information about our installation can be found here [[CUFFLINKS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==D== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;DL_POLY&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
DL_POLY is a general purpose molecular dynamics simulation package developed at Daresbury Laboratory by W. Smith, T.R. Forester and I.T. Todorov. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Both serial and parallel versions are available. The original package was developed by the Molecular Simulation Group (now part of the Computational Chemistry Group, MSG) at Daresbury Laboratory under the auspices of the Engineering and Physical Sciences Research Council (EPSRC) for the EPSRC&#039;s Collaborative Computational Project for the Computer Simulation of Condensed Phases ( CCP5). Later developments were also supported by the Natural Environment Research Council through the eMinerals project. The package is the property of the Central Laboratory of the Research Councils, UK. More information about our installation and use can be found here [[DL_POLY]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==E== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;ExaML&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ExaML (Exascale Maximum Likelihood) is an MPI code for phylogenetic inference. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The code is installed only on Penzias and implements the popular RAxML search algorithm for maximum likelihood based inference of phylogenetic trees. &lt;br /&gt;
&lt;br /&gt;
It uses a radically new MPI parallelization approach that yields improved parallel efficiency, in particular on partitioned multi-gene or whole-genome datasets.&lt;br /&gt;
&lt;br /&gt;
When using ExaML please cite the following paper:&lt;br /&gt;
&lt;br /&gt;
Alexey M. Kozlov, Andre J. Aberer, Alexandros Stamatakis: &amp;quot;ExaML Version 3: A Tool for Phylogenomic Analyses on Supercomputers.&amp;quot; Bioinformatics (2015) 31 (15): 2577-2579.&lt;br /&gt;
&lt;br /&gt;
It is up to 4 times faster than RAxML-Light [1].&lt;br /&gt;
&lt;br /&gt;
Like RAxML-Light, ExaML implements checkpointing, SSE3 and AVX vectorization, and memory-saving techniques.&lt;br /&gt;
&lt;br /&gt;
[1] A. Stamatakis, A.J. Aberer, C. Goll, S.A. Smith, S.A. Berger, F. Izquierdo-Carrasco: &amp;quot;RAxML-Light: A Tool for computing TeraByte Phylogenies&amp;quot;, Bioinformatics 2012; doi: 10.1093/bioinformatics/bts309.&lt;br /&gt;
&lt;br /&gt;
The run script for a parallel job is analogous to the one used to run RAxML on Penzias and Andy.&lt;br /&gt;
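Since the ExaML run script resembles the one used for RAxML, the following is a minimal sketch of an MPI batch script. The partition name, task count, and input file names are placeholders, not verified site settings.

```shell
#!/bin/bash
# Sketch of a SLURM batch script for an MPI ExaML run.
# Partition, task count, and file names are assumptions, not verified site settings.
#SBATCH --partition production
#SBATCH --job-name examl_run
#SBATCH --ntasks 8

# Change to the directory the job was submitted from
cd $SLURM_SUBMIT_DIR

# -s: binarized alignment, -t: starting tree, -m: rate heterogeneity model, -n: run name
mpirun -np 8 examl -s alignment.binary -t starting.tree -m GAMMA -n run1
```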
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;ExaBayes&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ExaBayes is a software package for Bayesian tree inference. It is particularly suitable for large-scale analyses on computer clusters. It is installed on the PENZIAS server at the CUNY HPC Center. &lt;br /&gt;
The installed package is the MPI-parallel version. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Availability:&#039;&#039;&#039; PENZIAS&lt;br /&gt;
&#039;&#039;&#039;Module file:&#039;&#039;&#039; exabayes&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Citation&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
Fredrik Ronquist, Maxim Teslenko, Paul van der Mark, Daniel L Ayres, Aaron Darling, Sebastian Höhna, Bret Larget, Liang Liu, Marc a Suchard, and John P Huelsenbeck. MrBayes 3.2: efficient Bayesian phylogenetic inference and model choice across a large model space. Systematic biology, 61(3):539--42, May 2012.&lt;br /&gt;
&lt;br /&gt;
Alexei J Drummond, Marc a Suchard, Dong Xie, and Andrew Rambaut. Bayesian phylogenetics with BEAUti and the BEAST 1.7. Molecular biology and evolution, 29(8):1969--73, August 2012. &lt;br /&gt;
&lt;br /&gt;
Clemens Lakner, Paul van der Mark, John P Huelsenbeck, Bret Larget, and Fredrik Ronquist. Efficiency of Markov chain Monte Carlo tree proposals in Bayesian phylogenetics. Systematic biology, 57(1):86--103, February 2008. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Use:&#039;&#039;&#039; An example SLURM script to run ExaBayes on PENZIAS is given below&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name &amp;lt;name_of_job&amp;gt;&lt;br /&gt;
#SBATCH --nodes 1&lt;br /&gt;
#SBATCH --ntasks 2&lt;br /&gt;
#SBATCH --export=ALL&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to the working directory in SLURM&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
mpirun -np 2 exabayes &amp;lt;input_file&amp;gt; &amp;gt; output_file&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
More information about the application, along with sample workflows, is available on the ExaBayes web site:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://sco.h-its.org/exelixis/web/software/exabayes/manual/index.html#sec-11&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==F== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;FDPPDIV&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
FDPPDiv is a program for estimating divergence times on a fixed, rooted tree topology. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
FDPPDiv offers two alternative approaches to divergence time estimation. &lt;br /&gt;
The DPPDiv part refers to the Dirichlet Process Prior (DPP) model for divergence &lt;br /&gt;
time estimation, and the F prefix (for Fossil) refers to the new Fossil Birth-Death approach. &lt;br /&gt;
More information about our installation can be found here [[FDPPDIV]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==G== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GAMESS-US&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GAMESS is a program for ab initio molecular quantum chemistry.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Briefly, GAMESS can compute SCF wavefunctions including RHF, ROHF, UHF, GVB, and MCSCF. Correlation corrections to these SCF wavefunctions include Configuration Interaction, second-order perturbation theory, and Coupled-Cluster approaches, as well as the Density Functional Theory approximation. Excited states can be computed by CI, EOM, or TD-DFT procedures. Nuclear gradients are available, for automatic geometry optimization, transition state searches, or reaction path following. Computation of the energy hessian permits prediction of vibrational frequencies, with IR or Raman intensities. Solvent effects may be modeled by the discrete Effective Fragment potentials, or continuum models such as the Polarizable Continuum Model. Numerous relativistic computations are available, including infinite order two component scalar corrections, with various spin-orbit coupling options. The Fragment Molecular Orbital method permits many of these sophisticated treatments to be used on very large systems, by dividing the computation into small fragments. Nuclear wavefunctions can also be computed, in VSCF, or with explicit treatment of nuclear orbitals by the NEO code. More information, including code, can be found here [[GAMESS-US]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GARLI&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GARLI is a program that performs phylogenetic inference using the maximum-likelihood criterion.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Several sequence types are supported, including nucleotide, amino acid and codon. Version 2.0 adds support for&lt;br /&gt;
partitioned models and morphology-like data types. It is usable on all operating systems, and is written and&lt;br /&gt;
maintained by Derrick Zwickl at the University of Texas at Austin.  Additional information can be found&lt;br /&gt;
on the GARLI Wiki here [https://www.nescent.org/wg_garli/Main_Page]. More information about our installation &lt;br /&gt;
can be found here [[GARLI]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GAUSS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
An easy-to-use data analysis, mathematical and statistical environment based on the powerful, fast and efficient GAUSS Matrix Programming Language.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GAUSS is used to solve real-world problems and data analysis problems of exceptionally large &lt;br /&gt;
scale. GAUSS is currently available on ANDY. At the CUNY HPC Center&lt;br /&gt;
GAUSS is typically run in serial mode. (Note:  GAUSS should not be confused with the&lt;br /&gt;
computational chemistry application Gaussian.) More information about our installation can &lt;br /&gt;
be found here [[GAUSS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Gaussian09&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Gaussian09 is third-party, commercially licensed software from Gaussian, Inc. It is a set of programs for calculating electronic structure.  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Gaussian09 is available for general use only on ANDY. The Gaussian User Guide can be found at [http://www.gaussian.com]. More information about our installation can be found here [[GAUSSIAN09]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GMP&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
GMP is a library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and &lt;br /&gt;
floating-point numbers. There is no practical limit to the precision except the ones implied by the &lt;br /&gt;
available memory in the machine GMP runs on. GMP has a rich set of functions, and the functions have a &lt;br /&gt;
regular interface. The library is installed on PENZIAS.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Gnuplot&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Gnuplot is a portable command-line driven graphing utility. It is installed on the following systems:&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
:*Karle under /usr/bin/gnuplot&lt;br /&gt;
:*Andy under /share/apps/gnuplot/default/bin/gnuplot&lt;br /&gt;
&lt;br /&gt;
Extensive documentation of gnuplot is available at the [http://www.gnuplot.info/ gnuplot homepage].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GenomePop2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GenomePop2 is a newer and specialized version of the older program GenomePop. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GenomePop2 is designed to manage SNPs under more flexible and useful settings that are controlled by the user.  &lt;br /&gt;
If you need models with more than 2 alleles you should use the older GenomePop version of the program.  &lt;br /&gt;
&lt;br /&gt;
GenomePop2 allows the forward simulation of sequences of biallelic positions. As in the previous version, a number of evolutionary&lt;br /&gt;
and demographic settings are allowed. Several populations under any migration model can be implemented. Each population consists&lt;br /&gt;
of a number N of individuals.  Each individual is represented by one (haploids) or two (diploids) chromosomes with constant or variable&lt;br /&gt;
(hotspots) recombination between binary sites. The fitness model is multiplicative with each derived allele having a multiplicative effect&lt;br /&gt;
of (1-s * h-E) onto the global fitness value. By default E=0 and h=0.5 in diploids, but h=1 in homozygotes or in haploids. Selective nucleotide&lt;br /&gt;
sites undergoing directional selection (positive or negative) in different populations can be defined. In addition, bottlenecks and/or&lt;br /&gt;
population expansion scenarios can be set up by the user for a desired number of generations. Several runs can be executed and&lt;br /&gt;
a sample of user-defined size is obtained for each run and population.  For more detail on how to use GenomePop2, please visit the&lt;br /&gt;
web site here [http://webs.uvigo.es/acraaj/GenomePop2.htm]. More information about our installation can be found here [[GENOMEPOP2]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GROMACS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GROMACS (Groningen Machine for Chemical Simulations)&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GROMACS is a full-featured suite of free software, licensed under the GNU&lt;br /&gt;
General Public License to perform molecular dynamics simulations -- in other words, to simulate the behavior of molecular&lt;br /&gt;
systems with hundreds to millions of particles using Newton&#039;s equations of motion.  It is primarily used for research on&lt;br /&gt;
proteins, lipids, and polymers, but can be applied to a wide variety of chemical and biological research questions.&lt;br /&gt;
&lt;br /&gt;
Details and submission scripts for production runs can be found at:&lt;br /&gt;
http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/gromacs&lt;br /&gt;
Please note that preparing a molecular system for simulation with GROMACS tools cannot be done on the login node. Instead, users must either use their own workstations or the interactive or development queues.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;GPAW&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It uses real-space uniform grids and multigrid methods, atom-centered basis-functions or&lt;br /&gt;
plane-waves. GPAW calculations are controlled through scripts written in the programming language &lt;br /&gt;
Python. GPAW relies on the Atomic Simulation Environment (ASE), which is a Python package&lt;br /&gt;
that helps to describe atoms. The ASE package also handles molecular dynamics, analysis, &lt;br /&gt;
visualization, geometry optimization and more. More information about our installation can &lt;br /&gt;
be found here [[GPAW]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==H==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Hapsembler&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Hapsembler is a haplotype-specific genome assembly toolkit that is designed for genomes that are rich in SNPs and other types of polymorphism. Hapsembler can be used to assemble reads from a variety of platforms including Illumina and Roche/454.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;  Hapsembler is currently installed on the Appel system. To access the Hapsembler binaries, load the hapsembler module with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load hapsembler&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;HOOMD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
HOOMD performs general-purpose particle dynamics simulations, taking advantage of NVIDIA GPUs to attain a level of performance&lt;br /&gt;
equivalent to many processor cores on a fast cluster.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Unlike some other applications in the particle and molecular dynamics space, HOOMD developers have worked to implement &lt;br /&gt;
all of the code&#039;s computationally intensive kernels on the GPU, although currently only single node, single-GPU or &lt;br /&gt;
OpenMP-GPU runs are possible. There is no MPI-GPU or distributed parallel GPU version available at this time.&lt;br /&gt;
&lt;br /&gt;
HOOMD&#039;s object-oriented design patterns make it both versatile and expandable. Various types of potentials, integration methods&lt;br /&gt;
and file formats are currently supported, and more are added with each release. The code is available and open source, so anyone&lt;br /&gt;
can write a plugin or change the source to add additional functionality.  Simulations are configured and run using simple python&lt;br /&gt;
scripts, allowing complete control over the force field choice, integrator, all parameters, how many time steps are run, etc.&lt;br /&gt;
The scripting system is designed to be as simple as possible for the non-programmer.&lt;br /&gt;
&lt;br /&gt;
The HOOMD development effort is led by the Glotzer group at the University of Michigan, but many groups from different universities&lt;br /&gt;
have contributed code that is now part of the HOOMD main package, see the credits page for the full list. The HOOMD website and&lt;br /&gt;
documentation are available here [http://codeblue.umich.edu/hoomd-blue/about.html]. More information about our installation can be&lt;br /&gt;
found here [[HOOMD]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;HOPSPACK&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
HOPSPACK stands for Hybrid Optimization Parallel Search Package, designed to help users solve a wide range of derivative-free optimization problems.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;The latter can be noisy, non-convex, or non-smooth. The basic optimization problem addressed is to minimize an objective function f(x) of n unknowns subject to the constraints: A_I x ≥ b_I, A_E x = b_E, c_I(x) ≥ 0, c_E(x) = 0, l ≤ x ≤ u.&lt;br /&gt;
The first two constraints specify linear inequalities and equalities with coefficient matrices A_I and A_E. The next two constraints describe nonlinear inequalities and equalities captured in the functions c_I(x) and c_E(x). The final constraints denote lower and upper bounds on the variables. HOPSPACK allows variables to be continuous or integer-valued and has provisions for multi-objective optimization problems. In general, the functions f(x), c_I(x), and c_E(x) can be noisy and nonsmooth, although most algorithms perform best on deterministic functions with continuous derivatives.&lt;br /&gt;
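The derivative-free idea behind a GSS-style solver can be illustrated with a minimal compass-search sketch (an illustration of the general technique, not HOPSPACK's implementation; it handles only the unconstrained case):&lt;br /&gt;

```python
def compass_search(f, x0, step=1.0, tol=1e-6, max_iter=10000):
    """Toy compass/generating-set search for unconstrained minimization.

    Probes the objective along +/- coordinate directions, accepts any
    improving point, and halves the step when no direction improves.
    """
    x, fx = list(x0), f(x0)
    for _ in range(max_iter):
        if step < tol:
            break
        improved = False
        for i in range(len(x)):
            for d in (step, -step):      # probe both directions on axis i
                trial = list(x)
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5                   # contract when no direction helps
    return x, fx

best, fbest = compass_search(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2,
                             [0.0, 0.0])
```

Note that only function values of f are used; no derivatives are ever evaluated.&lt;br /&gt;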
&lt;br /&gt;
Users can design and implement their own solver, either by writing their own code or by building on the existing solvers already in the framework. Because all solvers (called citizens) are members of the same global class, they can share assigned resources.   &lt;br /&gt;
The main features of the package are:&lt;br /&gt;
&lt;br /&gt;
*Only function values are required for the optimization.&lt;br /&gt;
*The user must provide a separate program that can evaluate the objective and nonlinear constraint functions at a given point. &lt;br /&gt;
*A robust implementation of the Generating Set Search (GSS) solver is supplied, including the capability to handle linear constraints. &lt;br /&gt;
*Multiple solvers can run simultaneously and are easily configured to share information.&lt;br /&gt;
*Solvers may share a cache of computed function and constraint evaluations to eliminate duplicate work.&lt;br /&gt;
*Solvers can initiate and control sub-problems.&lt;br /&gt;
Continuation -&amp;gt; [[HOPSPACK]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;HONDO PLUS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Hondo Plus is a versatile electronic structure code that combines work from&lt;br /&gt;
the original Hondo application developed by Harry King in the lab of Michel Dupuis&lt;br /&gt;
and John Rys, and that of numerous subsequent contributors. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It is currently distributed from the research lab of Dr. Donald Truhlar at the University &lt;br /&gt;
of Minnesota.  Part of the advantage of Hondo Plus is the availability of source&lt;br /&gt;
implementations of a wide variety of model chemistries developed over its life time&lt;br /&gt;
that researchers can adapt to their particular needs.  The license to use the code requires&lt;br /&gt;
a literature citation which is documented in the Hondo Plus 5.1 manual found&lt;br /&gt;
at:&lt;br /&gt;
&lt;br /&gt;
http://comp.chem.umn.edu/hondoplus/HONDOPLUS_Manual_v5.1.2007.2.17.pdf &lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[HONDO PLUS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;HUMAnN2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
HUMAnN is a pipeline for efficiently and accurately profiling the presence/absence and abundance of microbial pathways in a community from metagenomic or metatranscriptomic sequencing data (typically millions of short DNA/RNA reads). HUMAnN2 is the next generation of HUMAnN (HMP Unified Metabolic Analysis Network). Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/humann2&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==I==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;IMa2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The IMa2 application performs basic ‘Isolation with Migration’ calculations using Bayesian inference and Markov &lt;br /&gt;
chain Monte Carlo methods. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The only major conceptual addition to IMa2 that makes it different from the&lt;br /&gt;
original IMa  program is that it can handle data from multiple populations. This requires that the user &lt;br /&gt;
specify a phylogenetic tree. Importantly, the tree must be rooted, and the sequence in time of internal&lt;br /&gt;
nodes must be known and specified. More information on the IMa2 and IMa can be found in the user&lt;br /&gt;
manual here [http://lifesci.rutgers.edu/%7Eheylab/ProgramsandData/Programs/IMa2/Using_IMa2_8_24_2011.pdf].&lt;br /&gt;
Information about our installation can be found here [[IMA2]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;I-TASSER&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
I-TASSER is a platform for protein structure and function predictions. 3D models are built based on multiple-threading alignments by LOMETS and iterative template fragment assembly simulations; function insights are derived by matching the 3D models with the BioLiP protein function database. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/itasser&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==J==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;JULIA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Julia is a high-level, high-performance dynamic programming language for technical computing, with syntax that is familiar to users of other technical computing environments. Julia is installed on Penzias.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==L==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;LAMARC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
LAMARC is a program which estimates population-genetic parameters such as population size, population growth rate,&lt;br /&gt;
recombination rate, and migration rates.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It approximates a summation over all possible genealogies that could explain&lt;br /&gt;
the observed sample, which may be sequence, SNP, microsatellite, or electrophoretic data.  LAMARC and its sister program&lt;br /&gt;
MIGRATE are successor programs to the older programs Coalesce, Fluctuate, and Recombine, which are no longer being&lt;br /&gt;
supported.  These programs are memory-intensive, but can run effectively on workstations. They are supported on a variety&lt;br /&gt;
of operating systems.  For more detail on LAMARC please visit the website here [http://evolution.genetics.washington.edu/lamarc/index.html],&lt;br /&gt;
read this paper [http://evolution.genetics.washington.edu/lamarc/download/bioinformatics2006-lamarc2.0.pdf], and look&lt;br /&gt;
at the documentation here [http://evolution.genetics.washington.edu/lamarc/documentation/index.html]. More information&lt;br /&gt;
about our installation can be found here [[LAMARC]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;LAMMPS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
LAMMPS is a classical molecular dynamics code that models an ensemble of particles in a liquid, solid, or gaseous state.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It can model atomic, polymeric, biological, metallic, granular, and coarse-grained systems using a variety of force fields and boundary conditions.  &lt;br /&gt;
LAMMPS runs efficiently on single-processor desktop or laptop machines, but is also designed for parallel computers, including clusters with and without GPUs. &lt;br /&gt;
It will run on any parallel machine that compiles C++ and supports the MPI message-passing library. This includes distributed- or shared-memory parallel &lt;br /&gt;
machines and Beowulf-style clusters. LAMMPS can model systems with only a few particles up to millions or billions. LAMMPS is a freely-available open-source &lt;br /&gt;
code, distributed under the terms of the GNU Public License, which means you can use or modify the code however you wish.  LAMMPS is designed to be easy to &lt;br /&gt;
modify or extend with new capabilities, such as new force fields, atom types, boundary conditions, or diagnostics. A complete description of LAMMPS can be found &lt;br /&gt;
in its on-line manual here [http://lammps.sandia.gov/doc/Manual.html] or from the full PDF manual here [http://lammps.sandia.gov/doc/Manual.pdf]. Information&lt;br /&gt;
about our installation can be found here [[LAMMPS]].&lt;br /&gt;
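The kind of time integration at the heart of MD codes such as LAMMPS can be illustrated with a toy velocity-Verlet integrator (a sketch of the general method only, not LAMMPS code), here applied to a 1-D harmonic oscillator:&lt;br /&gt;

```python
def velocity_verlet(x, v, force, dt, steps, mass=1.0):
    """Toy 1-D velocity-Verlet integrator (illustration, not LAMMPS code)."""
    a = force(x) / mass
    for _ in range(steps):
        x = x + v * dt + 0.5 * a * dt * dt   # position update
        a_new = force(x) / mass
        v = v + 0.5 * (a + a_new) * dt       # velocity update, averaged acceleration
        a = a_new
    return x, v

# Harmonic oscillator F = -k*x with k = 1, started at x = 1 with v = 0;
# total energy should stay very close to its initial value of 0.5.
x, v = velocity_verlet(1.0, 0.0, lambda x: -x, dt=0.01, steps=1000)
energy = 0.5 * v * v + 0.5 * x * x
```

Velocity Verlet is popular in MD precisely because of this good long-time energy conservation.&lt;br /&gt;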
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;LS-DYNA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From its early development in the 1970s, LS-DYNA has evolved into a general purpose material&lt;br /&gt;
stress, collision, and crash analysis program with many built-in material and structural element&lt;br /&gt;
models. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In recent years, the code has also been adapted for both OpenMP and MPI parallel execution&lt;br /&gt;
on a variety of platforms.  The most recent version, LS-DYNA 7.1.2, is installed on &lt;br /&gt;
ANDY at the CUNY HPC Center under an academic license held by the City College of New York.&lt;br /&gt;
Use of this license for work that is commercial in any way is prohibited.&lt;br /&gt;
&lt;br /&gt;
Details on LS-DYNA&#039;s use, input deck construction, and execution options can be found in the LS-DYNA&lt;br /&gt;
manual here [http://ftp.lstc.com/user/manuals/ls-dyna_971_manual_k_rev1.pdf]. All files related&lt;br /&gt;
to the HPC Center installation of version 971 (executables and example inputs) are located in:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
/share/apps/lsdyna/default/[bin,examples]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[LSDYNA]].&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==M==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MAGMA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
MAGMA is a library similar to LAPACK but for hybrid architectures. MAGMA provides implementations for CUDA, Intel Xeon Phi, and OpenCL. &lt;br /&gt;
On CUNY HPCC systems, MAGMA is installed in its CUDA variant only on Penzias.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MATHEMATICA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
“Mathematica” is a fully integrated technical computing system that combines fast, high-precision numerical and symbolic computation with data visualization and programming capabilities. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Mathematica is currently installed on the CUNY HPC Center&#039;s ANDY cluster (andy.csi.cuny.edu) and KARLE standalone server (karle.csi.cuny.edu). The basics of running Mathematica on CUNY HPC systems are presented here. Additional information on how to use Mathematica can be found at http://www.wolfram.com/learningcenter/&lt;br /&gt;
&lt;br /&gt;
More information is available in this wiki, find it here [[MATHEMATICA]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MATLAB&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The MATLAB high-performance language for technical computing&lt;br /&gt;
integrates computation, visualization, and programming in an&lt;br /&gt;
easy-to-use environment where problems and solutions are expressed in&lt;br /&gt;
familiar mathematical notation.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Typical uses include:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Math and computation&lt;br /&gt;
&lt;br /&gt;
Algorithm development&lt;br /&gt;
&lt;br /&gt;
Data acquisition&lt;br /&gt;
&lt;br /&gt;
Modeling, simulation, and prototyping&lt;br /&gt;
&lt;br /&gt;
Data analysis, exploration, and visualization&lt;br /&gt;
&lt;br /&gt;
Scientific and engineering graphics&lt;br /&gt;
&lt;br /&gt;
Application development, including graphical user interface building&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[MATLAB]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MET (Model Evaluation Tools)&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MET was developed by the National Center for Atmospheric Research (NCAR) Developmental Testbed Center (DTC) through the generous support of the U.S. Air Force Weather Agency (AFWA) and the National Oceanic and Atmospheric Administration (NOAA).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;MET is designed to be a highly-configurable, state-of-the-art suite of verification tools. It was developed using output from the Weather Research and Forecasting (WRF) modeling system but may be applied to the output of other modeling systems as well.&lt;br /&gt;
&lt;br /&gt;
MET provides a variety of verification techniques, including:&lt;br /&gt;
&lt;br /&gt;
*Standard verification scores comparing gridded model data to point-based observations&lt;br /&gt;
* Standard verification scores comparing gridded model data to gridded observations&lt;br /&gt;
*Spatial verification methods comparing gridded model data to gridded observations using neighborhood, object-based, and intensity-scale decomposition approaches&lt;br /&gt;
*Probabilistic verification methods comparing gridded model data to point-based or gridded observations&lt;br /&gt;
&lt;br /&gt;
More information about use and set-up can be found here [[MET]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Migrate&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Migrate estimates population parameters, effective population sizes and migration rates&lt;br /&gt;
of n populations, using genetic data.  It uses a coalescent theory approach taking into&lt;br /&gt;
account the history of mutations and the uncertainty of the genealogy.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;The estimates of the parameter values are achieved by either a maximum likelihood (ML) approach or Bayesian&lt;br /&gt;
inference (BI). Migrate&#039;s output is presented in a text file and in a PDF file. The PDF file&lt;br /&gt;
eventually will contain all possible analyses including histograms of posterior distributions.&lt;br /&gt;
More information about our installation can be found here [[MIGRATE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MPFR&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The MPFR library is a C library for multiple-precision floating-point computations with correct rounding. MPFR has been continuously supported by &lt;br /&gt;
INRIA, and the current main authors come from the Caramel and AriC project teams at Loria (Nancy, France) and LIP (Lyon, France), respectively; see &lt;br /&gt;
more on the credits page.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
MPFR is based on the GMP multiple-precision library. The main goal of MPFR is to provide a library for multiple-precision &lt;br /&gt;
floating-point computation which is both efficient and has well-defined semantics. It copies the good ideas from the ANSI/IEEE-754 standard for &lt;br /&gt;
double-precision floating-point arithmetic (53-bit significand). The library is installed on PENZIAS.&lt;br /&gt;
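The idea of computing at a user-chosen precision with correct rounding can be illustrated, by analogy, with Python's standard decimal module (this is not MPFR; an actual MPFR program would be written in C against the mpfr API):&lt;br /&gt;

```python
from decimal import Decimal, getcontext

# Analogy only: like MPFR, the decimal module lets you select a working
# precision and correctly rounds basic operations at that precision.
getcontext().prec = 40              # 40 significant digits
root2 = Decimal(2).sqrt()           # correctly rounded square root of 2
```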
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MRBAYES&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MrBayes is a program for the Bayesian estimation of phylogeny.  Bayesian inference of&lt;br /&gt;
phylogeny is based upon a quantity called the posterior probability distribution of trees,&lt;br /&gt;
which is the probability of a tree conditioned on certain observations.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The conditioning is&lt;br /&gt;
accomplished using Bayes&#039;s theorem. The posterior probability distribution of trees is&lt;br /&gt;
impossible to calculate analytically; instead, MrBayes uses a simulation technique called&lt;br /&gt;
Markov chain Monte Carlo (or MCMC) to approximate the posterior probabilities of trees.&lt;br /&gt;
More information about our installation can be found here [[MRBAYES]]&lt;br /&gt;
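The MCMC approximation can be sketched with a toy Metropolis sampler (an illustration of the general technique, not MrBayes itself; here it samples a one-dimensional standard normal rather than a posterior over trees):&lt;br /&gt;

```python
import math
import random

def metropolis(log_target, x0, n_samples, step=1.0, seed=42):
    """Toy Metropolis sampler: approximate draws from an unnormalized
    target density given only its log value (illustration only)."""
    rng = random.Random(seed)
    x, lp = x0, log_target(x0)
    samples = []
    for _ in range(n_samples):
        prop = x + rng.gauss(0.0, step)            # symmetric proposal
        lp_prop = log_target(prop)
        if math.log(rng.random()) < lp_prop - lp:  # accept with prob min(1, ratio)
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# Standard normal target: log density is -x^2/2 up to a constant.
samples = metropolis(lambda x: -0.5 * x * x, 0.0, 20000)
mean = sum(samples) / len(samples)
```

The normalizing constant cancels in the acceptance ratio, which is why MCMC can work with posteriors that are impossible to calculate analytically.&lt;br /&gt;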
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;msABC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
msABC is a program for simulating various neutral evolutionary demographic scenarios&lt;br /&gt;
based on the software ms (Hudson 2002). msABC extends ms, calculating a multitude of&lt;br /&gt;
summary statistics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Therefore, msABC is suitable for performing the sampling step of an&lt;br /&gt;
Approximate Bayesian Computation analysis (ABC), under various neutral demographic&lt;br /&gt;
models. The main advantages of msABC are (i) use of various prior distributions, such as&lt;br /&gt;
uniform, Gaussian, log-normal, gamma, (ii) implementation of a multitude of summary statistics&lt;br /&gt;
for one or more populations, (iii) efficient implementation, which allows the analysis of&lt;br /&gt;
hundreds of loci and chromosomes even on a single computer, (iv) extended flexibility, such&lt;br /&gt;
as simulation of loci of variable size and simulation of missing data.&lt;br /&gt;
More information about our installation can be found here [[msABC]]&lt;br /&gt;
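The ABC sampling step that msABC supports can be sketched as simple rejection sampling (a toy illustration with a hypothetical one-parameter model, not msABC itself):&lt;br /&gt;

```python
import random

def abc_rejection(observed_stat, draw_prior, simulate_stat,
                  n_draws=20000, tol=0.1, seed=1):
    """Toy ABC rejection sampler (illustration only): keep prior draws
    whose simulated summary statistic lands close to the observed one."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        theta = draw_prior(rng)              # draw parameter from the prior
        stat = simulate_stat(theta, rng)     # simulate data, compute summary
        if abs(stat - observed_stat) < tol:  # accept if close to observation
            accepted.append(theta)
    return accepted

# Hypothetical model: summary statistic ~ Normal(theta, 0.1),
# uniform prior on theta over [0, 4], observed statistic 2.0.
posterior = abc_rejection(
    2.0,
    draw_prior=lambda rng: rng.uniform(0.0, 4.0),
    simulate_stat=lambda theta, rng: theta + rng.gauss(0.0, 0.1),
)
```

The accepted draws approximate the posterior; in msABC the simulation step is the coalescent simulator ms and the summaries are population-genetic statistics.&lt;br /&gt;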
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;MSMS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MSMS is a tool to generate sequence samples under both neutral models and single locus selection models.&lt;br /&gt;
MSMS permits the full range of demographic models provided by its relative MS (Hudson, 2002).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
In particular, it allows for multiple demes with arbitrary migration patterns, population growth and decay in each deme, and&lt;br /&gt;
for population splits and mergers. Selection (including dominance) can depend on the deme and also change&lt;br /&gt;
with time.&lt;br /&gt;
More information about our installation can be found here [[MSMS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==N==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;NAMD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NAMD is a parallel molecular dynamics code designed for high-performance simulation&lt;br /&gt;
of large biomolecular systems. [http://www.ks.uiuc.edu/Research/namd].&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The main server for molecular dynamics calculations is PENZIAS, which supports both GPU and non-GPU versions of NAMD.&lt;br /&gt;
However, MPI-only (no GPU support) parallel versions of NAMD are also installed on SALK and ANDY. &lt;br /&gt;
More information about our installation can be found here [[NAMD]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Network Simulator-2 (NS2)&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NS2 is a discrete event simulator targeted at networking research. NS2 provides&lt;br /&gt;
substantial support for simulation of TCP, routing, and multicast protocols over&lt;br /&gt;
wired and wireless (local and satellite) networks.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It is installed on BOB at the CUNY HPC Center. For more detailed information look [http://www.isi.edu/nsnam/ns/ here].&lt;br /&gt;
More information about our installation can be found here [[NS2]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;NWChem&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NWChem is an ab initio computational chemistry software package which also includes molecular dynamics (MM, MD) and coupled, quantum mechanical and molecular dynamics functionality (QM-MD).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
NWChem has been developed by the Molecular Sciences Software group at the Department of Energy&#039;s EMSL. The software is available on PENZIAS and ANDY.&lt;br /&gt;
More information about our installation can be found here [[NWChem]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==O== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Octopus&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Octopus is a pseudopotential real-space package aimed at the simulation of the electron-ion dynamics of one-, two-, and three-dimensional finite systems subject to time-dependent electromagnetic fields.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The program is based on time-dependent density-functional theory (TDDFT) in the Kohn-Sham scheme. All quantities are expanded in a regular mesh in real space, and the simulations are performed in real time. The program has been successfully used to calculate linear and non-linear absorption spectra, harmonic spectra, laser induced fragmentation, etc. of a variety of systems.&lt;br /&gt;
More information about our installation can be found here [[OCTOPUS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;OpenMM&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenMM is both a library and a stand-alone application which provides tools for modern molecular modeling simulation. As a library it can be hooked into any code, allowing that code to do molecular modeling with minimal extra coding.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Moreover, OpenMM has a strong emphasis on hardware acceleration via GPUs, providing not just a consistent API but also significantly better performance than most comparable codes. OpenMM was developed as part of the Physics-Based Simulation project led by Prof. Pande.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;OpenFOAM&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenFOAM is first and foremost a library that users may incorporate into their own codes. OpenFOAM is installed on PENZIAS.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
More information about our installation can be found here [[OpenFOAM]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;OpenSees&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenSees, the Open System for Earthquake Engineering Simulation, is an object-oriented, open source software framework.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It allows users to create both serial and parallel finite element computer applications for simulating the response of structural and geotechnical systems subjected to earthquakes and other hazards. OpenSees is primarily written in C++ and uses several Fortran and C numerical libraries for linear equation solving, and material and element routines. The software is installed on PENZIAS.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;ORCA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ORCA is an electronic structure program capable of carrying out geometry optimizations and of predicting a large number of spectroscopic parameters at different levels of theory.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Besides the use of Hartree-Fock theory, density functional theory (DFT), and semiempirical methods, high-level ab initio quantum chemical methods, based on the configuration interaction and coupled cluster methods, are included in ORCA to an increasing degree.&lt;br /&gt;
More information about our installation can be found here [[ORCA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==P== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;ParGAP&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ParGAP is built on top of the GAP system. The latter is a system for computational discrete algebra, with particular emphasis on Computational Group Theory. GAP provides a programming language, a library of thousands of functions implementing algebraic algorithms written in the GAP language, as well as large data libraries of algebraic objects.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The ParGAP (Parallel GAP) package itself provides a way of writing parallel programs using the GAP language. Former names of the package were ParGAP/MPI and GAP/MPI; the word MPI refers to Message Passing Interface, a well-known standard for parallelism. ParGAP is based on the MPI standard, and this distribution includes a subset implementation of MPI, to provide a portable layer with a high level interface to BSD sockets.&lt;br /&gt;
More information about our installation can be found here [[ParGAP]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;POPABC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PopABC is a computer package to estimate historical demographic parameters of closely related species/populations (e.g. population size, migration rate, mutation rate, recombination rate, splitting events) within an isolation-with-migration model.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The software performs coalescent simulation in the framework of approximate Bayesian computation (ABC; Beaumont et al., 2002). PopABC can also be used to perform Bayesian model choice to discriminate between different demographic scenarios. The program can be used either for research or for education and teaching purposes. Further details and a manual can be found at the POPABC website here [http://code.google.com/p/popabc]&lt;br /&gt;
More information about our installation can be found here [[POPABC]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;PHOENICS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PHOENICS is an integrated Computational Fluid Dynamics (CFD) package for the preparation, simulation, and visualization of&lt;br /&gt;
processes involving fluid flow, heat or mass transfer, chemical reaction, and/or combustion in engineering equipment, building&lt;br /&gt;
design, and the environment.  More detail is available at the CHAM website, here http://www.cham.co.uk. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Although we expect most users to pre- and post-process their jobs on office-local clients, the CUNY HPC Center has installed&lt;br /&gt;
the Unix version of the &#039;&#039;entire&#039;&#039; PHOENICS package on ANDY.   PHOENICS is installed in /share/apps/phoenics/default where all&lt;br /&gt;
the standard PHOENICS directories are located (d_allpro, d_earth, d_enviro, d_photo, d_priv1, d_satell, etc.).  Of particular interest&lt;br /&gt;
on ANDY is the MPI parallel version of the &#039;earth&#039; executable &#039;parexe&#039; which makes full use of the parallel processing power of the &lt;br /&gt;
ANDY cluster for larger individual jobs.  While the parallel scaling properties of PHOENICS jobs will vary depending on the job size,&lt;br /&gt;
processor type, and the cluster interconnect, larger workloads will generally scale and run efficiently on 8 to 32 processors,&lt;br /&gt;
while smaller problems will scale efficiently only up to about 4 processors.  More detail on parallel PHOENICS is available at&lt;br /&gt;
http://www.cham.co.uk/products/parallel.php.   Aside from the tightly coupled MPI parallelism of &#039;parexe&#039;, users can run multiple&lt;br /&gt;
instances of the non-parallel modules on ANDY (including the serial &#039;earexe&#039; module) when a parametric approach can be used&lt;br /&gt;
to solve their problems.&lt;br /&gt;
More information about our installation can be found here [[PHOENICS]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;PHRAP-PHRED&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PHRAP and PHRED are part of the DNA sequence analysis tool set that also includes the programs&lt;br /&gt;
CROSSMATCH and SWAT.  These tools are described in detail here [http://www.phrap.org/phredphrapconsed.html],&lt;br /&gt;
but a brief description of both, extracted from their manuals, follows.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
PHRED and PHRAP (along with CONSED) can be used for both small sequence assemblies and larger shotgun analyses. This makes the&lt;br /&gt;
tools a perhaps under-utilized option for smaller, non-genomics groups.  Some variables may need to be adjusted,&lt;br /&gt;
particularly in CONSED, but researchers who have multiple sequences from a small locus can use the &lt;br /&gt;
suite, starting from their chromatogram files.  &lt;br /&gt;
More information about our installation can be found here [[PHRAP-PHRED]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;PyRAD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Reduced-representation genomic sequence data (e.g., RADseq, GBS, ddRAD) are commonly used to study population-level research questions and consequently most software packages for assembling or analyzing such data are designed for sequences with little variation across samples.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Phylogenetic analyses typically include species with deeper divergence times (more variable loci across samples) and thus a different approach to clustering and identifying orthologs will perform better. pyRAD is intended for use with any type of restriction-site associated DNA. It currently supports RAD, ddRAD, PE-ddRAD, GBS, PE-GBS, EzRAD, PE-EzRAD, 2B-RAD, nextRAD, and can be extended to other types.&lt;br /&gt;
More information about our installation can be found here [[PyRAD]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Python&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Python is a programming language that lets you work more quickly and integrate your systems more effectively. You can learn to use Python and see almost immediate gains in productivity and lower maintenance costs. [http://www.python.org/]&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
There are two supported versions installed on the ANDY system: &lt;br /&gt;
&lt;br /&gt;
* Python 3.1.3 located under /share/apps/python/3.1.3/bin&lt;br /&gt;
* Python 2.7.3 located under /share/apps/epd/7.3-2/bin&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[PYTHON]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Installing Python packages&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Users may install Python packages/modules in their own space.  Many packages available in Python repositories can be installed easily with the pip package manager, which is available in all of the Anaconda and Miniconda builds.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Users must remember that running pip without first loading a Python module will install packages against the system Python, which matches only the login node. The Python interpreters available on all nodes (after loading a module) are installed under /share/usr/compilers/python. It is therefore important to follow the procedure outlined below when installing packages in user space. The example demonstrates how users can install the package &amp;quot;guppy&amp;quot; in their own space:&lt;br /&gt;
&lt;br /&gt;
For Python 2.7.13 in the Anaconda build:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load python/2.7.13_anaconda&lt;br /&gt;
pip install guppy --user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For Python 3.6.0 in the Anaconda build:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load python/3.6.0_anaconda&lt;br /&gt;
pip install guppy --user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For Python 2.7.13 in Miniconda 2:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load python/miniconda2&lt;br /&gt;
pip install guppy --user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For Python 3.6.0 in Miniconda 3:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load python/miniconda3&lt;br /&gt;
pip install guppy --user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check whether the package installed properly, type:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
pip list | grep guppy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
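As an additional sanity check, the package can be imported directly (shown here for guppy under the Python 2.7.13 Anaconda build; substitute whichever module was loaded for the install):&lt;br /&gt;

```shell
# Load the same Python module that was used for the install,
# then try importing the package; an ImportError means the
# install did not land where this interpreter looks.
module load python/2.7.13_anaconda
python -c "import guppy; print(guppy.__name__)"
```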
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==Q== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;QIIME&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
QIIME (pronounced &amp;quot;chime&amp;quot;) stands for Quantitative Insights Into Microbial Ecology. QIIME is a pipeline application that uses numerous third-party applications.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
QIIME takes users from their raw sequencing output through initial analyses such as OTU picking, taxonomic assignment, and construction of phylogenetic trees from representative sequences of OTUs, and through downstream statistical analysis, visualization, and production of publication-quality graphics.&lt;br /&gt;
More information about our installation can be found here [[QIIME]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==R== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;R&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
R is a free software environment for statistical computing and graphics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:15px;&amp;quot;&amp;gt;General Notes&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The R language has become a de facto standard among statisticians for the development of statistical software, and is widely used for statistical software development and data analysis. R is available on the following HPCC servers: Karle, Penzias, Appel, and Andy. Karle is the only machine where R can be used without submitting jobs to the SLURM manager. On all other systems users must submit their R jobs via the SLURM batch scheduler.&lt;br /&gt;
More information about our installation can be found here [[R]]&lt;br /&gt;
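Submitting an R job on those systems means wrapping the R call in a SLURM batch script. The sketch below is illustrative only: the module name, resource requests, and script name are assumptions that must be adapted to the actual installation (check &#039;&#039;module avail&#039;&#039; for the real module name):&lt;br /&gt;

```shell
#!/bin/bash
# Illustrative SLURM batch script for a serial R job.
#SBATCH --job-name=r-example
#SBATCH --ntasks=1          # serial job: one task
#SBATCH --mem=4G            # adjust to the job's needs
#SBATCH --time=01:00:00     # wall-clock limit

module load r               # assumed module name; verify with `module avail`
Rscript my_analysis.R       # placeholder script; runs R non-interactively
```

The script would then be submitted with &#039;&#039;sbatch&#039;&#039; and its queue state checked with &#039;&#039;squeue&#039;&#039;.&lt;br /&gt;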
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;RAXML&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Randomized Axelerated Maximum Likelihood (RAxML) is a program for sequential and parallel&lt;br /&gt;
maximum likelihood based inference of large phylogenetic trees.  It is a descendant of fastDNAml,&lt;br /&gt;
which in turn was derived from Joe Felsenstein&#039;s DNAml, which is part of the PHYLIP package.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
RAxML is installed at the CUNY HPC Center on ANDY.  Multiple versions are available, in both serial and MPI-parallel form.  The MPI-parallel version should be run on four or more cores; it is also installed on PENZIAS. &lt;br /&gt;
More information about our installation can be found here [[RAXML]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==S== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;SAGE&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Sage can be used to study elementary and advanced, pure and applied mathematics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
This includes a huge range of mathematics, including basic algebra, calculus, elementary to very&lt;br /&gt;
advanced number theory, cryptography, numerical computation, commutative algebra, group&lt;br /&gt;
theory, combinatorics, graph theory, exact linear algebra and much more.&lt;br /&gt;
More information about our installation can be found here [[SAGE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;SAMTOOLS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
SAMTOOLS provide various utilities for manipulating alignments in the SAM format, including sorting,&lt;br /&gt;
merging, indexing and generating alignments in a per-position format.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
SAM (Sequence Alignment/Map) is a generic format for storing large nucleotide sequence alignments.  SAM is compact and&lt;br /&gt;
aims to be a format that:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Is flexible enough to store all the alignment information generated by various alignment programs;&lt;br /&gt;
&lt;br /&gt;
Is simple enough to be easily generated by alignment programs or converted from existing formats;&lt;br /&gt;
&lt;br /&gt;
Allows most operations on the alignment to work without loading the whole alignment into memory;&lt;br /&gt;
&lt;br /&gt;
Allows the file to be indexed by genomic position to efficiently retrieve all reads aligning to a locus.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[SAMTOOLS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;SAS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
SAS (pronounced &amp;quot;sass&amp;quot;, originally Statistical Analysis System) is an integrated system of software products provided by SAS Institute Inc.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It enables the programmer to perform:&lt;br /&gt;
:*data entry, retrieval, management, and mining&lt;br /&gt;
:*report writing and graphics&lt;br /&gt;
:*statistical analysis&lt;br /&gt;
:*business planning, forecasting, and decision support&lt;br /&gt;
:*operations research and project management&lt;br /&gt;
:*quality improvement&lt;br /&gt;
:*applications development&lt;br /&gt;
:*data warehousing (extract, transform, load)&lt;br /&gt;
:*platform-independent and remote computing&lt;br /&gt;
More information about our installation can be found here [[SAS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Stata/MP&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Stata is a complete, integrated statistical package that provides tools for data analysis, data management, and graphics. Stata/MP takes advantage of multiprocessor computers. The CUNY HPC Center is licensed to use Stata on up to 8 cores. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Currently Stata/MP is available for users on Karle (karle.csi.cuny.edu). &lt;br /&gt;
More information about our installation can be found here [[STATA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Structurama&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Structurama is a program for inferring population structure from genetic data. The program assumes that the sampled loci&lt;br /&gt;
are in linkage equilibrium and that the allele frequencies for each population are drawn from a Dirichlet probability distribution. Two different models for population structure are implemented.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
First, Structurama offers the method of Pritchard et al. (2000) in which the number of populations is considered fixed. The program also allows the number of populations to be a random variable following a Dirichlet process prior (Pella and Masuda, 2006; Huelsenbeck and Andolfatto, 2007).&lt;br /&gt;
More information about our installation can be found here [[STRUCTURAMA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Structure&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The program Structure is a free software package for using multi-locus genotype data to investigate&lt;br /&gt;
population structure.  Its uses include inferring the presence of distinct populations, assigning individuals&lt;br /&gt;
to populations, studying hybrid zones, identifying migrants and admixed individuals, and estimating&lt;br /&gt;
population allele frequencies in situations where many individuals are migrants or admixed.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;It can be applied to most of the commonly used genetic markers, including SNPs, microsatellites, RFLPs, and AFLPs. More detailed information about Structure can be found at the web site here [http://pritch.bsd.uchicago.edu/structure.html]. Structure is installed on ANDY at the CUNY HPC Center.  Structure is a serial program. &lt;br /&gt;
More information about our installation can be found here [[STRUCTURE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==T== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Thrust Library (CUDA)&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Thrust is a C++ template library for CUDA based on the Standard Template Library (STL). Thrust allows you&lt;br /&gt;
to implement high performance parallel applications with minimal programming effort through a high-level&lt;br /&gt;
interface that is fully interoperable with CUDA C.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Thrust is now integrated into the default&lt;br /&gt;
CUDA distribution. The HPC Center is currently running CUDA as the default on PENZIAS, which includes the &lt;br /&gt;
Thrust library. More information about our installation can be found here [[THRUST]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;TOPHAT&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
TOPHAT is a fast splice junction mapper for RNA-Seq reads. It aligns RNA-Seq reads to mammalian-sized&lt;br /&gt;
genomes using the ultra high-throughput short read aligner Bowtie, and then analyzes the mapping results&lt;br /&gt;
to identify splice junctions between exons.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
TOPHAT is part of a sequence alignment and analysis tool chain developed at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics and Computational Biology.&lt;br /&gt;
More information about our installation can be found here [[TOPHAT]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Trinity&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Trinity, developed at the Broad Institute and the Hebrew University of Jerusalem, represents a novel method for the efficient and robust de novo reconstruction of transcriptomes from RNA-seq data.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Trinity combines three independent software modules: Inchworm, Chrysalis, and Butterfly, applied sequentially to process large volumes of RNA-seq reads. Trinity partitions the sequence data into many individual de Bruijn graphs, each representing the transcriptional complexity at a given gene or locus, and then processes each graph independently to extract full-length splicing isoforms and to tease apart transcripts derived from paralogous genes.&lt;br /&gt;
More information about our installation can be found here [[TRINITY]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==U== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;USEARCH&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
USEARCH is a unique sequence analysis tool with thousands of users world-wide.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
USEARCH offers search and clustering algorithms that are often orders of magnitude faster than BLAST. &lt;br /&gt;
More information about our installation can be found here [[USEARCH]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==V== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;VELVET&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Velvet is a set of algorithms for &#039;&#039;de novo&#039;&#039; short read assembly using de Bruijn graphs. It was developed at the European Bioinformatics Institute, Cambridge, UK. More information about our installation can be found here [[VELVET]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;VSEARCH&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
VSEARCH is an open-source alternative to USEARCH.  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
VSEARCH stands for vectorized search, as the tool takes advantage of parallelism in the form of SIMD vectorization as well as multiple threads to perform accurate alignments at high speed. VSEARCH uses an optimal global aligner (full dynamic programming Needleman-Wunsch), in contrast to USEARCH which by default uses a heuristic seed and extend aligner. This usually results in more accurate alignments and overall improved sensitivity (recall) with VSEARCH, especially for alignments with gaps. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Additional details on VSEARCH can be found at: [https://github.com/torognes/vsearch this link]&lt;br /&gt;
&lt;br /&gt;
VSEARCH is installed on the PENZIAS HPC cluster. To start using VSEARCH, load the corresponding module first:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load vsearch  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
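Once the module is loaded, a typical clustering run looks like the sketch below. The file names are placeholders and the identity threshold should be tuned to the data set; the options shown are standard VSEARCH options:&lt;br /&gt;

```shell
# Cluster reads at 97% identity; reads.fasta is a placeholder input.
vsearch --cluster_fast reads.fasta \
        --id 0.97 \
        --centroids centroids.fasta \
        --uc clusters.uc
```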
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;VMD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
VMD is a molecular visualization program for displaying, animating, and analyzing large biomolecular systems using 3-D graphics and built-in scripting.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It was developed by The Theoretical and Computational Biophysics Group at the University of Illinois. It is documented on the [http://www.ks.uiuc.edu/Research/vmd/ TCB&#039;s homepage].&lt;br /&gt;
&lt;br /&gt;
VMD is installed on Karle. To use it from the command line, log in to Karle as usual and start VMD by typing &amp;quot;vmd&amp;quot; followed by return, or alternatively use the full path: &lt;br /&gt;
&amp;quot;/share/apps/vmd/default/bin/vmd&amp;quot;&lt;br /&gt;
&lt;br /&gt;
In order to use VMD in GUI mode, log in to Karle with the -X option (see [http://wiki.csi.cuny.edu/cunyhpc/index.php/Main_Page#X11_Forwarding_or_Tunneling this article] &lt;br /&gt;
for details) and start VMD as described above.&lt;br /&gt;
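Putting the steps together, a GUI session might be started as follows (the username is a placeholder):&lt;br /&gt;

```shell
# Enable X11 forwarding on login, then launch VMD.
ssh -X your_user@karle.csi.cuny.edu
vmd                          # or: /share/apps/vmd/default/bin/vmd
```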
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==W== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;WRF&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The Weather Research and Forecasting (WRF) model is a computer program used for both weather&lt;br /&gt;
forecasting and weather research.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It was created through a partnership that includes the National Oceanic and Atmospheric&lt;br /&gt;
Administration (NOAA), the National Center for Atmospheric Research (NCAR), and more than 150 other organizations&lt;br /&gt;
and universities in the United States and abroad. WRF is the latest numerical model and application to be adopted by NOAA&#039;s&lt;br /&gt;
National Weather Service as well as the U.S. military and private meteorological services. It is also being adopted by&lt;br /&gt;
government and private meteorological services worldwide. More information about our installation can be found here [[WRF]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
==X== &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot;&amp;gt;Xmgrace&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Grace is a WYSIWYG 2D plotting tool for the X Window System and M*tif. Xmgrace is developed at the Plasma Laboratory, Weizmann Institute of Science. More information about its capabilities can be found at the web page http://plasma-gate.weizmann.ac.il/Grace/&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Grace is installed on Karle. To use it from the command line, log in to Karle as usual and start Grace by typing &amp;quot;xmgrace&amp;quot; followed by Return, or use the full path: &amp;quot;/share/apps/xmgrace/default/grace/bin/xmgrace&amp;quot;&lt;br /&gt;
To use Grace in GUI mode, log in to Karle with the -X option (see [http://wiki.csi.cuny.edu/cunyhpc/index.php/Main_Page#X11_Forwarding_or_Tunneling this article]&lt;br /&gt;
for details) and start Xmgrace as described above.&lt;br /&gt;
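As a sketch of the steps above (the hostname is an illustrative assumption; the batch flags are standard Grace options, but verify them with &amp;quot;xmgrace -help&amp;quot; on Karle):&lt;br /&gt;

```shell
# Log in with X11 forwarding for the Grace GUI (hostname is an
# assumption -- use the actual Karle login address)
ssh -X username@karle.csi.cuny.edu

# Start Grace interactively, via PATH or the full install path
xmgrace || /share/apps/xmgrace/default/grace/bin/xmgrace

# Grace can also render a plot non-interactively, e.g. to PostScript
# (standard Grace batch flags; verify with "xmgrace -help")
xmgrace -hardcopy -printfile plot.ps data.dat
```
&lt;br /&gt;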
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Applications&amp;diff=163</id>
		<title>Applications</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Applications&amp;diff=163"/>
		<updated>2022-11-07T18:40:08Z</updated>

		<summary type="html">&lt;p&gt;James: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div class=&amp;quot;noautonum&amp;quot;&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Application&lt;br /&gt;
!Installed Version&lt;br /&gt;
!Current Version&lt;br /&gt;
!Dependencies&lt;br /&gt;
|-&lt;br /&gt;
|ABINIT&lt;br /&gt;
|8.2.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ASE&lt;br /&gt;
|3.18.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|G-PhoCS&lt;br /&gt;
|1.3&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|GMP&lt;br /&gt;
|6.1.2-GCCcore-6.4.0/ 7.3.0/ 8.2.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|GPAW&lt;br /&gt;
|19.8.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|Gerris&lt;br /&gt;
|20131206&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|HDF5&lt;br /&gt;
|1.8.17/1.10.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|LAME&lt;br /&gt;
|3.100&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|XML-Parser&lt;br /&gt;
|2.44_01&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|abyss&lt;br /&gt;
|1.3.7 / 1.5.7&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|adcirc&lt;br /&gt;
|50_99_07&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|adda&lt;br /&gt;
|1.2.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|anvio&lt;br /&gt;
|2.0.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|armadillo&lt;br /&gt;
|9.2.7 / 9.200.7&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|arpack&lt;br /&gt;
|3.1.5&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|augustus&lt;br /&gt;
|3.2.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|autodock&lt;br /&gt;
|4.2.6&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|autodock_vina&lt;br /&gt;
|1.1.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bamm&lt;br /&gt;
|2.3.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bamova&lt;br /&gt;
|1.02&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bamtools&lt;br /&gt;
|2.30 / 2.5.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|basilisk&lt;br /&gt;
|v2019&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bayescan&lt;br /&gt;
|2.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|beast&lt;br /&gt;
|1.8.4 / 2.4.6&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|beast2&lt;br /&gt;
|2.6.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bedops&lt;br /&gt;
|2.4.40&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bedtools&lt;br /&gt;
|2.30.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bigwig&lt;br /&gt;
|011921&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|biobwa&lt;br /&gt;
|0.7.17&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bioperl&lt;br /&gt;
|1.6.923&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|blast&lt;br /&gt;
|2.3.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bowtie2&lt;br /&gt;
|2.2.6&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bpp&lt;br /&gt;
|4.4.0 / 4.4.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cblas&lt;br /&gt;
|1.20.11&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cmaq&lt;br /&gt;
|5.3.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cmdstan&lt;br /&gt;
|2.21.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cp2k&lt;br /&gt;
|2.5.1 / 3.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cryoSPARC&lt;br /&gt;
|2.11&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|diamond&lt;br /&gt;
|0.7.9&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|doxygen&lt;br /&gt;
|2014&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|dualSP&lt;br /&gt;
|4.2 / 4.3_beta&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|eautils&lt;br /&gt;
|02072017&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|eclipse_ptp&lt;br /&gt;
|8.1.2 / 9.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|eigen&lt;br /&gt;
|3.2.8&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|emacs&lt;br /&gt;
|25.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|exabayes&lt;br /&gt;
|1.5&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|examl&lt;br /&gt;
|3.0.17&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|fdppdiv&lt;br /&gt;
|20140728&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|fds_smv&lt;br /&gt;
|6.1.11&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ferret&lt;br /&gt;
|6.96&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|freetype&lt;br /&gt;
|2.5.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|fsplit&lt;br /&gt;
|092214&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ga&lt;br /&gt;
|5.3&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|gamess-us&lt;br /&gt;
|4.14.14&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|gamma&lt;br /&gt;
|20111212&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|gap&lt;br /&gt;
|4.6.5 / 4.7.5&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
__NOTOC__&amp;lt;/div&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;This is an index of available applications, organized both by academic discipline and alphabetically.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
For information about using modules to run your applications go to [[Using Modules To Run Your Applications]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class= &amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== Computational Physics and Computational Chemistry == &lt;br /&gt;
Applications in this section use classical mechanics, quantum mechanics, and thermodynamics in simulation studies of the fundamental properties of atoms, molecules, and chemical reactions.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;AMBER (Assisted Model Building with Energy Refinement)&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
Amber is the collective name for a suite of programs for classical bio-molecular simulations. &lt;br /&gt;
The name &amp;quot;Amber&amp;quot; also denotes the family of potentials (force fields) used with Amber &lt;br /&gt;
software. Here we discuss only simulation packages, but not the force fields or free tools&lt;br /&gt;
available via AmberTools package. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/amber&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;AUTODOCK&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
AutoDock is a suite of automated docking tools.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt; It is designed to predict how small molecules, such as substrates or drug candidates, bind to a receptor of known 3D structure.  AutoDock actually consists of two main programs: &#039;&#039;autodock&#039;&#039; itself performs the docking of the ligand to a set of grids describing the target protein; &#039;&#039;autogrid&#039;&#039; pre-calculates these grids. More information about the software may be found at the autodock web-page [http://autodock.scripps.edu/]. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/autodock&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;CP2K&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CP2K is a program to perform atomistic and molecular simulations of solid state, liquid, molecular, and biological&lt;br /&gt;
systems.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It provides a general framework for different methods such as e.g., density functional theory (DFT) using&lt;br /&gt;
a mixed Gaussian and plane waves approach (GPW) and classical pair and many-body potentials. CP2K provides&lt;br /&gt;
state-of-the-art methods for efficient and accurate atomistic simulations. More information about our installation &lt;br /&gt;
can be found here [[CP2K]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;DL_POLY&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
DL_POLY is a general purpose molecular dynamics simulation package developed at Daresbury Laboratory by W. Smith, T.R. Forester and I.T. Todorov. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Both serial and parallel versions are available. The original package was developed by the Molecular Simulation Group (now part of the Computational Chemistry Group, MSG) at Daresbury Laboratory under the auspices of the Engineering and Physical Sciences Research Council (EPSRC) for the EPSRC&#039;s Collaborative Computational Project for the Computer Simulation of Condensed Phases ( CCP5). Later developments were also supported by the Natural Environment Research Council through the eMinerals project. The package is the property of the Central Laboratory of the Research Councils, UK. More information about our installation and use can be found here [[DL_POLY]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GAMESS-US&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GAMESS is a program for ab initio molecular quantum chemistry.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Briefly, GAMESS can compute SCF wavefunctions of RHF, ROHF, UHF, GVB, and MCSCF types. Correlation corrections to these SCF wavefunctions include configuration interaction, second-order perturbation theory, and coupled-cluster approaches, as well as the density functional theory approximation. Excited states can be computed by CI, EOM, or TD-DFT procedures. Nuclear gradients are available for automatic geometry optimization, transition state searches, or reaction path following. Computation of the energy Hessian permits prediction of vibrational frequencies, with IR or Raman intensities. Solvent effects may be modeled by the discrete Effective Fragment potentials, or continuum models such as the Polarizable Continuum Model. Numerous relativistic computations are available, including infinite-order two-component scalar corrections, with various spin-orbit coupling options. The Fragment Molecular Orbital method permits many of these sophisticated treatments to be applied to very large systems by dividing the computation into small fragments. Nuclear wavefunctions can also be computed in VSCF, or with explicit treatment of nuclear orbitals by the NEO code. More information, including code, can be found here [[GAMESS-US]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Gaussian09&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Gaussian09 is third-party, commercially licensed software from Gaussian, Inc. It is a set of programs for calculating electronic structure.  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Gaussian09 is available for general use only on ANDY. The Gaussian User Guide can be found at [http://www.gaussian.com]. More information about our installation can be found here [[GAUSSIAN09]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GPAW&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It uses real-space uniform grids and multigrid methods, atom-centered basis-functions or&lt;br /&gt;
plane-waves. GPAW calculations are controlled through scripts written in the programming language &lt;br /&gt;
Python. GPAW relies on the Atomic Simulation Environment (ASE), which is a Python package&lt;br /&gt;
that helps to describe atoms. The ASE package also handles molecular dynamics, analysis, &lt;br /&gt;
visualization, geometry optimization and more. More information about our installation can &lt;br /&gt;
be found here [[GPAW]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GROMACS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GROMACS (Groningen Machine for Chemical Simulations)&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GROMACS is a full-featured suite of free software, licensed under the GNU&lt;br /&gt;
General Public License to perform molecular dynamics simulations -- in other words, to simulate the behavior of molecular&lt;br /&gt;
systems with hundreds to millions of particles using Newton&#039;s equations of motion.  It is primarily used for research on&lt;br /&gt;
proteins, lipids, and polymers, but can be applied to a wide variety of chemical and biological research questions.&lt;br /&gt;
&lt;br /&gt;
Details and submission scripts for production runs can be found at:&lt;br /&gt;
http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/gromacs&lt;br /&gt;
Please note that preparing a molecular system for simulation with the GROMACS tools cannot be done on the login node. Instead, users must either use their own workstations or the interactive or development queues.&lt;br /&gt;
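A minimal sketch of preparing a system on a compute node rather than the login node (the scheduler syntax, queue name, and input file names are illustrative assumptions; consult the cluster documentation for the actual values):&lt;br /&gt;

```shell
# Request an interactive session on a compute node (PBS-style syntax
# and queue name are assumptions -- check the cluster documentation)
qsub -I -q interactive -l select=1:ncpus=4

# On the compute node, load GROMACS and run the preparation tools
module load gromacs
gmx pdb2gmx -f protein.pdb -o processed.gro   # generate topology
gmx grompp -f minim.mdp -c processed.gro -p topol.top -o em.tpr
```
&lt;br /&gt;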
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;HONDO PLUS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Hondo Plus is a versatile electronic structure code that combines work from&lt;br /&gt;
the original Hondo application developed by Harry King in the lab of Michel Dupuis&lt;br /&gt;
and John Rys, and that of numerous subsequent contributors.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It is currently distributed from the research lab of Dr. Donald Truhlar at the University &lt;br /&gt;
of Minnesota.  Part of the advantage of Hondo Plus is the availability of source&lt;br /&gt;
implementations of a wide variety of model chemistries developed over its lifetime&lt;br /&gt;
that researchers can adapt to their particular needs.  The license to use the code requires&lt;br /&gt;
a literature citation which is documented in the Hondo Plus 5.1 manual found&lt;br /&gt;
at:&lt;br /&gt;
&lt;br /&gt;
http://comp.chem.umn.edu/hondoplus/HONDOPLUS_Manual_v5.1.2007.2.17.pdf &lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[HONDO PLUS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;HOOMD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
HOOMD performs general-purpose particle dynamics simulations, taking advantage of NVIDIA GPUs to attain a level of performance&lt;br /&gt;
equivalent to many processor cores on a fast cluster.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Unlike some other applications in the particle and molecular dynamics space, HOOMD developers have worked to implement &lt;br /&gt;
all of the code&#039;s computationally intensive kernels on the GPU, although currently only single-node, single-GPU or&lt;br /&gt;
OpenMP-GPU runs are possible. There is no MPI-GPU or distributed parallel GPU version available at this time.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;LAMMPS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
LAMMPS is a classical molecular dynamics code that models an ensemble of particles in a liquid, solid, or gaseous state.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It can model atomic, polymeric, biological, metallic, granular, and coarse-grained systems using a variety of force fields and boundary conditions.  &lt;br /&gt;
LAMMPS runs efficiently on single-processor desktop or laptop machines, but is also designed for parallel computers, including clusters with and without GPUs. &lt;br /&gt;
It will run on any parallel machine that compiles C++ and supports the MPI message-passing library. This includes distributed- or shared-memory parallel &lt;br /&gt;
machines and Beowulf-style clusters. LAMMPS can model systems with only a few particles up to millions or billions. LAMMPS is a freely-available open-source &lt;br /&gt;
code, distributed under the terms of the GNU Public License, which means you can use or modify the code however you wish.  LAMMPS is designed to be easy to &lt;br /&gt;
modify or extend with new capabilities, such as new force fields, atom types, boundary conditions, or diagnostics. A complete description of LAMMPS can be found &lt;br /&gt;
in its on-line manual here [http://lammps.sandia.gov/doc/Manual.html] or from the full PDF manual here [http://lammps.sandia.gov/doc/Manual.pdf]. Information&lt;br /&gt;
about our installation can be found here [[LAMMPS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;NAMD&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NAMD is a parallel molecular dynamics code designed for high-performance simulation&lt;br /&gt;
of large biomolecular systems. [http://www.ks.uiuc.edu/Research/namd].&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The main server for molecular dynamics calculations is PENZIAS, which supports both GPU and non-GPU versions of NAMD.&lt;br /&gt;
However, the MPI-only (no GPU support) parallel versions of NAMD are also installed on SALK and ANDY.&lt;br /&gt;
More information about our installation can be found here [[NAMD]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;NWChem&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NWChem is an ab initio computational chemistry software package which also includes molecular dynamics (MM, MD) and coupled, quantum mechanical and molecular dynamics functionality (QM-MD).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
NWChem has been developed by the Molecular Sciences Software group at the Department of Energy&#039;s EMSL. The software is available on PENZIAS and ANDY.&lt;br /&gt;
More information about our installation can be found here [[NWChem]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Octopus&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Octopus is a pseudopotential real-space package aimed at the simulation of the electron-ion dynamics of one-, two-, and three-dimensional finite systems subject to time-dependent electromagnetic fields.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The program is based on time-dependent density-functional theory (TDDFT) in the Kohn-Sham scheme. All quantities are expanded in a regular mesh in real space, and the simulations are performed in real time. The program has been successfully used to calculate linear and non-linear absorption spectra, harmonic spectra, laser induced fragmentation, etc. of a variety of systems.&lt;br /&gt;
More information about our installation can be found here [[OCTOPUS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;OpenMM&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenMM is both a library and a stand-alone application which provides tools for modern molecular modeling simulation. As a library it can be hooked into any code, allowing that code to do molecular modeling with minimal extra coding.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Moreover, OpenMM has a strong emphasis on hardware acceleration via GPU, thus providing not just a consistent API but also much greater performance than most comparable codes. OpenMM was developed as part of the Physics-Based Simulation project under the leadership of Prof. Pande.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ORCA&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ORCA is an electronic structure program capable of carrying out geometry optimizations and predicting a large number of spectroscopic parameters at different levels of theory.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Besides Hartree-Fock theory, density functional theory (DFT), and semiempirical methods, high-level ab initio quantum chemical methods, based on configuration interaction and coupled cluster approaches, are included in ORCA to an increasing degree.&lt;br /&gt;
More information about our installation can be found here [[ORCA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;VMD&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
VMD is a molecular visualization program for displaying, animating, and analyzing large biomolecular systems using 3-D graphics and built-in scripting.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It was developed by The Theoretical and Computational Biophysics Group at the University of Illinois. It is documented on the [http://www.ks.uiuc.edu/Research/vmd/ TCB&#039;s homepage].&lt;br /&gt;
&lt;br /&gt;
VMD is installed on Karle. To use it from the command line, log in to Karle as usual and start VMD by typing &amp;quot;vmd&amp;quot; followed by Return, or use the full path:&lt;br /&gt;
&amp;quot;/share/apps/vmd/default/bin/vmd&amp;quot;&lt;br /&gt;
&lt;br /&gt;
To use VMD in GUI mode, log in to Karle with the -X option (see [http://wiki.csi.cuny.edu/cunyhpc/index.php/Main_Page#X11_Forwarding_or_Tunneling this article]&lt;br /&gt;
for details) and start VMD as described above.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Computational Biology == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ANVIO&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
Anvio is an analysis and visualization platform for ‘omics data that allows various types of workflows to be&lt;br /&gt;
established. [[ANVIO]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BAMOVA&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
Bamova is a package used for genetic analysis of a wide range of organisms on the basis of&lt;br /&gt;
next-generation sequence data. The software implements Bayesian Analysis of Molecular Variance and &lt;br /&gt;
different likelihood models for three different types of molecular data &lt;br /&gt;
(including two models for high throughput sequence data). For more detail on BAMOVA please visit the BAMOVA web site [http://www.uwyo.edu/buerkle/software/bamova] and manual &lt;br /&gt;
here [http://www.uwyo.edu/buerkle/software/bamova/bamova_manual_1.0.pdf]. Further information can also be found here [[BAMOVA]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BAYESCAN&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BAYESCAN is a population genomics software package.  It identifies outlier loci and is applicable &lt;br /&gt;
to both dominant and codominant data. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;BayeScan aims to identify candidate loci under natural selection from &lt;br /&gt;
genetic data, using differences in allele frequencies between populations.  BayeScan is &lt;br /&gt;
based on the multinomial-Dirichlet model.  One of the scenarios covered consists of an&lt;br /&gt;
island model in which subpopulation allele frequencies are correlated through a common &lt;br /&gt;
migrant gene pool from which they differ in varying degrees.  The difference in allele frequency &lt;br /&gt;
between this common gene pool and each subpopulation is measured by a subpopulation-&lt;br /&gt;
specific FST coefficient.  Therefore, this formulation can consider realistic ecological scenarios &lt;br /&gt;
where the effective size and the immigration rate may differ among subpopulations.&lt;br /&gt;
&lt;br /&gt;
More detailed information on Bayescan can be found at the web site here [http://cmpg.unibe.ch/software/bayescan/index.html]&lt;br /&gt;
and in the manual here [http://cmpg.unibe.ch/software/bayescan/files/BayeScan2.1_manual.pdf]. More information about our &lt;br /&gt;
installation can be found here [[BAYESCAN]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BEST&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BEST is an application designed to estimate gene trees and the species tree from multilocus sequences.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The program uses information from multiple gene trees and performs a Bayesian analysis to estimate the &lt;br /&gt;
topology of the species tree, divergence times and population sizes.  &lt;br /&gt;
&lt;br /&gt;
It provides a new approach for estimating the mutation-rate-&lt;br /&gt;
based phylogenetic relationships among species.  Its method accounts for deep coalescence,&lt;br /&gt;
but not for other complicating issues such as horizontal transfer or gene duplication. The&lt;br /&gt;
program works in conjunction with the popular Bayesian phylogenetics package, MrBayes&lt;br /&gt;
(Ronquist and Huelsenbeck, Bioinformatics, 2003).  BEST&#039;s parameters are defined using&lt;br /&gt;
the &#039;prset&#039; command from MrBayes.  Details on BEST&#039;s capabilities and options are available&lt;br /&gt;
at the BEST web site here [http://www.stat.osu.edu/~dkp/BEST/introduction]. More information&lt;br /&gt;
about our installation is available here [[BEST]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BEAST&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BEAST is a powerful and flexible evolutionary analysis package for molecular sequence variation. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The package implements a family of Markov chain Monte Carlo (MCMC) algorithms for Bayesian phylogenetic inference, divergence time dating, coalescent analysis, phylogeography and related molecular evolutionary analyses. It is a cross-platform Java program for Bayesian MCMC analysis of molecular sequences. It is entirely orientated towards rooted, time-measured phylogenies inferred using strict or relaxed molecular clock models. It can be used as a method of reconstructing phylogenies, but is also a framework for testing evolutionary hypotheses without conditioning on a single tree topology.  BEAST uses MCMC to average over tree space, so that each tree is weighted proportional to its posterior probability. The distribution includes a simple to use user-interface program called &#039;BEAUti&#039; for setting up standard analyses and a suite of programs for analysing the results. For more detail on BEAST (and BEAUTi) please visit the BEAST web site [http://beast.bio.ed.ac.uk/Main_Page]. More information about our installation can be found here [http://wiki.csi.cuny.edu/cunyhpc/index.php/Template:BEAST BEAST].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BOWTIE2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BOWTIE2 is an ultrafast and memory-efficient tool for aligning sequencing reads to long reference sequences. It is particularly good at aligning reads of about 50 up to 100s or 1,000s of characters, and particularly good at aligning to relatively long (e.g. mammalian) genomes.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
BOWTIE2 indexes the genome with an FM Index to keep its memory&lt;br /&gt;
footprint small: for the human genome, its memory footprint is typically around 3.2 GB. BOWTIE2 supports gapped,&lt;br /&gt;
local, and paired-end alignment modes. BOWTIE2 is part of a sequence alignment and analysis tool chain developed&lt;br /&gt;
at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics&lt;br /&gt;
and Computational Biology.  The other tools in this collection, CUFFLINKS, SAMTOOLS, and TOPHAT, are also installed at&lt;br /&gt;
the CUNY HPC Center. Additional information can be found at the BOWTIE2 home page here [http://bowtie-bio.sourceforge.net/bowtie2/index.shtml].&lt;br /&gt;
Information about our installation can be found here [[BOWTIE2]].&lt;br /&gt;
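A typical BOWTIE2 run first builds an FM index of the reference and then aligns reads against it. The sketch below is a hypothetical batch job; the file names (genome.fa, reads_1.fq, reads_2.fq) and thread count are placeholders, not part of any CUNY HPC Center configuration.&lt;br /&gt;

```shell
#!/bin/bash
# Hypothetical BOWTIE2 job: index a reference, then align paired-end reads.

# Build the FM index (produces genome_index.*.bt2 files)
bowtie2-build genome.fa genome_index

# Align paired-end reads with 4 threads, writing SAM output
bowtie2 -x genome_index \
        -1 reads_1.fq -2 reads_2.fq \
        -p 4 -S aligned.sam
```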
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BPP2&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BPP2 uses a Bayesian modeling approach to generate the posterior probabilities of species assignments taking into account uncertainties due to unknown gene trees and the ancestral coalescent process. For tractability, it relies on a user-specified guide tree to avoid integrating over all possible species delimitations.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Additional information can be found at the download site here [http://abacus.gene.ucl.ac.uk/software.html]. More information about our installation can be found here [[BPP2]].&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BROWNIE&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
BROWNIE is a program for analyzing rates of continuous character evolution and looking for substantial rate differences in different parts of a tree using likelihood&lt;br /&gt;
ratio tests and Akaike Information Criterion (AIC) statistics. It now also implements many other methods for examining trait evolution and methods for doing species&lt;br /&gt;
delimitation. More information about our installation can be found here [[BROWNIE]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;CUFFLINKS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CUFFLINKS assembles transcripts, estimates their abundances, and tests for differential expression and regulation in&lt;br /&gt;
RNA-Seq samples.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It accepts aligned RNA-Seq reads and assembles the alignments into a parsimonious set of transcripts.&lt;br /&gt;
CUFFLINKS then estimates the relative abundances of these transcripts based on how many reads support each one, taking&lt;br /&gt;
into account biases in library preparation protocols.  CUFFLINKS is part of a sequence alignment and analysis tool chain developed&lt;br /&gt;
at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics&lt;br /&gt;
and Computational Biology.  The other tools in this collection, BOWTIE, SAMTOOLS, and TOPHAT, are also installed at&lt;br /&gt;
the CUNY HPC Center.  Additional information can be found at the CUFFLINKS home page here [http://abacus.gene.ucl.ac.uk/software.html].&lt;br /&gt;
More information about our installation can be found here [[CUFFLINKS]].&lt;br /&gt;
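Because CUFFLINKS consumes aligned reads, it is usually run downstream of an aligner such as TOPHAT. The following is a minimal hypothetical pipeline sketch; input file names and the output directory names are placeholders.&lt;br /&gt;

```shell
#!/bin/bash
# Hypothetical RNA-Seq sketch: align reads with TOPHAT, then assemble
# transcripts and estimate abundances with CUFFLINKS.

# TOPHAT writes its alignments to tophat_out/accepted_hits.bam
tophat -p 4 -o tophat_out genome_index reads_1.fq reads_2.fq

# CUFFLINKS assembles transcripts from the aligned reads
cufflinks -p 4 -o cufflinks_out tophat_out/accepted_hits.bam
```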
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GARLI&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GARLI is a program that performs phylogenetic inference using the maximum-likelihood criterion.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Several sequence types are supported, including nucleotide, amino acid and codon. Version 2.0 adds support for&lt;br /&gt;
partitioned models and morphology-like data types. It is usable on all operating systems, and is written and&lt;br /&gt;
maintained by Derrick Zwickl at the University of Texas at Austin.  Additional information can be found&lt;br /&gt;
on the GARLI Wiki here [https://www.nescent.org/wg_garli/Main_Page]. More information about our installation &lt;br /&gt;
can be found here [[GARLI]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MPFR&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The MPFR library is a C library for multiple-precision floating-point computations with correct rounding. MPFR has been continuously supported by &lt;br /&gt;
INRIA, and its current main authors come from the Caramel and AriC project-teams at Loria (Nancy, France) and LIP (Lyon, France), respectively; see &lt;br /&gt;
more on the credit page.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
MPFR is based on the GMP multiple-precision library. The main goal of MPFR is to provide a library for multiple-precision &lt;br /&gt;
floating-point computation which is both efficient and has well-defined semantics. It copies the good ideas from the ANSI/IEEE-754 standard for &lt;br /&gt;
double-precision floating-point arithmetic (53-bit significand). The library is installed on PENZIAS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MRBAYES&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MrBayes is a program for the Bayesian estimation of phylogeny.  Bayesian inference of&lt;br /&gt;
phylogeny is based upon a quantity called the posterior probability distribution of trees,&lt;br /&gt;
which is the probability of a tree conditioned on certain observations.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The conditioning is&lt;br /&gt;
accomplished using Bayes&#039;s theorem. The posterior probability distribution of trees is&lt;br /&gt;
impossible to calculate analytically; instead, MrBayes uses a simulation technique called&lt;br /&gt;
Markov chain Monte Carlo (or MCMC) to approximate the posterior probabilities of trees.&lt;br /&gt;
More information about our installation can be found here [[MRBAYES]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;msABC&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
msABC is a program for simulating various neutral evolutionary demographic scenarios&lt;br /&gt;
based on the software ms (Hudson 2002). msABC extends ms, calculating a multitude of&lt;br /&gt;
summary statistics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Therefore, msABC is suitable for performing the sampling step of an&lt;br /&gt;
Approximate Bayesian Computation analysis (ABC), under various neutral demographic&lt;br /&gt;
models. The main advantages of msABC are (i) use of various prior distributions, such as&lt;br /&gt;
uniform, Gaussian, log-normal, gamma, (ii) implementation of a multitude of summary statistics&lt;br /&gt;
for one or more populations, (iii) an efficient implementation, which allows the analysis of&lt;br /&gt;
hundreds of loci and chromosomes even on a single computer, (iv) extended flexibility, such&lt;br /&gt;
as simulation of loci of variable size and simulation of missing data.&lt;br /&gt;
More information about our installation can be found here [[msABC]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MSMS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MSMS is a tool to generate sequence samples under both neutral models and single locus selection models.&lt;br /&gt;
MSMS permits the full range of demographic models provided by its relative MS (Hudson, 2002).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
In particular, it allows for multiple demes with arbitrary migration patterns, population growth and decay in each deme, and&lt;br /&gt;
for population splits and mergers. Selection (including dominance) can depend on the deme and also change&lt;br /&gt;
with time.&lt;br /&gt;
More information about our installation can be found here [[MSMS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;POPABC&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PopABC is a computer package for estimating the historical demographic parameters of closely related species/populations (e.g. population size, migration rate, mutation rate, recombination rate, splitting events) within an isolation-with-migration model.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The software performs coalescent simulation in the framework of approximate Bayesian computation (ABC, Beaumont et al, 2002). PopABC can also be used to perform Bayesian model choice to discriminate between different demographic scenarios. The program can be used either for research or for education and teaching purposes. Further details and a manual can be found at the POPABC website here [http://code.google.com/p/popabc]&lt;br /&gt;
More information about our installation can be found here [[POPABC]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;PHOENICS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PHOENICS is an integrated Computational Fluid Dynamics (CFD) package for the preparation, simulation, and visualization of&lt;br /&gt;
processes involving fluid flow, heat or mass transfer, chemical reaction, and/or combustion in engineering equipment, building&lt;br /&gt;
design, and the environment.  More detail is available at the CHAM website, here http://www.cham.co.uk. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Although we expect most users to pre- and post-process their jobs on office-local clients, the CUNY HPC Center has installed&lt;br /&gt;
the Unix version of the &#039;&#039;entire&#039;&#039; PHOENICS package on ANDY.   PHOENICS is installed in /share/apps/phoenics/default where all&lt;br /&gt;
the standard PHOENICS directories are located (d_allpro, d_earth, d_enviro, d_photo, d_priv1, d_satell, etc.).  Of particular interest&lt;br /&gt;
on ANDY is the MPI parallel version of the &#039;earth&#039; executable &#039;parexe&#039; which makes full use of the parallel processing power of the &lt;br /&gt;
ANDY cluster for larger individual jobs.  While the parallel scaling properties of PHOENICS jobs will vary depending on the job size,&lt;br /&gt;
processor type, and the cluster interconnect, larger work loads will generally scale and run efficiently on from 8 to 32 processors,&lt;br /&gt;
while smaller problems will scale efficiently only up to about 4 processors.  More detail on parallel PHOENICS is available at&lt;br /&gt;
http://www.cham.co.uk/products/parallel.php.   Aside from the tightly coupled MPI parallelism of &#039;parexe&#039;, users can run multiple&lt;br /&gt;
instances of the non-parallel modules on ANDY (including the serial &#039;earexe&#039; module) when a parametric approach can be used&lt;br /&gt;
to solve their problems.&lt;br /&gt;
More information about our installation can be found here [[PHOENICS]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;PHRAP-PHRED&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PHRAP and PHRED are part of the DNA sequence analysis tool set that also includes the programs&lt;br /&gt;
CROSSMATCH and SWAT.  These tools are described in detail here [http://www.phrap.org/phredphrapconsed.html],&lt;br /&gt;
but a brief description of both, extracted from their manuals, follows.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
PHRED and PHRAP (along with CONSED) can be used for both small sequence assemblies and larger shotgun analyses. This makes the&lt;br /&gt;
suite a perhaps under-utilized resource for smaller, non-genomics groups.  Some variables may need to be adjusted,&lt;br /&gt;
particularly in CONSED, but researchers that have multiple sequences from a small locus can use the &lt;br /&gt;
suite, starting from their chromatogram files.  &lt;br /&gt;
More information about our installation can be found here [[PHRAP-PHRED]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;PyRAD&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Reduced-representation genomic sequence data (e.g., RADseq, GBS, ddRAD) are commonly used to study population-level research questions and consequently most software packages for assembling or analyzing such data are designed for sequences with little variation across samples.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Phylogenetic analyses typically include species with deeper divergence times (more variable loci across samples) and thus a different approach to clustering and identifying orthologs will perform better. pyRAD is intended for use with any type of restriction-site associated DNA. It currently supports RAD, ddRAD, PE-ddRAD, GBS, PE-GBS, EzRAD, PE-EzRAD, 2B-RAD, nextRAD, and can be extended to other types.&lt;br /&gt;
More information about our installation can be found here [[PyRAD]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;RAXML&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Randomized Axelerated Maximum Likelihood (RAxML) is a program for sequential and parallel&lt;br /&gt;
maximum likelihood based inference of large phylogenetic trees.  It is a descendent of fastDNAml&lt;br /&gt;
which in turn was derived from Joe Felsenstein’s DNAml, which is part of the PHYLIP package.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
RAxML is installed at the CUNY HPC Center on ANDY, where multiple versions are available in both serial and MPI parallel builds.  The MPI-parallel version should be run on four or more cores and is also installed on PENZIAS. &lt;br /&gt;
More information about our installation can be found here [[RAXML]]&lt;br /&gt;
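A parallel RAxML run can be submitted with a SLURM script along the following lines. This is a hypothetical sketch: the module name, MPI binary name (raxmlHPC-MPI), and input file are assumptions that may differ on ANDY and PENZIAS.&lt;br /&gt;

```shell
#!/bin/bash
#SBATCH --partition production
#SBATCH --ntasks 8
# Hypothetical SLURM job for the MPI-parallel RAxML build.
# -s: alignment file, -n: run name, -m: substitution model,
# -p: random seed, -# 100: number of bootstrap replicates.
module load raxml
mpirun -np 8 raxmlHPC-MPI -s alignment.phy -n run1 -m GTRGAMMA -p 12345 -# 100
```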
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Structurama&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Structurama is a program for inferring population structure from genetic data. The program assumes that the sampled loci&lt;br /&gt;
are in linkage equilibrium and that the allele frequencies for each population are drawn from a Dirichlet probability distribution. Two different models for population structure are implemented.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
First, Structurama offers the method of Pritchard et al. (2000) in which the number of populations is considered fixed. The program also allows the number of populations to be a random variable following a Dirichlet process prior (Pella and Masuda, 2006; Huelsenbeck and Andolfatto, 2007).&lt;br /&gt;
More information about our installation can be found here [[STRUCTURAMA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Structure&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The program Structure is a free software package for using multi-locus genotype data to investigate&lt;br /&gt;
population structure.  Its uses include inferring the presence of distinct populations, assigning individuals&lt;br /&gt;
to populations, studying hybrid zones, identifying migrants and admixed individuals, and estimating&lt;br /&gt;
population allele frequencies in situations where many individuals are migrants or admixed.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;It can be applied to most of the commonly-used genetic markers, including SNPs, microsatellites, RFLPs, and AFLPs. More detailed information about Structure can be found at the web site here [http://pritch.bsd.uchicago.edu/structure.html]. Structure is installed on ANDY at the CUNY HPC Center.  Structure is a serial program. &lt;br /&gt;
More information about our installation can be found here [[STRUCTURE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;TOPHAT&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
TOPHAT is a fast splice junction mapper for RNA-Seq reads. It aligns RNA-Seq reads to mammalian-sized&lt;br /&gt;
genomes using the ultra high-throughput short read aligner Bowtie, and then analyzes the mapping results&lt;br /&gt;
to identify splice junctions between exons.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
TOPHAT is part of a sequence alignment and analysis tool chain developed at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics and Computational Biology.&lt;br /&gt;
More information about our installation can be found here [[TOPHAT]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Trinity&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Trinity, developed at the Broad Institute and the Hebrew University of Jerusalem, represents a novel method for the efficient and robust de novo reconstruction of transcriptomes from RNA-seq data.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Trinity combines three independent software modules: Inchworm, Chrysalis, and Butterfly, applied sequentially to process large volumes of RNA-seq reads. Trinity partitions the sequence data into many individual de Bruijn graphs, each representing the transcriptional complexity at a given gene or locus, and then processes each graph independently to extract full-length splicing isoforms and to tease apart transcripts derived from paralogous genes.&lt;br /&gt;
More information about our installation can be found here [[TRINITY]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;VELVET&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Velvet is a set of algorithms for &#039;&#039;de novo&#039;&#039; short read assembly using de Bruijn graphs. It was developed at the &lt;br /&gt;
European Bioinformatics Institute, Cambridge, UK.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
More information about our installation can be found here [[VELVET]]&lt;br /&gt;
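A Velvet assembly proceeds in two stages: velveth hashes the reads, then velvetg builds and traverses the de Bruijn graph. The sketch below is hypothetical; the output directory, read file, and k-mer length of 31 are placeholder choices.&lt;br /&gt;

```shell
#!/bin/bash
# Hypothetical Velvet run for de novo short-read assembly.

# Stage 1: hash the reads with k-mer length 31
velveth out_dir 31 -fastq -short reads.fq

# Stage 2: build the de Bruijn graph and extract contigs
velvetg out_dir -cov_cutoff auto

# Assembled contigs appear in out_dir/contigs.fa
```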
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Computational Genomics, Proteomics, Microbiomics, Genetics ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;AUGUSTUS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
AUGUSTUS is a program that predicts genes in eukaryotic genomic sequences. It is gene-finding software based on Hidden Markov Models (HMMs), &lt;br /&gt;
described in papers by Stanke and Waack (2003), Stanke et al. (2006, 2006b), and Stanke et al. (2008).  The local version of the program is installed on &lt;br /&gt;
Penzias. More information can be found here: [[AUGUSTUS]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;CONSED&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CONSED is a DNA sequence analysis finishing tool that provides sequence viewing, editing, alignment, and&lt;br /&gt;
assembly capabilities from an X Window System graphical user interface (GUI).  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It makes extensive use of other non-graphical&lt;br /&gt;
and underlying sequence analysis tools including PHRED, PHRAP, and CROSSMATCH that may also be used separately&lt;br /&gt;
and are described elsewhere in this document.  It also includes a viewer called BAMVIEW.  The CONSED tool chain is&lt;br /&gt;
developed and maintained at the University of Washington and is described&lt;br /&gt;
more completely here [http://bozeman.mbt.washington.edu/consed/consed.html]&lt;br /&gt;
CONSED is provided at the CUNY HPC Center under an academic license that allows use, but not the copying or&lt;br /&gt;
outbound transfer of any of the executables or files distributed under this academic license.  The license is not &lt;br /&gt;
transferable in any way and users wishing to run the application at their own site must acquire a license directly&lt;br /&gt;
from the authors.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center supports CONSED version 23.0 for interactive use on KARLE.  CONSED 23.0 and the tool&lt;br /&gt;
chain described above are also installed on ANDY to allow for the batch use of the underlying support tools mentioned above&lt;br /&gt;
and described in detail below.  In general, running GUI-based applications on ANDY&#039;s login node is discouraged.  There&lt;br /&gt;
should be little need to do so: KARLE is on the periphery of the CUNY HPC network, making login there direct, and&lt;br /&gt;
KARLE shares its HOME directory file system with ANDY, making files created on either system immediately available on&lt;br /&gt;
the other.&lt;br /&gt;
&lt;br /&gt;
Rather than rewrite portions of the CONSED manual here, users are directed to the manual&#039;s &amp;quot;Quick Tour&amp;quot; section&lt;br /&gt;
here [http://bozeman.mbt.washington.edu/consed/distributions/README.23.0.txt] and asked to walk through some&lt;br /&gt;
of the exercises after logging into KARLE.  If problems or questions come up, please post them to &amp;quot;hpchelp@csi.cuny.edu&amp;quot;.&lt;br /&gt;
The CONSED 23.0 distribution is installed on KARLE in the following directory:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/share/apps/consed/default&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
All the files in the distribution can be found there.&lt;br /&gt;
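To launch tools from the distribution without typing full paths, the install directory can be added to the shell environment. This is a minimal sketch; the bin subdirectory name is an assumption about the local install layout.&lt;br /&gt;

```shell
# Point the shell at the CONSED distribution directory listed above.
export CONSED_HOME=/share/apps/consed/default

# Prepend its (assumed) bin subdirectory to the search path
export PATH="$CONSED_HOME/bin:$PATH"
```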
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ExaML&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ExaML (Exascale Maximum Likelihood) is a code for phylogenetic inference using MPI. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The code is installed only on Penzias and implements the popular RAxML search algorithm for maximum likelihood based inference of phylogenetic trees. &lt;br /&gt;
&lt;br /&gt;
It uses a radically new MPI parallelization approach that yields improved parallel efficiency, in particular on partitioned multi-gene or whole-genome datasets.&lt;br /&gt;
&lt;br /&gt;
When using ExaML please cite the following paper:&lt;br /&gt;
&lt;br /&gt;
Alexey M. Kozlov, Andre J. Aberer, Alexandros Stamatakis: &amp;quot;ExaML Version 3: A Tool for Phylogenomic Analyses on Supercomputers.&amp;quot; Bioinformatics (2015) 31 (15): 2577-2579.&lt;br /&gt;
&lt;br /&gt;
It is up to 4 times faster than RAxML-Light [1].&lt;br /&gt;
&lt;br /&gt;
As RAxML-Light, ExaML also implements checkpointing, SSE3, AVX vectorization and memory saving techniques.&lt;br /&gt;
&lt;br /&gt;
[1] A. Stamatakis, A.J. Aberer, C. Goll, S.A. Smith, S.A. Berger, F. Izquierdo-Carrasco: &amp;quot;RAxML-Light: A Tool for computing TeraByte Phylogenies&amp;quot;, Bioinformatics 2012; doi: 10.1093/bioinformatics/bts309.&lt;br /&gt;
&lt;br /&gt;
The run script for a parallel job is analogous to the one used for running RAxML on Penzias and Andy.&lt;br /&gt;
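&lt;br /&gt;
For illustration, a minimal SLURM script for a parallel ExaML run might look like the sketch below. The module name &#039;&#039;examl&#039;&#039;, the binary name, and the input file names are assumptions; adjust them to the actual installation and your data:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name examl_run&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=4&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to the working directory in SLURM&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
module load examl&lt;br /&gt;
&lt;br /&gt;
# -s: binarized alignment, -t: starting tree, -m: rate model, -n: run name&lt;br /&gt;
mpirun -np 4 examl -s alignment.binary -t starting.tree -m GAMMA -n run1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;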
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ExaBayes&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ExaBayes is a software package for Bayesian tree inference. It is particularly suitable for large-scale analyses on computer clusters. It is installed on the PENZIAS server at the HPC Center. &lt;br /&gt;
The installed package is the MPI-parallel version. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Availability:&#039;&#039;&#039; PENZIAS&lt;br /&gt;
&#039;&#039;&#039;Module file:&#039;&#039;&#039; exabayes&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Citation&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
Fredrik Ronquist, Maxim Teslenko, Paul van der Mark, Daniel L Ayres, Aaron Darling, Sebastian Höhna, Bret Larget, Liang Liu, Marc a Suchard, and John P Huelsenbeck. MrBayes 3.2: efficient Bayesian phylogenetic inference and model choice across a large model space. Systematic biology, 61(3):539--42, May 2012.&lt;br /&gt;
&lt;br /&gt;
Alexei J Drummond, Marc a Suchard, Dong Xie, and Andrew Rambaut. Bayesian phylogenetics with BEAUti and the BEAST 1.7. Molecular biology and evolution, 29(8):1969--73, August 2012. &lt;br /&gt;
&lt;br /&gt;
Clemens Lakner, Paul van der Mark, John P Huelsenbeck, Bret Larget, and Fredrik Ronquist. Efficiency of Markov chain Monte Carlo tree proposals in Bayesian phylogenetics. Systematic biology, 57(1):86--103, February 2008. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Use:&#039;&#039;&#039; An example SLURM script to run ExaBayes on PENZIAS is given below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name=&amp;lt;name_of_job&amp;gt;&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=2&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to the working directory in SLURM&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
mpirun -np 2 exabayes &amp;lt;input_file&amp;gt; &amp;gt; output_file&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
More information about the application, along with sample workflows, is available on the ExaBayes web site:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://sco.h-its.org/exelixis/web/software/exabayes/manual/index.html#sec-11&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GENOMEPOP2&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GenomePop2 is a newer and specialized version of the older program GenomePop. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GenomePop2 is designed to manage SNPs under more flexible and useful settings that are controlled by the user.  &lt;br /&gt;
If you need models with more than 2 alleles you should use the older GenomePop version of the program.  &lt;br /&gt;
&lt;br /&gt;
GenomePop2 allows the forward simulation of sequences of biallelic positions. As in the previous version, a number of evolutionary&lt;br /&gt;
and demographic settings are allowed. Several populations under any migration model can be implemented. Each population consists&lt;br /&gt;
of a number N of individuals.  Each individual is represented by one (haploids) or two (diploids) chromosomes with constant or variable&lt;br /&gt;
(hotspots) recombination between binary sites. The fitness model is multiplicative, with each derived allele having a multiplicative effect&lt;br /&gt;
of (1-s * h-E) on the global fitness value. By default E=0, and h=0.5 in diploids but 1 in homozygotes or in haploids. Selective nucleotide&lt;br /&gt;
sites undergoing directional selection (positive or negative) in different populations can be defined. In addition, bottlenecks and/or&lt;br /&gt;
population expansion scenarios can be set up by the user for a desired number of generations. Several runs can be executed and&lt;br /&gt;
a sample of user-defined size is obtained for each run and population.  For more detail on how to use GenomePop2, please visit the&lt;br /&gt;
web site here [http://webs.uvigo.es/acraaj/GenomePop2.htm]. More information about our installation can be found here [[GENOMEPOP2]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;HUMAnN2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
HUMAnN is a pipeline for efficiently and accurately profiling the presence/absence and abundance of microbial pathways in a community from metagenomic or metatranscriptomic sequencing data (typically millions of short DNA/RNA reads). HUMAnN2 is the next generation of HUMAnN (HMP Unified Metabolic Analysis Network). Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/humann2&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;IMa2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The IMa2 application performs basic ‘Isolation with Migration’ calculations using Bayesian inference and Markov &lt;br /&gt;
chain Monte Carlo methods. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The only major conceptual addition in IMa2 that distinguishes it from the&lt;br /&gt;
original IMa program is that it can handle data from multiple populations. This requires that the user &lt;br /&gt;
specify a phylogenetic tree. Importantly, the tree must be rooted, and the sequence in time of internal&lt;br /&gt;
nodes must be known and specified. More information on the IMa2 and IMa can be found in the user&lt;br /&gt;
manual here [http://lifesci.rutgers.edu/%7Eheylab/ProgramsandData/Programs/IMa2/Using_IMa2_8_24_2011.pdf].&lt;br /&gt;
Information about our installation can be found here [[IMA2]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;I-TASSER&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
I-TASSER is a platform for protein structure and function predictions. 3D models are built based on multiple-threading alignments by LOMETS and iterative template fragment assembly simulations; function insights are derived by matching the 3D models against the BioLiP protein function database. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/itasser&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;LAMARC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
LAMARC is a program which estimates population-genetic parameters such as population size, population growth rate,&lt;br /&gt;
recombination rate, and migration rates.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It approximates a summation over all possible genealogies that could explain&lt;br /&gt;
the observed sample, which may be sequence, SNP, microsatellite, or electrophoretic data.  LAMARC and its sister program&lt;br /&gt;
MIGRATE are successor programs to the older programs Coalesce, Fluctuate, and Recombine, which are no longer being&lt;br /&gt;
supported.  These programs are memory-intensive, but can run effectively on workstations. They are supported on a variety&lt;br /&gt;
of operating systems.  For more detail on LAMARC please visit the website here [http://evolution.genetics.washington.edu/lamarc/index.html],&lt;br /&gt;
read this paper [http://evolution.genetics.washington.edu/lamarc/download/bioinformatics2006-lamarc2.0.pdf], and look&lt;br /&gt;
at the documentation here [http://evolution.genetics.washington.edu/lamarc/documentation/index.html]. More information&lt;br /&gt;
about our installation can be found here [[LAMARC]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;QIIME&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
QIIME (pronounced &amp;quot;chime&amp;quot;) stands for Quantitative Insights Into Microbial Ecology. QIIME is a pipeline application that uses numerous third-party applications.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
QIIME takes users from their raw sequencing output through initial analyses such as OTU picking, taxonomic assignment, and construction of phylogenetic trees from representative sequences of OTUs, and through downstream statistical analysis, visualization, and production of publication-quality graphics.&lt;br /&gt;
More information about our installation can be found here [[QIIME]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;USEARCH&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
USEARCH is a unique sequence analysis tool with thousands of users worldwide.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
USEARCH offers search and clustering algorithms that are often orders of magnitude faster than BLAST. &lt;br /&gt;
More information about our installation can be found here [[USEARCH]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;VELVET&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
Velvet is a set of algorithms for &#039;&#039;de novo&#039;&#039; short read assembly using de Bruijn graphs. It was developed at the European Bioinformatics Institute, Cambridge, UK. More information about our installation can be found here [[VELVET]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;VSEARCH&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
VSEARCH is an open-source alternative to USEARCH.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
VSEARCH stands for vectorized search, as the tool takes advantage of parallelism in the form of SIMD vectorization as well as multiple threads to perform accurate alignments at high speed. VSEARCH uses an optimal global aligner (full dynamic programming Needleman-Wunsch), in contrast to USEARCH which by default uses a heuristic seed and extend aligner. This usually results in more accurate alignments and overall improved sensitivity (recall) with VSEARCH, especially for alignments with gaps. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Additional details on VSEARCH can be found at: [https://github.com/torognes/vsearch this link]&lt;br /&gt;
&lt;br /&gt;
VSEARCH is installed on the Penzias HPC cluster. To start using VSEARCH, load the corresponding module first:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load vsearch  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
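&lt;br /&gt;
Once the module is loaded, a typical clustering run might look like the sketch below. The file names and the 97% identity threshold are placeholders; consult the VSEARCH documentation for the full option list:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Cluster reads at 97% identity and write one representative per cluster&lt;br /&gt;
vsearch --cluster_fast reads.fasta --id 0.97 --centroids centroids.fasta&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;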
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Math, Engineering, Computer Science == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ADCIRC&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ADCIRC is a system of programs for solving time-dependent, free-surface, circulation and transport problems in two and three dimensions.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;  These programs utilize the finite element method in space, allowing the use of highly flexible, unstructured grids. The ADCIRC distribution includes and integrates the METIS tool for partitioning unstructured grids. In addition, ADCIRC includes a distribution of SWAN to which it can be coupled to add a near-shore wave simulation model. Typical ADCIRC applications have included: (i) modeling tides and wind-driven circulation, (ii) analysis of hurricane storm surge and flooding, (iii) dredging feasibility and material disposal studies, (iv) larval transport studies, (v) near-shore marine operations. For more detail on using ADCIRC, please visit the ADCIRC website here [http://adcirc.org/index.html] and read the ADCIRC manual [http://adcirc.org/documentv49/ADCIRC_title_page.html]. Details on using SWAN with ADCIRC can be found here [http://www.caseydietrich.com/swanadcirc] and at the SWAN web site [http://swanmodel.sourceforge.net]. More information about use and set-up can be found here [[ADCIRC]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;FDPPDIV&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
FDPPDiv is a program for estimating divergence times on a fixed, rooted tree topology. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
FDPPDiv offers two alternative approaches to divergence time estimation. &lt;br /&gt;
The DPPDiv part refers to the Dirichlet Process Prior (DPP) model for divergence &lt;br /&gt;
time estimation, and the F prefix (for Fossil) refers to the new Fossil Birth-Death approach. &lt;br /&gt;
More information about our installation can be found here [[FDPPDIV]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GAUSS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GAUSS is an easy-to-use data analysis, mathematical, and statistical environment based on the powerful, fast, and efficient GAUSS Matrix Programming Language.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GAUSS is used to solve real world problems and data analysis problems of exceptionally large &lt;br /&gt;
scale. GAUSS is currently available on ANDY. At the CUNY HPC Center&lt;br /&gt;
GAUSS is typically run in serial mode. (Note:  GAUSS should not be confused with the&lt;br /&gt;
computational chemistry application Gaussian.) More information about our installation can &lt;br /&gt;
be found here [[GAUSS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Hapsembler&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Hapsembler is a haplotype-specific genome assembly toolkit that is designed for genomes that are rich in SNPs and other types of polymorphism. Hapsembler can be used to assemble reads from a variety of platforms including Illumina and Roche/454.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;  Hapsembler is currently installed on the Appel system. In order to access the Hapsembler binaries, load the hapsembler module with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load hapsembler&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;HOPSPACK&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
HOPSPACK stands for Hybrid Optimization Parallel Search Package, designed to help users solve a wide range of derivative-free optimization problems.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;The latter can be noisy, non-convex, or non-smooth. The basic optimization problem addressed is to minimize an objective function f(x) of n unknowns subject to the constraints: AI x ≥ bI, AE x = bE, cI(x) ≥ 0, cE(x) = 0, and l ≤ x ≤ u.&lt;br /&gt;
The first two constraints specify linear inequalities and equalities with coefficient matrices AI and AE. The next two constraints describe nonlinear inequalities and equalities captured in the functions cI(x) and cE(x). The final constraints denote lower and upper bounds on the variables. HOPSPACK allows variables to be continuous or integer-valued and has provisions for multi-objective optimization problems. In general, the functions f(x), cI(x), and cE(x) can be noisy and nonsmooth, although most algorithms perform best on deterministic functions with continuous derivatives.&lt;br /&gt;
&lt;br /&gt;
Users can design and implement their own solver, either by writing their own code or by building on the existing solvers already in the framework. Because all solvers (called citizens) are members of the same global class, they can share assigned resources.&lt;br /&gt;
The main features of the package are:&lt;br /&gt;
&lt;br /&gt;
:* Only function values are required for the optimization.&lt;br /&gt;
:* The user must provide a separate program that can evaluate the objective and nonlinear constraint functions at a given point.&lt;br /&gt;
:* A robust implementation of the Generating Set Search (GSS) solver is supplied, including the capability to handle linear constraints.&lt;br /&gt;
:* Multiple solvers can run simultaneously and are easily configured to share information.&lt;br /&gt;
:* Solvers may share a cache of computed function and constraint evaluations to eliminate duplicate work.&lt;br /&gt;
:* Solvers can initiate and control sub-problems.&lt;br /&gt;
Continuation -&amp;gt; [[HOPSACK]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;LS-DYNA&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From its early development in the 1970s, LS-DYNA has evolved into a general purpose material&lt;br /&gt;
stress, collision, and crash analysis program with many built-in material and structural element&lt;br /&gt;
models. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In recent years, the code has also been adapted for both OpenMP and MPI parallel execution&lt;br /&gt;
on a variety of platforms.  The most recent version, LS-DYNA 7.1.2, is installed on &lt;br /&gt;
ANDY at the CUNY HPC Center under an academic license held by the City College of New York.&lt;br /&gt;
The use of this license to do work that is commercial in any way is prohibited.&lt;br /&gt;
&lt;br /&gt;
Details on LS-DYNA&#039;s use, input deck construction, and execution options can be found in the LS-DYNA&lt;br /&gt;
manual here [http://ftp.lstc.com/user/manuals/ls-dyna_971_manual_k_rev1.pdf]. All files related&lt;br /&gt;
to the HPC Center installation of version 971 (executables and example inputs) are located in:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
/share/apps/lsdyna/default/[bin,examples]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[LSDYNA]].&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Network Simulator-2 (NS2)&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NS2 is a discrete event simulator targeted at networking research. NS2 provides&lt;br /&gt;
substantial support for simulation of TCP, routing, and multicast protocols over&lt;br /&gt;
wired and wireless (local and satellite) networks.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It is installed on BOB at the CUNY HPC Center. For more detailed information look [http://www.isi.edu/nsnam/ns/ here].&lt;br /&gt;
More information about our installation can be found here [[NS2]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;OpenFOAM&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenFOAM is, first and foremost, a library which users may incorporate into their own code(s). OpenFOAM is installed on PENZIAS.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
More information about our installation can be found here [[OpenFOAM]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;OpenSEES&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenSEES, the Open System for Earthquake Engineering Simulation, is an object-oriented, open source software framework.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It allows users to create both serial and parallel finite element computer applications for simulating the response of structural and geotechnical systems subjected to earthquakes and other hazards. OpenSEES is primarily written in C++ and uses several Fortran and C numerical libraries for linear equation solving, and material and element routines. The software is installed on PENZIAS.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ParGAP&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ParGAP is built on top of the GAP system. The latter is a system for computational discrete algebra, with particular emphasis on Computational Group Theory. GAP provides a programming language, a library of thousands of functions implementing algebraic algorithms written in the GAP language, as well as large data libraries of algebraic objects.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The ParGAP (Parallel GAP) package itself provides a way of writing parallel programs using the GAP language. Former names of the package were ParGAP/MPI and GAP/MPI; the word MPI refers to Message Passing Interface, a well-known standard for parallelism. ParGAP is based on the MPI standard, and this distribution includes a subset implementation of MPI, to provide a portable layer with a high level interface to BSD sockets.&lt;br /&gt;
More information about our installation can be found here [[ParGAP]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;SAGE&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Sage can be used to study elementary and advanced, pure and applied mathematics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
This includes a huge range of mathematics, including basic algebra, calculus, elementary to very&lt;br /&gt;
advanced number theory, cryptography, numerical computation, commutative algebra, group&lt;br /&gt;
theory, combinatorics, graph theory, exact linear algebra and much more.&lt;br /&gt;
More information about our installation can be found here [[SAGE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;WRF&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The Weather Research and Forecasting (WRF) model is a specific computer program with dual use for both weather&lt;br /&gt;
forecasting and weather research.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It was created through a partnership that includes the National Oceanic and Atmospheric&lt;br /&gt;
Administration (NOAA), the National Center for Atmospheric Research (NCAR), and more than 150 other organizations&lt;br /&gt;
and universities in the United States and abroad. WRF is the latest numerical model and application to be adopted by NOAA&#039;s&lt;br /&gt;
National Weather Service as well as the U.S. military and private meteorological services. It is also being adopted by&lt;br /&gt;
government and private meteorological services worldwide. More information about our installation can be found here [[WRF]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Economics, Business, Statistics, Analytics ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;R&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
R is a free software environment for statistical computing and graphics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:15px;&amp;quot; &amp;gt;General Notes&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The R language has become a de facto standard among statisticians and is widely used for statistical software development and data analysis. R is available on the following HPCC servers: Karle, Penzias, Appel, and Andy. Karle is the only machine where R can be used without submitting jobs to the SLURM manager. On all other systems users must submit their R jobs via the SLURM batch scheduler.&lt;br /&gt;
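&lt;br /&gt;
As a sketch, a serial R batch job on one of the SLURM-managed systems might be submitted with a script like the one below. The module name &#039;&#039;r&#039;&#039; and the script name are assumptions; adjust them to the actual installation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name r_job&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to the working directory in SLURM&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
module load r&lt;br /&gt;
&lt;br /&gt;
# Run the R script non-interactively and capture its output&lt;br /&gt;
Rscript myscript.R &amp;gt; myscript.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;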
More information about our installation can be found here [[R]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;R-devel&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
R is a language and environment for statistical computing and graphics. R-devel provides both core R userspace and all R development components.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Stata/MP&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Stata is a complete, integrated statistical package that provides tools for data analysis, data management, and graphics. Stata/MP takes advantage of multiprocessor computers. The CUNY HPC Center is licensed to run Stata on up to 8 cores. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Currently Stata/MP is available for users on Karle (karle.csi.cuny.edu). &lt;br /&gt;
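On Karle, Stata can also be run non-interactively in batch mode. A typical invocation might look like the sketch below (the do-file name is a placeholder, and the binary name &#039;&#039;stata-mp&#039;&#039; is an assumption; verify it on the system):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Run a do-file in batch mode; output is logged to myscript.log&lt;br /&gt;
stata-mp -b do myscript.do&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;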
More information about our installation can be found here [[STATA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;SAS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
SAS (pronounced &amp;quot;sass&amp;quot;, originally Statistical Analysis System) is an integrated system of software products provided by SAS Institute Inc.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It enables the programmer to perform:&lt;br /&gt;
:* data entry, retrieval, management, and mining&lt;br /&gt;
:* report writing and graphics&lt;br /&gt;
:* statistical analysis&lt;br /&gt;
:* business planning, forecasting, and decision support&lt;br /&gt;
:* operations research and project management&lt;br /&gt;
:* quality improvement&lt;br /&gt;
:* applications development&lt;br /&gt;
:* data warehousing (extract, transform, load)&lt;br /&gt;
:* platform independent and remote computing&lt;br /&gt;
More information about our installation can be found here [[SAS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== General Development Systems ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Coming soon.&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== Tools, Libraries, Compilers ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;CGAL&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The Computational Geometry Algorithms Library (CGAL) offers data structures and algorithms.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt; &lt;br /&gt;
Examples of these are triangulations (2D constrained triangulations, and Delaunay triangulations and periodic triangulations in &lt;br /&gt;
2D and 3D), Voronoi diagrams (for 2D and 3D points, 2D additively weighted Voronoi diagrams, and segment Voronoi diagrams), polygons &lt;br /&gt;
(Boolean operations, offsets, straight skeleton), polyhedra (Boolean operations), arrangements of curves and their applications &lt;br /&gt;
(2D and 3D envelopes, Minkowski sums), mesh generation (2D Delaunay mesh generation and 3D surface and volume mesh &lt;br /&gt;
generation, skin surfaces), geometry processing (surface mesh simplification, subdivision and parameterization, as well as &lt;br /&gt;
estimation of local differential properties, and approximation of ridges and umbilics), alpha shapes, convex hull &lt;br /&gt;
algorithms (in 2D, 3D and dD), search structures (kd trees for nearest neighbor search, and range and segment trees), &lt;br /&gt;
interpolation (natural neighbor interpolation and placement of streamlines), shape analysis, fitting, and distances &lt;br /&gt;
(smallest enclosing sphere of points or spheres, smallest enclosing ellipsoid of points, principal component analysis), and &lt;br /&gt;
kinetic data structures.&lt;br /&gt;
&lt;br /&gt;
The library is installed on PENZIAS.&lt;br /&gt;
&lt;br /&gt;
More information can be found here http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/CGAL. &lt;br /&gt;
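To make one of the algorithm families above concrete, here is a minimal 2D convex hull (Andrew&#039;s monotone chain) sketched in plain Python. This is only an illustration of the idea, a hypothetical stand-in for CGAL&#039;s far more general and numerically robust C++ routines; it is not CGAL&#039;s API.&lt;br /&gt;

```python
# Minimal 2D convex hull via Andrew's monotone chain, a plain-Python
# illustration of one algorithm family CGAL provides (CGAL's own C++
# routines are far more general and numerically robust).
def cross(o, a, b):
    # z-component of the cross product (a - o) x (b - o);
    # positive when o -> a -> b makes a counter-clockwise turn
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    pts = sorted(set(points))
    if len(pts) == 1:
        return pts
    hull = []
    for sweep in (pts, list(reversed(pts))):   # lower chain, then upper chain
        chain = []
        for p in sweep:
            # pop while the last turn fails to be counter-clockwise
            while len(chain) >= 2 and not cross(chain[-2], chain[-1], p) > 0:
                chain.pop()
            chain.append(p)
        hull.extend(chain[:-1])                # endpoints appear in both chains
    return hull

# Interior points are discarded; hull vertices come back in
# counter-clockwise order.
print(convex_hull([(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)]))
```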
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GMP&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
GMP is a library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and &lt;br /&gt;
floating-point numbers. There is no practical limit to the precision except the ones implied by the &lt;br /&gt;
available memory in the machine GMP runs on. GMP has a rich set of functions, and the functions have a &lt;br /&gt;
regular interface. The library is installed on PENZIAS.&lt;br /&gt;
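As an illustration of what arbitrary-precision arithmetic provides (GMP itself is a C library; the sketch below uses Python, whose built-in integers are likewise unbounded, purely to show the concept):&lt;br /&gt;

```python
# Conceptual illustration of arbitrary-precision integer arithmetic, the
# service GMP supplies to C programs. Python's built-in int is likewise
# unbounded, so 100! is computed exactly instead of overflowing a
# 64-bit machine word.
import math

f = math.factorial(100)
print(len(str(f)))   # 100! has 158 decimal digits, all of them exact
print(f % 1000)      # prints 0: 100! ends in many trailing zeros
```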
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Gnuplot&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Gnuplot is a portable command-line driven graphing utility. It is installed on the following systems:&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
:* Karle under /usr/bin/gnuplot&lt;br /&gt;
:* Andy under /share/apps/gnuplot/default/bin/gnuplot&lt;br /&gt;
&lt;br /&gt;
Extensive documentation of gnuplot is available at the [http://www.gnuplot.info/  gnuplot&#039;s homepage].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;JULIA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
Julia is a high-level, high-performance dynamic programming language for technical computing, with syntax that is familiar to users of other technical computing environments. Julia is installed on Penzias.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MAGMA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
MAGMA is a library similar to LAPACK but for hybrid architectures. MAGMA provides implementations for CUDA, Intel Xeon Phi, and OpenCL. &lt;br /&gt;
On CUNY HPCC systems, MAGMA is installed in its CUDA variant only on Penzias.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MATHEMATICA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
“Mathematica” is a fully integrated technical computing system that combines fast, high-precision numerical and symbolic computation with data visualization and programming capabilities. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Mathematica is currently installed on the CUNY HPC Center&#039;s ANDY cluster (andy.csi.cuny.edu) and the KARLE standalone server (karle.csi.cuny.edu). The basics of running Mathematica on CUNY HPC systems are presented here.  Additional information on how to use Mathematica can be found at http://www.wolfram.com/learningcenter/&lt;br /&gt;
&lt;br /&gt;
More information is available in this wiki, find it here [[MATHEMATICA]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MATLAB&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The MATLAB high-performance language for technical computing&lt;br /&gt;
integrates computation, visualization, and programming in an&lt;br /&gt;
easy-to-use environment where problems and solutions are expressed in&lt;br /&gt;
familiar mathematical notation.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Typical uses include:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Math and computation&lt;br /&gt;
&lt;br /&gt;
Algorithm development&lt;br /&gt;
&lt;br /&gt;
Data acquisition&lt;br /&gt;
&lt;br /&gt;
Modeling, simulation, and prototyping&lt;br /&gt;
&lt;br /&gt;
Data analysis, exploration, and visualization&lt;br /&gt;
&lt;br /&gt;
Scientific and engineering graphics&lt;br /&gt;
&lt;br /&gt;
Application development, including graphical user interface building&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[MATLAB]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MET (Model Evaluation Tools)&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MET was developed by the National Center for Atmospheric Research (NCAR) Developmental Testbed Center (DTC) through the generous support of the U.S. Air Force Weather Agency (AFWA) and the National Oceanic and Atmospheric Administration (NOAA).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;MET is designed to be a highly-configurable, state-of-the-art suite of verification tools. It was developed using output from the Weather Research and Forecasting (WRF) modeling system but may be applied to the output of other modeling systems as well.&lt;br /&gt;
&lt;br /&gt;
MET provides a variety of verification techniques, including:&lt;br /&gt;
&lt;br /&gt;
*Standard verification scores comparing gridded model data to point-based observations&lt;br /&gt;
*Standard verification scores comparing gridded model data to gridded observations&lt;br /&gt;
*Spatial verification methods comparing gridded model data to gridded observations using neighborhood, object-based, and intensity-scale decomposition approaches&lt;br /&gt;
*Probabilistic verification methods comparing gridded model data to point-based or gridded observations&lt;br /&gt;
&lt;br /&gt;
More information about use and set-up can be found here [[MET]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Migrate&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Migrate estimates population parameters, effective population sizes and migration rates&lt;br /&gt;
of n populations, using genetic data.  It uses a coalescent theory approach taking into&lt;br /&gt;
account the history of mutations and the uncertainty of the genealogy.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;The parameter values are estimated by either a maximum likelihood (ML) approach or Bayesian&lt;br /&gt;
inference (BI).  Migrate&#039;s output is presented in a text file and in a PDF file. The PDF file&lt;br /&gt;
will eventually contain all possible analyses, including histograms of posterior distributions.&lt;br /&gt;
More information about our installation can be found here [[MIGRATE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Python&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Python is a programming language that lets you work more quickly and integrate your systems more effectively. You can learn to use Python and see almost immediate gains in productivity and lower maintenance costs. [http://www.python.org/]&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
There are two supported versions installed on the Andy system: &lt;br /&gt;
&lt;br /&gt;
* Python 3.1.3 located under /share/apps/python/3.1.3/bin&lt;br /&gt;
* Python 2.7.3 located under /share/apps/epd/7.3-2/bin&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[PYTHON]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;SAMTOOLS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
SAMTOOLS provide various utilities for manipulating alignments in the SAM format, including sorting,&lt;br /&gt;
merging, indexing and generating alignments in a per-position format.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
SAM (Sequence Alignment/Map) format is a generic format for storing large nucleotide sequence alignments.  SAM is a compact format&lt;br /&gt;
that:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Is flexible enough to store all the alignment information generated by various alignment programs;&lt;br /&gt;
&lt;br /&gt;
Is simple enough to be easily generated by alignment programs or converted from existing formats;&lt;br /&gt;
&lt;br /&gt;
Allows most operations on the alignment to work without loading the whole alignment into memory;&lt;br /&gt;
&lt;br /&gt;
Allows the file to be indexed by genomic position to efficiently retrieve all reads aligning to a locus.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[SAMTOOLS]]&lt;br /&gt;
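As a small sketch of the record layout described above, the snippet below splits one alignment line into the eleven mandatory, tab-separated fields named in the SAM specification. The record itself is made up for illustration; samtools implements this, and far more, in C.&lt;br /&gt;

```python
# Split one (made-up) SAM alignment record into the 11 mandatory,
# tab-separated fields named in the SAM specification.
SAM_FIELDS = ["QNAME", "FLAG", "RNAME", "POS", "MAPQ",
              "CIGAR", "RNEXT", "PNEXT", "TLEN", "SEQ", "QUAL"]

def parse_sam_line(line):
    cols = line.rstrip("\n").split("\t")
    record = dict(zip(SAM_FIELDS, cols[:11]))
    for key in ("FLAG", "POS", "MAPQ", "PNEXT", "TLEN"):
        record[key] = int(record[key])   # numeric fields
    return record

rec = parse_sam_line("read1\t99\tchr1\t7\t30\t8M\t=\t37\t39\tTTAGATAA\t*")
print(rec["RNAME"], rec["POS"], rec["CIGAR"])   # chr1 7 8M
```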
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Thrust Library (CUDA)&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Thrust is a C++ template library for CUDA based on the Standard Template Library (STL). Thrust allows you&lt;br /&gt;
to implement high performance parallel applications with minimal programming effort through a high-level&lt;br /&gt;
interface that is fully interoperable with CUDA C.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Thrust has been integrated into the default&lt;br /&gt;
CUDA distribution. The HPC Center currently runs CUDA as the default on PENZIAS, which includes the&lt;br /&gt;
Thrust library. More information about our installation can be found here [[THRUST]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Xmgrace&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Grace is a WYSIWYG 2D plotting tool for the X Window System and M*tif. Xmgrace is developed at the Plasma Laboratory, Weizmann Institute of Science. More information about its capabilities can be found at the web page http://plasma-gate.weizmann.ac.il/Grace/&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Grace is installed on Karle. To use it from the command-line interface, log in to Karle as usual and start Grace by typing &amp;quot;xmgrace&amp;quot; followed by return, or alternatively use the full path: &amp;quot;/share/apps/xmgrace/default/grace/bin/xmgrace&amp;quot;.&lt;br /&gt;
In order to use Grace in GUI mode, log in to Karle with the -X option (see [http://wiki.csi.cuny.edu/cunyhpc/index.php/Main_Page#X11_Forwarding_or_Tunneling this article] &lt;br /&gt;
for details) and start Xmgrace as described above.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Alphabetical List ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== A == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ADCIRC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ADCIRC is a system of programs for solving time-dependent, free-surface, circulation and transport problems in two and three dimensions.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;  These programs utilize the finite element method in space allowing the use of highly flexible, unstructured grids. The ADCIRC distribution includes and integrates the METIS tool for unstructured grid generation. In addition, ADCIRC includes a distribution of SWAN to which it can be coupled to add a shore wave simulation model. Typical ADCIRC applications have included: (i) modeling tides and wind driven circulation, (ii) analysis of hurricane storm surge and flooding, (iii) dredging feasibility and material disposal studies, (iv) larval transport studies, (v) near shore marine operations. For more detail on using ADCIRC, please visit the ADCIRC website here [http://adcirc.org/index.html] and read the ADCIRC manual [http://adcirc.org/documentv49/ADCIRC_title_page.html]. Details on using SWAN with ADCIRC can be found here [http://www.caseydietrich.com/swanadcirc] and at the SWAN web site [http://swanmodel.sourceforge.net]. More information about use and set-up can be found here [[ADCIRC]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;AMBER (Assisted Model Building with Energy Refinement)&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
Amber is the collective name for a suite of programs for classical bio-molecular simulations. &lt;br /&gt;
The name &amp;quot;Amber&amp;quot; also denotes the family of potentials (force fields) used with Amber &lt;br /&gt;
software. Here we discuss only the simulation packages, not the force fields or the free tools&lt;br /&gt;
available via the AmberTools package. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/amber&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ANVIO&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
Anvio is an analysis and visualization platform for genomics data. Anvio allows various types of workflows to be&lt;br /&gt;
established. [[ANVIO]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;AUGUSTUS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
AUGUSTUS is a program that predicts genes in eukaryotic genomic sequences. Augustus is gene-finding software based on Hidden Markov Models (HMMs),&lt;br /&gt;
described in papers by Stanke and Waack (2003), Stanke et al. (2006), Stanke et al. (2006b), and Stanke et al. (2008). The local version of the program is installed on&lt;br /&gt;
Penzias. More information can be found here: [[AUGUSTUS]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;AUTODOCK&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
AutoDock is a suite of automated docking tools.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt; It is designed to predict how small molecules, such as substrates or drug candidates, bind to a receptor of known 3D structure.  AutoDock actually consists of two main programs: &#039;&#039;autodock&#039;&#039; itself performs the docking of the ligand to a set of grids describing the target protein; &#039;&#039;autogrid&#039;&#039; pre-calculates these grids. More information about the software may be found at the autodock web-page [http://autodock.scripps.edu/]. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/autodock&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== B == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BAMOVA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
Bamova is a package used to perform genetic analysis of a wide range of organisms on the basis of&lt;br /&gt;
next-generation sequence data. The software implements Bayesian Analysis of Molecular Variance and &lt;br /&gt;
different likelihood models for three different types of molecular data &lt;br /&gt;
(including two models for high throughput sequence data). For more detail on BAMOVA please visit the BAMOVA web site [http://www.uwyo.edu/buerkle/software/bamova] and manual &lt;br /&gt;
here [http://www.uwyo.edu/buerkle/software/bamova/bamova_manual_1.0.pdf]. Further information can also be found here [[BAMOVA]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BAYESCAN&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BAYESCAN is a population genomics software package.  It identifies outlier loci and is applicable &lt;br /&gt;
to both dominant and codominant data. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;BayeScan aims to identify candidate loci under natural selection from genetic data, using differences in allele frequencies between populations.  BayeScan is based on the multinomial-Dirichlet model.  One of the scenarios covered consists of an island model in which subpopulation allele frequencies are correlated through a common migrant gene pool, from which they differ in varying degrees.  The difference in allele frequency between this common gene pool and each subpopulation is measured by a subpopulation-specific FST coefficient.  Therefore, this formulation can consider realistic ecological scenarios where the effective size and the immigration rate may differ among subpopulations.&lt;br /&gt;
More detailed information on BayeScan can be found at the web site here [http://cmpg.unibe.ch/software/bayescan/index.html] and in the manual here [http://cmpg.unibe.ch/software/bayescan/files/BayeScan2.1_manual.pdf]. More information about our installation can be found here [[BAYESCAN]].&lt;br /&gt;
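BayeScan&#039;s estimator is Bayesian and subpopulation-specific, but the quantity it measures can be illustrated with the classical Wright&#039;s FST, (HT - HS)/HT, computed below for a single biallelic locus from subpopulation allele frequencies. This is a conceptual sketch only, not BayeScan&#039;s actual model.&lt;br /&gt;

```python
# Conceptual sketch of Wright's FST = (H_T - H_S) / H_T for one
# biallelic locus, from per-subpopulation allele frequencies.
# BayeScan's Bayesian, subpopulation-specific estimator is more
# involved; this only shows the quantity being measured.
def fst(freqs):
    p_bar = sum(freqs) / len(freqs)        # pooled allele frequency
    h_t = 2.0 * p_bar * (1.0 - p_bar)      # total expected heterozygosity
    h_s = sum(2.0 * p * (1.0 - p) for p in freqs) / len(freqs)  # mean within
    return (h_t - h_s) / h_t

print(round(fst([0.2, 0.8]), 3))   # strong differentiation: 0.36
print(round(fst([0.5, 0.5]), 3))   # identical subpopulations: 0.0
```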
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BEAST&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BEAST is a powerful and flexible evolutionary analysis package for molecular sequence variation. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The package implements a family of Markov chain Monte Carlo (MCMC) algorithms for Bayesian phylogenetic inference, divergence time dating, coalescent analysis, phylogeography and related molecular evolutionary analyses. It is a cross-platform Java program for Bayesian MCMC analysis of molecular sequences. It is entirely orientated towards rooted, time-measured phylogenies inferred using strict or relaxed molecular clock models. It can be used as a method of reconstructing phylogenies, but is also a framework for testing evolutionary hypotheses without conditioning on a single tree topology.  BEAST uses MCMC to average over tree space, so that each tree is weighted proportional to its posterior probability. The distribution includes a simple to use user-interface program called &#039;BEAUti&#039; for setting up standard analyses and a suite of programs for analysing the results. For more detail on BEAST (and BEAUTi) please visit the BEAST web site [http://beast.bio.ed.ac.uk/Main_Page]. More information about our installation can be found here [http://wiki.csi.cuny.edu/cunyhpc/index.php/Template:BEAST BEAST].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BEST&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BEST is an application aimed to estimate gene trees and the species tree from multilocus sequences.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The program uses information from multiple gene trees and performs a Bayesian analysis to estimate the &lt;br /&gt;
topology of the species tree, divergence times and population sizes.  &lt;br /&gt;
&lt;br /&gt;
It provides a new approach for estimating the mutation-rate-&lt;br /&gt;
based, phylogenetic relationships among species.  Its method accounts for deep coalescence,&lt;br /&gt;
but not for other complicating issues such as horizontal transfer or gene duplication. The&lt;br /&gt;
program works in conjunction within the popular Bayesian phylogenetics package, MrBayes&lt;br /&gt;
(Ronquist and Huelsenbeck, Bioinformatics, 2003).  BEST&#039;s parameters are defined using&lt;br /&gt;
the &#039;prset&#039; command from MrBayes.  Details on BEST&#039;s capabilities and options are available&lt;br /&gt;
at the BEST web site here [http://www.stat.osu.edu/~dkp/BEST/introduction]. More information&lt;br /&gt;
about our installation is available here [[BEST]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BOWTIE2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BOWTIE2 is an ultrafast and memory-efficient tool for aligning sequencing reads to long reference sequences. It is particularly good at aligning reads of about 50 up to 100s or 1,000s of characters, and particularly good at aligning to relatively long (e.g. mammalian) genomes.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
BOWTIE2 indexes the genome with an FM Index to keep its memory&lt;br /&gt;
footprint small: for the human genome, its memory footprint is typically around 3.2 GB. BOWTIE2 supports gapped,&lt;br /&gt;
local, and paired-end alignment modes. BOWTIE2 is part of a sequence alignment and analysis tool chain developed&lt;br /&gt;
at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics&lt;br /&gt;
and Computational Biology.  The other tools in this collection, CUFFLINKS, SAMTOOLS, and TOPHAT, are also installed at&lt;br /&gt;
the CUNY HPC Center. Additional information can be found at the BOWTIE2 home page here [http://bowtie-bio.sourceforge.net/bowtie2/index.shtml].&lt;br /&gt;
Information about our installation can be found here [[BOWTIE2]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BPP2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BPP2 uses a Bayesian modeling approach to generate the posterior probabilities of species assignments taking into account uncertainties due to unknown gene trees and the ancestral coalescent process. For tractability, it relies on a user-specified guide tree to avoid integrating over all possible species delimitations.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Additional information can be found at the download site here [http://abacus.gene.ucl.ac.uk/software.html]. More information about our installation can be found here [[BPP2]].&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BROWNIE&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
BROWNIE is a program for analyzing rates of continuous character evolution and looking for substantial rate differences in different parts of a tree using likelihood&lt;br /&gt;
ratio tests and Akaike Information Criterion (AIC) statistics. It now also implements many other methods for examining trait evolution and methods for doing species&lt;br /&gt;
delimitation. More information about our installation can be found here [[BROWNIE]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== C == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;CGAL&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The Computational Geometry Algorithms Library (CGAL) offers a broad collection of geometric data structures and algorithms.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt; &lt;br /&gt;
Examples of these are triangulations (2D constrained triangulations, and Delaunay triangulations and periodic triangulations in &lt;br /&gt;
2D and 3D), Voronoi diagrams (for 2D and 3D points, 2D additively weighted Voronoi diagrams, and segment Voronoi diagrams), polygons &lt;br /&gt;
(Boolean operations, offsets, straight skeleton), polyhedra (Boolean operations), arrangements of curves and their applications &lt;br /&gt;
(2D and 3D envelopes, Minkowski sums), mesh generation (2D Delaunay mesh generation and 3D surface and volume mesh &lt;br /&gt;
generation, skin surfaces), geometry processing (surface mesh simplification, subdivision and parameterization, as well as &lt;br /&gt;
estimation of local differential properties, and approximation of ridges and umbilics), alpha shapes, convex hull &lt;br /&gt;
algorithms (in 2D, 3D and dD), search structures (kd trees for nearest neighbor search, and range and segment trees), &lt;br /&gt;
interpolation (natural neighbor interpolation and placement of streamlines), shape analysis, fitting, and distances &lt;br /&gt;
(smallest enclosing sphere of points or spheres, smallest enclosing ellipsoid of points, principal component analysis), and &lt;br /&gt;
kinetic data structures.&lt;br /&gt;
&lt;br /&gt;
The library is installed on PENZIAS.&lt;br /&gt;
&lt;br /&gt;
More information can be found here http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/CGAL. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;CONSED&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CONSED is a DNA sequence analysis finishing tool that provides sequence viewing, editing, alignment, and&lt;br /&gt;
assembly capabilities from an X Windows graphical user interface (GUI).  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It makes extensive use of other non-graphical&lt;br /&gt;
and underlying sequence analysis tools including PHRED, PHRAP, and CROSSMATCH that may also be used separately&lt;br /&gt;
and are described elsewhere in this document.  It also includes a viewer called BAMVIEW.  The CONSED tool chain is&lt;br /&gt;
developed and maintained at the University of Washington and is described&lt;br /&gt;
more completely here [http://bozeman.mbt.washington.edu/consed/consed.html]&lt;br /&gt;
CONSED is provided at the CUNY HPC Center under an academic license that allows use, but not the copying or&lt;br /&gt;
outbound transfer of any of the executables or files distributed under this academic license.  The license is not &lt;br /&gt;
transferable in any way and users wishing to run the application at their own site must acquire a license directly&lt;br /&gt;
from the authors.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center supports CONSED version 23.0 for interactive use on KARLE.  CONSED 23.0 and the tool&lt;br /&gt;
chain described above is also installed on ANDY to allow for the batch use of the underlying support tools mentioned above&lt;br /&gt;
and described in detail below.  In general, running GUI-based applications on ANDY&#039;s login node is discouraged.  There&lt;br /&gt;
should be little need to do this, as KARLE is on the periphery of the CUNY HPC network, making login there direct, and&lt;br /&gt;
KARLE shares its HOME directory file system with ANDY, making files created on either system immediately available on&lt;br /&gt;
the other.&lt;br /&gt;
&lt;br /&gt;
Rather than rewrite portions of the CONSED manual here, users are directed to the manual&#039;s &amp;quot;Quick Tour&amp;quot; section&lt;br /&gt;
here [http://bozeman.mbt.washington.edu/consed/distributions/README.23.0.txt] and asked to walk through some&lt;br /&gt;
of the exercises after logging into KARLE.  If problems or questions come up, please post them to &amp;quot;hpchelp@csi.cuny.edu&amp;quot;.&lt;br /&gt;
The CONSED 23.0 distribution is installed on KARLE in the following directory:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/share/apps/consed/default&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
All the files in the distribution can be found there.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;CP2K&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CP2K is a program to perform atomistic and molecular simulations of solid state, liquid, molecular, and biological&lt;br /&gt;
systems.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It provides a general framework for different methods, such as density functional theory (DFT) using&lt;br /&gt;
a mixed Gaussian and plane waves approach (GPW) and classical pair and many-body potentials. CP2K provides&lt;br /&gt;
state-of-the-art methods for efficient and accurate atomistic simulations. More information about our installation &lt;br /&gt;
can be found here [[CP2K]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;CUFFLINKS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CUFFLINKS assembles transcripts, estimates their abundances, and tests for differential expression and regulation in&lt;br /&gt;
RNA-Seq samples.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It accepts aligned RNA-Seq reads and assembles the alignments into a parsimonious set of transcripts.&lt;br /&gt;
CUFFLINKS then estimates the relative abundances of these transcripts based on how many reads support each one, taking&lt;br /&gt;
into account biases in library preparation protocols.  CUFFLINKS is part of a sequence alignment and analysis tool chain developed&lt;br /&gt;
at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics&lt;br /&gt;
and Computational Biology.  The other tools in this collection, BOWTIE, SAMTOOLS, and TOPHAT, are also installed at&lt;br /&gt;
the CUNY HPC Center.  Additional information can be found at the CUFFLINKS home page here [http://abacus.gene.ucl.ac.uk/software.html].&lt;br /&gt;
More information about our installation can be found here [[CUFFLINKS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== D == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;DL_POLY&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
DL_POLY is a general purpose molecular dynamics simulation package developed at Daresbury Laboratory by W. Smith, T.R. Forester and I.T. Todorov. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Both serial and parallel versions are available. The original package was developed by the Molecular Simulation Group (now part of the Computational Chemistry Group, MSG) at Daresbury Laboratory under the auspices of the Engineering and Physical Sciences Research Council (EPSRC) for the EPSRC&#039;s Collaborative Computational Project for the Computer Simulation of Condensed Phases (CCP5). Later developments were also supported by the Natural Environment Research Council through the eMinerals project. The package is the property of the Central Laboratory of the Research Councils, UK. More information about our installation and use can be found here [[DL_POLY]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== E == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ExaML&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ExaML stands for Exascale Maximum Likelihood (ExaML) code for phylogenetic inference using MPI. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The code is installed only on Penzias and implements the popular RAxML search algorithm for maximum likelihood based inference of phylogenetic trees. &lt;br /&gt;
&lt;br /&gt;
It uses a radically new MPI parallelization approach that yields improved parallel efficiency, in particular on partitioned multi-gene or whole-genome datasets.&lt;br /&gt;
&lt;br /&gt;
When using ExaML please cite the following paper:&lt;br /&gt;
&lt;br /&gt;
Alexey M. Kozlov, Andre J. Aberer, Alexandros Stamatakis: &amp;quot;ExaML Version 3: A Tool for Phylogenomic Analyses on Supercomputers.&amp;quot; Bioinformatics (2015) 31 (15): 2577-2579.&lt;br /&gt;
&lt;br /&gt;
It is up to 4 times faster than RAxML-Light [1].&lt;br /&gt;
&lt;br /&gt;
As RAxML-Light, ExaML also implements checkpointing, SSE3, AVX vectorization and memory saving techniques.&lt;br /&gt;
&lt;br /&gt;
[1] A. Stamatakis, A.J. Aberer, C. Goll, S.A. Smith, S.A. Berger, F. Izquierdo-Carrasco: &amp;quot;RAxML-Light: A Tool for computing TeraByte Phylogenies&amp;quot;, Bioinformatics 2012; doi: 10.1093/bioinformatics/bts309.&lt;br /&gt;
&lt;br /&gt;
The run script for a parallel job is analogous to the one used to run RAxML on Penzias and Andy.&lt;br /&gt;
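&lt;br /&gt;
As a rough sketch only (the job name, task count, and input file names below are placeholders, and the ExaML command-line options should be verified against the ExaML documentation), such a run script might look like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name examl_run&lt;br /&gt;
#SBATCH --ntasks 8&lt;br /&gt;
&lt;br /&gt;
# Change to the directory from which the job was submitted&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
# alignment.binary and starting.tree are placeholder input names&lt;br /&gt;
mpirun -np 8 examl -s alignment.binary -t starting.tree -m GAMMA -n run1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;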
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ExaBayes&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ExaBayes is a software package for Bayesian tree inference. It is particularly suitable for large-scale analyses on computer clusters. It is installed on the PENZIAS server at the HPC Center. &lt;br /&gt;
The installed package is the MPI-parallel version. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Availability:&#039;&#039;&#039; PENZIAS&lt;br /&gt;
&#039;&#039;&#039;Module file:&#039;&#039;&#039; exabayes&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Citation&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
Fredrik Ronquist, Maxim Teslenko, Paul van der Mark, Daniel L. Ayres, Aaron Darling, Sebastian Höhna, Bret Larget, Liang Liu, Marc A. Suchard, and John P. Huelsenbeck. MrBayes 3.2: efficient Bayesian phylogenetic inference and model choice across a large model space. Systematic Biology, 61(3):539--542, May 2012.&lt;br /&gt;
&lt;br /&gt;
Alexei J. Drummond, Marc A. Suchard, Dong Xie, and Andrew Rambaut. Bayesian phylogenetics with BEAUti and the BEAST 1.7. Molecular Biology and Evolution, 29(8):1969--1973, August 2012. &lt;br /&gt;
&lt;br /&gt;
Clemens Lakner, Paul van der Mark, John P. Huelsenbeck, Bret Larget, and Fredrik Ronquist. Efficiency of Markov chain Monte Carlo tree proposals in Bayesian phylogenetics. Systematic Biology, 57(1):86--103, February 2008. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Use:&#039;&#039;&#039; An example SLURM script to run ExaBayes on PENZIAS is given below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name &amp;lt;name_of_job&amp;gt;&lt;br /&gt;
#SBATCH --nodes 1&lt;br /&gt;
#SBATCH --ntasks 2&lt;br /&gt;
&lt;br /&gt;
# Change to the directory from which the job was submitted&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
mpirun -np 2 exabayes &amp;lt;input_file&amp;gt; &amp;gt; output_file&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
More information about the application, along with sample workflows, is available on the ExaBayes web site:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://sco.h-its.org/exelixis/web/software/exabayes/manual/index.html#sec-11&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== F == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;FDPPDIV&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
FDPPDiv is a program for estimating divergence times on a fixed, rooted tree topology. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
FDPPDiv offers two alternative approaches to divergence time estimation. &lt;br /&gt;
The DPPDiv part refers to the Dirichlet Process Prior (DPP) model for divergence &lt;br /&gt;
time estimation, and the F prefix (for Fossil) refers to the new Fossil Birth-Death approach. &lt;br /&gt;
More information about our installation can be found here [[FDPPDIV]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== G == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GAMESS-US&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GAMESS is a program for ab initio molecular quantum chemistry.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Briefly, GAMESS can compute SCF wavefunctions ranging from RHF, ROHF, UHF, GVB, and MCSCF. Correlation corrections to these SCF wavefunctions include Configuration Interaction, second order perturbation Theory, and Coupled-Cluster approaches, as well as the Density Functional Theory approximation. Excited states can be computed by CI, EOM, or TD-DFT procedures. Nuclear gradients are available, for automatic geometry optimization, transition state searches, or reaction path following. Computation of the energy hessian permits prediction of vibrational frequencies, with IR or Raman intensities. Solvent effects may be modeled by the discrete Effective Fragment potentials, or continuum models such as the Polarizable Continuum Model. Numerous relativistic computations are available, including infinite order two component scalar corrections, with various spin-orbit coupling options. The Fragment Molecular Orbital method permits use of many of these sophisticated treatments to be used on very large systems, by dividing the computation into small fragments. Nuclear wavefunctions can also be computed, in VSCF, or with explicit treatment of nuclear orbitals by the NEO code. More information, including code, can be found here [[GAMESS-US]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GARLI&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GARLI is a program that performs phylogenetic inference using the maximum-likelihood criterion.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Several sequence types are supported, including nucleotide, amino acid and codon. Version 2.0 adds support for&lt;br /&gt;
partitioned models and morphology-like data types. It is usable on all operating systems, and is written and&lt;br /&gt;
maintained by Derrick Zwickl at the University of Texas at Austin.  Additional information can be found&lt;br /&gt;
on the GARLI Wiki here [https://www.nescent.org/wg_garli/Main_Page]. More information about our installation &lt;br /&gt;
can be found here [[GARLI]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GAUSS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GAUSS is an easy-to-use data analysis, mathematical, and statistical environment based on the powerful, fast, and efficient GAUSS Matrix Programming Language.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GAUSS is used to solve real-world problems and data analysis problems of exceptionally large &lt;br /&gt;
scale. GAUSS is currently available on ANDY. At the CUNY HPC Center&lt;br /&gt;
GAUSS is typically run in serial mode. (Note:  GAUSS should not be confused with the&lt;br /&gt;
computational chemistry application Gaussian.) More information about our installation can &lt;br /&gt;
be found here [[GAUSS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Gaussian09&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Gaussian09 is third-party, commercially licensed software from Gaussian, Inc. It is a set of programs for calculating electronic structure.  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Gaussian09 is available for general use only on ANDY. The Gaussian User Guide can be found at [http://www.gaussian.com]. More information about our installation can be found here [[GAUSSIAN09]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GMP&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
GMP is a library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and &lt;br /&gt;
floating-point numbers. There is no practical limit to the precision except the ones implied by the &lt;br /&gt;
available memory in the machine GMP runs on. GMP has a rich set of functions, and the functions have a &lt;br /&gt;
regular interface. The library is installed on PENZIAS.&lt;br /&gt;
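&lt;br /&gt;
As an illustrative sketch (the module name and source file name are assumptions for this example), a program using GMP might be compiled and linked as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Hypothetical module name; check &amp;quot;module avail&amp;quot; for the exact name&lt;br /&gt;
module load gmp&lt;br /&gt;
# my_prog.c is a placeholder for the user&#039;s source file&lt;br /&gt;
gcc my_prog.c -o my_prog -lgmp&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;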
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Gnuplot&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Gnuplot is a portable command-line driven graphing utility. It is installed on the following systems:&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
:* Karle under /usr/bin/gnuplot&lt;br /&gt;
:* Andy under /share/apps/gnuplot/default/bin/gnuplot&lt;br /&gt;
&lt;br /&gt;
Extensive documentation of gnuplot is available at the [http://www.gnuplot.info/  gnuplot&#039;s homepage].&lt;br /&gt;
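&lt;br /&gt;
For example, a minimal non-interactive sketch that writes a plot to a PNG file (the output file name is chosen here for illustration):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Render sin(x) to sine.png without opening an interactive gnuplot session&lt;br /&gt;
gnuplot -e &amp;quot;set terminal png; set output &#039;sine.png&#039;; plot sin(x)&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;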
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GenomePop2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GenomePop2 is a newer and specialized version of the older program GenomePop. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GenomePop2 is designed to manage SNPs under more flexible and useful settings that are controlled by the user.  &lt;br /&gt;
If you need models with more than 2 alleles, you should use the older GenomePop version of the program.  &lt;br /&gt;
&lt;br /&gt;
GenomePop2 allows the forward simulation of sequences of biallelic positions. As in the previous version, a number of evolutionary&lt;br /&gt;
and demographic settings are allowed. Several populations under any migration model can be implemented. Each population consists&lt;br /&gt;
of a number N of individuals.  Each individual is represented by one (haploids) or two (diploids) chromosomes with constant or variable&lt;br /&gt;
(hotspots) recombination between binary sites. The fitness model is multiplicative, with each derived allele having a multiplicative effect&lt;br /&gt;
of (1-s * h-E) on the global fitness value. By default E=0 and h=0.5 in diploids, but 1 in homozygotes or in haploids. Selective nucleotide&lt;br /&gt;
sites undergoing directional selection (positive or negative) in different populations can be defined. In addition, bottlenecks and/or&lt;br /&gt;
population expansion scenarios can be set up by the user for a desired number of generations. Several runs can be executed and&lt;br /&gt;
a sample of user-defined size is obtained for each run and population.  For more detail on how to use GenomePop2, please visit the&lt;br /&gt;
web site here [http://webs.uvigo.es/acraaj/GenomePop2.htm]. More information about our installation can be found here [[GENOMEPOP2]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GROMACS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GROMACS (Groningen Machine for Chemical Simulations)&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GROMACS is a full-featured suite of free software, licensed under the GNU&lt;br /&gt;
General Public License to perform molecular dynamics simulations -- in other words, to simulate the behavior of molecular&lt;br /&gt;
systems with hundreds to millions of particles using Newton&#039;s equations of motion.  It is primarily used for research on&lt;br /&gt;
proteins, lipids, and polymers, but can be applied to a wide variety of chemical and biological research questions.&lt;br /&gt;
&lt;br /&gt;
Details and submission scripts for production runs can be found at:&lt;br /&gt;
http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/gromacs&lt;br /&gt;
Please note that preparing a molecular system for simulation with the GROMACS tools cannot be done on the login node. Instead, users must either use their own workstations or the interactive or development queues.&lt;br /&gt;
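&lt;br /&gt;
As a sketch of that workflow (the partition name and module name below are placeholders that should be checked on the system), an interactive preparation session might look like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Request an interactive shell on a compute node (partition name is hypothetical)&lt;br /&gt;
srun --partition=development --ntasks=1 --pty /bin/bash&lt;br /&gt;
module load gromacs&lt;br /&gt;
# Prepare the system; input/output file names are placeholders&lt;br /&gt;
gmx pdb2gmx -f protein.pdb -o processed.gro -water spce&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;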
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GPAW&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It uses real-space uniform grids and multigrid methods, atom-centered basis-functions or&lt;br /&gt;
plane-waves. GPAW calculations are controlled through scripts written in the programming language &lt;br /&gt;
Python. GPAW relies on the Atomic Simulation Environment (ASE), which is a Python package&lt;br /&gt;
that helps to describe atoms. The ASE package also handles molecular dynamics, analysis, &lt;br /&gt;
visualization, geometry optimization and more. More information about our installation can &lt;br /&gt;
be found here [[GPAW]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== H ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Hapsembler&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Hapsembler is a haplotype-specific genome assembly toolkit that is designed for genomes that are rich in SNPs and other types of polymorphism. Hapsembler can be used to assemble reads from a variety of platforms including Illumina and Roche/454.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;  Hapsembler is currently installed on the APPEL system. In order to access the Hapsembler binaries, load the hapsembler module with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load hapsembler&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;HOOMD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
HOOMD performs general-purpose particle dynamics simulations, taking advantage of NVIDIA GPUs to attain a level of performance&lt;br /&gt;
equivalent to many processor cores on a fast cluster.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Unlike some other applications in the particle and molecular dynamics space, HOOMD developers have worked to implement &lt;br /&gt;
all of the code&#039;s computationally intensive kernels on the GPU, although currently only single node, single-GPU or &lt;br /&gt;
OpenMP-GPU runs are possible. There is no MPI-GPU or distributed parallel GPU version available at this time.&lt;br /&gt;
&lt;br /&gt;
HOOMD&#039;s object-oriented design patterns make it both versatile and expandable. Various types of potentials, integration methods&lt;br /&gt;
and file formats are currently supported, and more are added with each release. The code is available and open source, so anyone&lt;br /&gt;
can write a plugin or change the source to add additional functionality.  Simulations are configured and run using simple python&lt;br /&gt;
scripts, allowing complete control over the force field choice, integrator, all parameters, how many time steps are run, etc.&lt;br /&gt;
The scripting system is designed to be as simple as possible to the non-programmer.&lt;br /&gt;
&lt;br /&gt;
The HOOMD development effort is led by the Glotzer group at the University of Michigan, but many groups from different universities&lt;br /&gt;
have contributed code that is now part of the HOOMD main package, see the credits page for the full list. The HOOMD website and&lt;br /&gt;
documentation are available here [http://codeblue.umich.edu/hoomd-blue/about.html]. More information about our installation can be&lt;br /&gt;
found here [[HOOMD]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;HOPSPACK&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
HOPSPACK (Hybrid Optimization Parallel Search Package) is designed to help users solve a wide range of derivative-free optimization problems.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;These problems may be noisy, non-convex, or non-smooth.  The basic optimization problem addressed is to minimize an objective function f(x) of n unknowns subject to the constraints AI x ≥ bI, AE x = bE, cI(x) ≥ 0, cE(x) = 0, and l ≤ x ≤ u.&lt;br /&gt;
The first two constraints specify linear inequalities and equalities with coefficient matrices AI and AE. The next two constraints describe nonlinear inequalities and equalities captured in the functions cI(x) and cE(x). The final constraints denote lower and upper bounds on the variables. HOPSPACK allows variables to be continuous or integer-valued and has provisions for multi-objective optimization problems. In general, the functions f(x), cI(x), and cE(x) can be noisy and nonsmooth, although most algorithms perform best on deterministic functions with continuous derivatives.&lt;br /&gt;
&lt;br /&gt;
Users can design and implement their own solvers, either by writing new code or by building on existing solvers already in the framework. Because all solvers (called citizens) are members of the same global class, they can share assigned resources.   &lt;br /&gt;
The main features of the package are:&lt;br /&gt;
&lt;br /&gt;
:* Only function values are required for the optimization.&lt;br /&gt;
:* The user must provide a separate program that can evaluate the objective and nonlinear constraint functions at a given point. &lt;br /&gt;
:* A robust implementation of the Generating Set Search (GSS) solver is supplied, including the capability to handle linear constraints. &lt;br /&gt;
:* Multiple solvers can run simultaneously and are easily configured to share information.&lt;br /&gt;
:* Solvers may share a cache of computed function and constraint evaluations to eliminate duplicate work.&lt;br /&gt;
:* Solvers can initiate and control sub-problems.&lt;br /&gt;
More information about our installation can be found here [[HOPSPACK]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;HONDO PLUS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Hondo Plus is a versatile electronic structure code that combines work from&lt;br /&gt;
the original Hondo application developed by Harry King in the lab of Michel Dupuis&lt;br /&gt;
and John Rys, and that of numerous subsequent contributors. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It is currently distributed from the research lab of Dr. Donald Truhlar at the University &lt;br /&gt;
of Minnesota.  Part of the advantage of Hondo Plus is the availability of source&lt;br /&gt;
implementations of a wide variety of model chemistries developed over its lifetime&lt;br /&gt;
that researchers can adapt to their particular needs.  The license to use the code requires&lt;br /&gt;
a literature citation which is documented in the Hondo Plus 5.1 manual found&lt;br /&gt;
at:&lt;br /&gt;
&lt;br /&gt;
http://comp.chem.umn.edu/hondoplus/HONDOPLUS_Manual_v5.1.2007.2.17.pdf &lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[HONDO PLUS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;HUMAnN2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
HUMAnN is a pipeline for efficiently and accurately profiling the presence/absence and abundance of microbial pathways in a community from metagenomic or metatranscriptomic sequencing data (typically millions of short DNA/RNA reads). HUMAnN2 is the next generation of HUMAnN (HMP Unified Metabolic Analysis Network). Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/humann2&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== I ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;IMa2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The IMa2 application performs basic &#039;Isolation with Migration&#039; calculations using Bayesian inference and Markov &lt;br /&gt;
chain Monte Carlo methods. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The only major conceptual addition to IMa2 that makes it different from the&lt;br /&gt;
original IMa  program is that it can handle data from multiple populations. This requires that the user &lt;br /&gt;
specify a phylogenetic tree. Importantly, the tree must be rooted, and the sequence in time of internal&lt;br /&gt;
nodes must be known and specified. More information on the IMa2 and IMa can be found in the user&lt;br /&gt;
manual here [http://lifesci.rutgers.edu/%7Eheylab/ProgramsandData/Programs/IMa2/Using_IMa2_8_24_2011.pdf].&lt;br /&gt;
Information about our installation can be found here [[IMA2]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;I-TASSER&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
I-TASSER is a platform for protein structure and function predictions. 3D models are built based on multiple-threading alignments by LOMETS and iterative template fragment assembly simulations; function insights are derived by matching the 3D models with the BioLiP protein function database. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/itasser&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== J ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;JULIA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
Julia is a high-level, high-performance dynamic programming language for technical computing, with syntax that is familiar to users of other technical computing environments. Julia is installed on Penzias.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
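As a minimal sketch (the module name is an assumption; check &amp;quot;module avail&amp;quot; for the exact name), a Julia script can be run with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load julia&lt;br /&gt;
# myscript.jl is a placeholder for the user&#039;s script&lt;br /&gt;
julia myscript.jl&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;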
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== L ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;LAMARC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
LAMARC is a program which estimates population-genetic parameters such as population size, population growth rate,&lt;br /&gt;
recombination rate, and migration rates.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It approximates a summation over all possible genealogies that could explain&lt;br /&gt;
the observed sample, which may be sequence, SNP, microsatellite, or electrophoretic data.  LAMARC and its sister program&lt;br /&gt;
MIGRATE are successor programs to the older programs Coalesce, Fluctuate, and Recombine, which are no longer being&lt;br /&gt;
supported.  These programs are memory-intensive, but can run effectively on workstations. They are supported on a variety&lt;br /&gt;
of operating systems.  For more detail on LAMARC please visit the website here [http://evolution.genetics.washington.edu/lamarc/index.html],&lt;br /&gt;
read this paper [http://evolution.genetics.washington.edu/lamarc/download/bioinformatics2006-lamarc2.0.pdf], and look&lt;br /&gt;
at the documentation here [http://evolution.genetics.washington.edu/lamarc/documentation/index.html]. More information&lt;br /&gt;
about our installation can be found here [[LAMARC]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;LAMMPS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
LAMMPS is a classical molecular dynamics code that models an ensemble of particles in a liquid, solid, or gaseous state.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It can model atomic, polymeric, biological, metallic, granular, and coarse-grained systems using a variety of force fields and boundary conditions.  &lt;br /&gt;
LAMMPS runs efficiently on single-processor desktop or laptop machines, but is also designed for parallel computers, including clusters with and without GPUs. &lt;br /&gt;
It will run on any parallel machine that compiles C++ and supports the MPI message-passing library. This includes distributed- or shared-memory parallel &lt;br /&gt;
machines and Beowulf-style clusters. LAMMPS can model systems with only a few particles up to millions or billions. LAMMPS is a freely-available open-source &lt;br /&gt;
code, distributed under the terms of the GNU General Public License, which means you can use or modify the code however you wish.  LAMMPS is designed to be easy to &lt;br /&gt;
modify or extend with new capabilities, such as new force fields, atom types, boundary conditions, or diagnostics. A complete description of LAMMPS can be found &lt;br /&gt;
in its on-line manual here [http://lammps.sandia.gov/doc/Manual.html] or from the full PDF manual here [http://lammps.sandia.gov/doc/Manual.pdf]. Information&lt;br /&gt;
about our installation can be found here [[LAMMPS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;LS-DYNA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From its early development in the 1970s, LS-DYNA has evolved into a general purpose material&lt;br /&gt;
stress, collision, and crash analysis program with many built-in material and structural element&lt;br /&gt;
models. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In recent years, the code has also been adapted for both OpenMP and MPI parallel execution&lt;br /&gt;
on a variety of platforms.  The most recent version, LS-DYNA 7.1.2, is installed on &lt;br /&gt;
ANDY at the CUNY HPC Center under an academic license held by the City College of New York.&lt;br /&gt;
The use of this license to do work that is commercial in any way is prohibited.&lt;br /&gt;
&lt;br /&gt;
Details on LS-DYNA&#039;s use, input deck construction, and execution options can be found in the LS-DYNA&lt;br /&gt;
manual here [http://ftp.lstc.com/user/manuals/ls-dyna_971_manual_k_rev1.pdf]. All files related&lt;br /&gt;
to the HPC Center installation of version 971 (executables and example inputs) are located in:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
/share/apps/lsdyna/default/[bin,examples]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[LSDYNA]].&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== M ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MAGMA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
MAGMA is a library similar to LAPACK but for hybrid architectures. MAGMA provides implementations for CUDA, Intel Xeon Phi, and OpenCL. &lt;br /&gt;
On CUNY HPCC systems, MAGMA is installed in its CUDA variant only on Penzias.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MATHEMATICA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
“Mathematica” is a fully integrated technical computing system that combines fast, high-precision numerical and symbolic computation with data visualization and programming capabilities. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Mathematica is currently installed on the CUNY HPC Center&#039;s ANDY cluster (andy.csi.cuny.edu) and KARLE standalone server (karle.csi.cuny.edu). The basics of running Mathematica on CUNY HPC systems are presented here.  Additional information on how to use Mathematica can be found at http://www.wolfram.com/learningcenter/&lt;br /&gt;
&lt;br /&gt;
More information is available in this wiki, find it here [[MATHEMATICA]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MATLAB&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The MATLAB high-performance language for technical computing&lt;br /&gt;
integrates computation, visualization, and programming in an&lt;br /&gt;
easy-to-use environment where problems and solutions are expressed in&lt;br /&gt;
familiar mathematical notation.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Typical uses include:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Math and computation&lt;br /&gt;
&lt;br /&gt;
Algorithm development&lt;br /&gt;
&lt;br /&gt;
Data acquisition&lt;br /&gt;
&lt;br /&gt;
Modeling, simulation, and prototyping&lt;br /&gt;
&lt;br /&gt;
Data analysis, exploration, and visualization&lt;br /&gt;
&lt;br /&gt;
Scientific and engineering graphics&lt;br /&gt;
&lt;br /&gt;
Application development, including graphical user interface building&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[MATLAB]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MET (Model Evaluation Tools)&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MET was developed by the National Center for Atmospheric Research (NCAR) Developmental Testbed Center (DTC) through the generous support of the U.S. Air Force Weather Agency (AFWA) and the National Oceanic and Atmospheric Administration (NOAA).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;MET is designed to be a highly-configurable, state-of-the-art suite of verification tools. It was developed using output from the Weather Research and Forecasting (WRF) modeling system but may be applied to the output of other modeling systems as well.&lt;br /&gt;
&lt;br /&gt;
MET provides a variety of verification techniques, including:&lt;br /&gt;
&lt;br /&gt;
*Standard verification scores comparing gridded model data to point-based observations&lt;br /&gt;
*Standard verification scores comparing gridded model data to gridded observations&lt;br /&gt;
*Spatial verification methods comparing gridded model data to gridded observations using neighborhood, object-based, and intensity-scale decomposition approaches&lt;br /&gt;
*Probabilistic verification methods comparing gridded model data to point-based or gridded observations&lt;br /&gt;
&lt;br /&gt;
More information about use and set-up can be found here [[MET]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Migrate&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Migrate estimates population parameters, effective population sizes and migration rates&lt;br /&gt;
of n populations, using genetic data.  It uses a coalescent theory approach taking into&lt;br /&gt;
account the history of mutations and the uncertainty of the genealogy.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;The parameter estimates are obtained by either a maximum likelihood (ML) approach or Bayesian&lt;br /&gt;
inference (BI).  Migrate&#039;s output is presented in a text file and in a PDF file. The PDF file&lt;br /&gt;
will eventually contain all possible analyses, including histograms of posterior distributions.&lt;br /&gt;
More information about our installation can be found here [[MIGRATE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MPFR&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The MPFR library is a C library for multiple-precision floating-point computations with correct rounding. MPFR has been continuously supported by &lt;br /&gt;
INRIA, and the current main authors come from the Caramel and AriC project-teams at Loria (Nancy, France) and LIP (Lyon, France), respectively; see &lt;br /&gt;
more on the credit page.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
MPFR is based on the GMP multiple-precision library. The main goal of MPFR is to provide a library for multiple-precision &lt;br /&gt;
floating-point computation which is both efficient and has well-defined semantics. It copies the good ideas from the ANSI/IEEE-754 standard for &lt;br /&gt;
double-precision floating-point arithmetic (53-bit significand). The library is installed on PENZIAS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
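MPFR itself is a C library, but the core idea — computing at a user-chosen precision under a well-defined rounding rule — can be sketched with the Python standard-library decimal module (an analogy only, not MPFR or its API):

```python
from decimal import Decimal, getcontext, ROUND_HALF_EVEN

# Analogy to MPFR using only the Python standard library:
# compute at a chosen precision with a well-defined rounding rule.
ctx = getcontext()
ctx.prec = 50                    # 50 significant digits
ctx.rounding = ROUND_HALF_EVEN   # IEEE-754-style round-half-to-even

one_third = Decimal(1) / Decimal(3)
print(one_third)  # 0. followed by fifty 3s
```

MPFR plays the same role for binary floating point, with correct rounding guaranteed for every operation.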
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MRBAYES&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MrBayes is a program for the Bayesian estimation of phylogeny.  Bayesian inference of&lt;br /&gt;
phylogeny is based upon a quantity called the posterior probability distribution of trees,&lt;br /&gt;
which is the probability of a tree conditioned on certain observations.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The conditioning is&lt;br /&gt;
accomplished using Bayes&#039;s theorem. The posterior probability distribution of trees is&lt;br /&gt;
impossible to calculate analytically; instead, MrBayes uses a simulation technique called&lt;br /&gt;
Markov chain Monte Carlo (or MCMC) to approximate the posterior probabilities of trees.&lt;br /&gt;
More information about our installation can be found here [[MRBAYES]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
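The MCMC approximation that MrBayes relies on can be illustrated with a minimal random-walk Metropolis sampler (a generic sketch of the algorithm over a toy one-dimensional target, not MrBayes itself or its tree moves):

```python
import math
import random

def metropolis(log_density, start, steps, step_size=1.0, seed=42):
    # Random-walk Metropolis: propose a nearby state, accept with
    # probability min(1, p(new)/p(old)); the visited states then
    # approximate samples from the target distribution.
    rng = random.Random(seed)
    x = start
    samples = []
    for _ in range(steps):
        proposal = x + rng.gauss(0.0, step_size)
        if math.log(rng.random()) < log_density(proposal) - log_density(x):
            x = proposal
        samples.append(x)
    return samples

# Toy target: a standard normal (up to a constant), so the
# post-burn-in sample mean should be near 0 even when starting far away.
chain = metropolis(lambda x: -0.5 * x * x, start=5.0, steps=20000)
burned = chain[5000:]  # discard burn-in
print(sum(burned) / len(burned))
```

MrBayes applies the same accept/reject logic, but its states are phylogenetic trees plus model parameters rather than a single real number.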
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;msABC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
msABC is a program for simulating various neutral evolutionary demographic scenarios&lt;br /&gt;
based on the software ms (Hudson 2002). msABC extends ms, calculating a multitude of&lt;br /&gt;
summary statistics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Therefore, msABC is suitable for performing the sampling step of an&lt;br /&gt;
Approximate Bayesian Computation analysis (ABC), under various neutral demographic&lt;br /&gt;
models. The main advantages of msABC are (i) use of various prior distributions, such as&lt;br /&gt;
uniform, Gaussian, log-normal, and gamma, (ii) implementation of a multitude of summary statistics&lt;br /&gt;
for one or more populations, (iii) an efficient implementation, which allows the analysis of&lt;br /&gt;
hundreds of loci and chromosomes even on a single computer, and (iv) extended flexibility, such&lt;br /&gt;
as simulation of loci of variable size and simulation of missing data.&lt;br /&gt;
More information about our installation can be found here [[msABC]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MSMS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MSMS is a tool to generate sequence samples under both neutral models and single locus selection models.&lt;br /&gt;
MSMS permits the full range of demographic models provided by its relative MS (Hudson, 2002).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
In particular, it allows for multiple demes with arbitrary migration patterns, population growth and decay in each deme, and&lt;br /&gt;
for population splits and mergers. Selection (including dominance) can depend on the deme and also change&lt;br /&gt;
with time.&lt;br /&gt;
More information about our installation can be found here [[MSMS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== N ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;NAMD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NAMD is a parallel molecular dynamics code designed for high-performance simulation&lt;br /&gt;
of large biomolecular systems. [http://www.ks.uiuc.edu/Research/namd].&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The main server for molecular dynamics calculations is PENZIAS, which supports both GPU and non-GPU versions of NAMD.&lt;br /&gt;
However, the MPI-only (no GPU support) parallel version of NAMD is also installed on SALK and ANDY. &lt;br /&gt;
More information about our installation can be found here [[NAMD]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Network Simulator-2 (NS2)&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NS2 is a discrete event simulator targeted at networking research. NS2 provides&lt;br /&gt;
substantial support for simulation of TCP, routing, and multicast protocols over&lt;br /&gt;
wired and wireless (local and satellite) networks.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It is installed on BOB at the CUNY HPC Center. For more detailed information look [http://www.isi.edu/nsnam/ns/ here].&lt;br /&gt;
More information about our installation can be found here [[NS2]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;NWChem&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NWChem is an ab initio computational chemistry software package which also includes molecular dynamics (MM, MD) and coupled, quantum mechanical and molecular dynamics functionality (QM-MD).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
NWChem has been developed by the Molecular Sciences Software group at the Department of Energy&#039;s EMSL. The software is available on PENZIAS and ANDY.&lt;br /&gt;
More information about our installation can be found here [[NWChem]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== O == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Octopus&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Octopus is a pseudopotential real-space package aimed at the simulation of the electron-ion dynamics of one-, two-, and three-dimensional finite systems subject to time-dependent electromagnetic fields.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The program is based on time-dependent density-functional theory (TDDFT) in the Kohn-Sham scheme. All quantities are expanded in a regular mesh in real space, and the simulations are performed in real time. The program has been successfully used to calculate linear and non-linear absorption spectra, harmonic spectra, laser induced fragmentation, etc. of a variety of systems.&lt;br /&gt;
More information about our installation can be found here [[OCTOPUS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;OpenMM&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenMM is both a library and a stand-alone application which provides tools for modern molecular modeling simulation. As a library it can be hooked into any code, allowing that code to do molecular modeling with minimal extra coding.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Moreover, OpenMM has a strong emphasis on hardware acceleration via GPUs, thus providing not just a consistent API but also much greater performance than just about any other code available. OpenMM was developed as part of the Physics-Based Simulation project, led by Prof. Pande.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;OpenFOAM&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenFOAM is first and foremost a library which users may incorporate into their own code(s). OpenFOAM is installed on PENZIAS.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
More information about our installation can be found here [[OpenFOAM]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;OpenSees&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenSees, the Open System for Earthquake Engineering Simulation, is an object-oriented, open source software framework.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It allows users to create both serial and parallel finite element computer applications for simulating the response of structural and geotechnical systems subjected to earthquakes and other hazards. OpenSees is primarily written in C++ and uses several Fortran and C numerical libraries for linear equation solving, and material and element routines. The software is installed on PENZIAS.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ORCA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ORCA is an electronic structure program capable of carrying out geometry optimizations and of predicting a large number of spectroscopic parameters at different levels of theory.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Besides Hartree-Fock theory, density functional theory (DFT), and semiempirical methods, high-level ab initio quantum chemical methods, based on configuration interaction and coupled cluster methods, are included in ORCA to an increasing degree.&lt;br /&gt;
More information about our installation can be found here [[ORCA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== P == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ParGAP&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ParGAP is built on top of the GAP system. The latter is a system for computational discrete algebra, with particular emphasis on computational group theory. GAP provides a programming language, a library of thousands of functions implementing algebraic algorithms written in the GAP language, as well as large data libraries of algebraic objects.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The ParGAP (Parallel GAP) package itself provides a way of writing parallel programs using the GAP language. Former names of the package were ParGAP/MPI and GAP/MPI; the word MPI refers to Message Passing Interface, a well-known standard for parallelism. ParGAP is based on the MPI standard, and this distribution includes a subset implementation of MPI, to provide a portable layer with a high level interface to BSD sockets.&lt;br /&gt;
More information about our installation can be found here [[ParGAP]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;POPABC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PopABC is a computer package to estimate historical demographic parameters of closely related species/populations (e.g. population size, migration rate, mutation rate, recombination rate, splitting events) within an isolation-with-migration model.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The software performs coalescent simulation in the framework of approximate Bayesian computation (ABC, Beaumont et al, 2002). PopABC can also be used to perform Bayesian model choice to discriminate between different demographic scenarios. The program can be used either for research or for education and teaching purposes. Further details and a manual can be found at the POPABC website here [http://code.google.com/p/popabc]&lt;br /&gt;
More information about our installation can be found here [[POPABC]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;PHOENICS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PHOENICS is an integrated Computational Fluid Dynamics (CFD) package for the preparation, simulation, and visualization of&lt;br /&gt;
processes involving fluid flow, heat or mass transfer, chemical reaction, and/or combustion in engineering equipment, building&lt;br /&gt;
design, and the environment.  More detail is available at the CHAM website, here http://www.cham.co.uk. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Although we expect most users to pre- and post-process their jobs on office-local clients, the CUNY HPC Center has installed&lt;br /&gt;
the Unix version of the &#039;&#039;entire&#039;&#039; PHOENICS package on ANDY.   PHOENICS is installed in /share/apps/phoenics/default where all&lt;br /&gt;
the standard PHOENICS directories are located (d_allpro, d_earth, d_enviro, d_photo, d_priv1, d_satell, etc.).  Of particular interest&lt;br /&gt;
on ANDY is the MPI parallel version of the &#039;earth&#039; executable &#039;parexe&#039; which makes full use of the parallel processing power of the &lt;br /&gt;
ANDY cluster for larger individual jobs.  While the parallel scaling properties of PHOENICS jobs will vary depending on the job size,&lt;br /&gt;
processor type, and the cluster interconnect, larger workloads will generally scale and run efficiently on 8 to 32 processors,&lt;br /&gt;
while smaller problems will scale efficiently only up to about 4 processors.  More detail on parallel PHOENICS is available at&lt;br /&gt;
http://www.cham.co.uk/products/parallel.php.   Aside from the tightly coupled MPI parallelism of &#039;parexe&#039;, users can run multiple&lt;br /&gt;
instances of the non-parallel modules on ANDY (including the serial &#039;earexe&#039; module) when a parametric approach can be used&lt;br /&gt;
to solve their problems.&lt;br /&gt;
More information about our installation can be found here [[PHOENICS]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;PHRAP-PHRED&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PHRAP and PHRED are part of the DNA sequence analysis tool set that also includes the programs&lt;br /&gt;
CROSSMATCH and SWAT.  These tools are described in detail here [http://www.phrap.org/phredphrapconsed.html],&lt;br /&gt;
but a brief description of both, extracted from their manuals, follows.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
PHRED and PHRAP (along with CONSED) can be used for both small sequence assemblies and larger shotgun analyses. This makes the&lt;br /&gt;
tools a perhaps under-utilized set for smaller non-genomic groups.  Some variables may need to be adjusted,&lt;br /&gt;
particularly in CONSED, but researchers who have multiple sequences from a small locus can use the &lt;br /&gt;
suite, starting from their chromatogram files.  &lt;br /&gt;
More information about our installation can be found here [[PHRAP-PHRED]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;PyRAD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Reduced-representation genomic sequence data (e.g., RADseq, GBS, ddRAD) are commonly used to study population-level research questions and consequently most software packages for assembling or analyzing such data are designed for sequences with little variation across samples.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Phylogenetic analyses typically include species with deeper divergence times (more variable loci across samples) and thus a different approach to clustering and identifying orthologs will perform better. pyRAD is intended for use with any type of restriction-site associated DNA. It currently supports RAD, ddRAD, PE-ddRAD, GBS, PE-GBS, EzRAD, PE-EzRAD, 2B-RAD, nextRAD, and can be extended to other types.&lt;br /&gt;
More information about our installation can be found here [[PyRAD]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Python&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Python is a programming language that lets you work more quickly and integrate your systems more effectively. You can learn to use Python and see almost immediate gains in productivity and lower maintenance costs. [http://www.python.org/]&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
There are two supported versions installed on the ANDY system: &lt;br /&gt;
&lt;br /&gt;
* Python 3.1.3 located under /share/apps/python/3.1.3/bin&lt;br /&gt;
* Python 2.7.3 located under /share/apps/epd/7.3-2/bin&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[PYTHON]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
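With two interpreter versions installed side by side, it is worth confirming which one a job actually picked up. A minimal stdlib check (generic Python, not specific to the ANDY paths above):

```python
import sys

# Report which interpreter is running and its version; useful when
# several Python builds (e.g. a 2.x and a 3.x) are installed side by side.
print(sys.executable)        # full path to the running interpreter
print(sys.version_info[:3])  # (major, minor, micro) version tuple
```

If the printed path is not the installation you intended, adjust your PATH (or load the appropriate module) before submitting jobs.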
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Installing Python packages&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Users may install Python packages/modules in their own space.  Many packages available in Python repositories can be installed easily with the pip package manager, which is available in any of the Anaconda and Miniconda builds.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Users must remember that running pip without first loading a Python module will install packages against the system Python, which matches only the login node. The Python interpreter available on all nodes (after loading the module) is installed under /share/usr/compilers/python. Thus, when installing packages in user space, it is very important to follow the procedure outlined below. The example demonstrates how users can install the package &amp;quot;guppy&amp;quot; in their own space:&lt;br /&gt;
&lt;br /&gt;
For Python 2.7.13 in Anaconda build:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load python/2.7.13_anaconda&lt;br /&gt;
pip install guppy --user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For Python 3.6.0 in Anaconda build:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load python/3.6.0_anaconda&lt;br /&gt;
pip install guppy --user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For Python 2.7.13 in Miniconda 2:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load python/miniconda2&lt;br /&gt;
pip install guppy --user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For Python 3.6.0 in Miniconda 3:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load python/miniconda3&lt;br /&gt;
pip install guppy --user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check that the package is properly installed, type:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
pip list | grep guppy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
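Besides `pip list`, an install can be verified from Python itself; a stdlib sketch (the package name checked below is illustrative — substitute whatever you installed, e.g. guppy from the example above):

```python
import importlib.util

def is_installed(package_name):
    # True when the current interpreter can locate the package,
    # i.e. the same lookup an actual import would perform.
    return importlib.util.find_spec(package_name) is not None

# "json" ships with Python, so this prints True; substitute "guppy"
# (or any package installed with pip --user) to verify your install.
print(is_installed("json"))
```

This catches the login-node pitfall described above: run it inside a batch job to confirm the package is visible on the compute nodes, not just where pip ran.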
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== Q == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;QIIME&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
QIIME (pronounced &amp;quot;chime&amp;quot;) stands for Quantitative Insights Into Microbial Ecology. QIIME is a pipeline application that uses numerous third-party applications.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
QIIME takes users from their raw sequencing output through initial analyses such as OTU picking, taxonomic assignment, and construction of phylogenetic trees from representative sequences of OTUs, and through downstream statistical analysis, visualization, and production of publication-quality graphics.&lt;br /&gt;
More information about our installation can be found here [[QIIME]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== R == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;R&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
R is a free software environment for statistical computing and graphics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:15px;&amp;quot; &amp;gt;General Notes&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The R language has become a de facto standard among statisticians and is widely used for statistical software development and data analysis. R is available on the following HPCC servers: Karle, Penzias, Appel, and Andy. Karle is the only machine where R can be used without submitting jobs through SLURM; on all other systems users must submit their R jobs via the SLURM batch scheduler.&lt;br /&gt;
More information about our installation can be found here [[R]]&lt;br /&gt;
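&lt;br /&gt;
As an illustration, a minimal SLURM batch script for a serial R job might look like the following; the module name, resource requests, and script file name are hypothetical and should be adapted to the target system:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=r_test&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --mem=4G&lt;br /&gt;
&lt;br /&gt;
module load r&lt;br /&gt;
Rscript my_analysis.R&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The script would then be submitted with sbatch and monitored with squeue.&lt;br /&gt;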
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;RAXML&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Randomized Axelerated Maximum Likelihood (RAxML) is a program for sequential and parallel&lt;br /&gt;
maximum likelihood-based inference of large phylogenetic trees.  It is a descendant of fastDNAml,&lt;br /&gt;
which in turn was derived from Joe Felsenstein’s DNAml, part of the PHYLIP package.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
RAxML is installed at the CUNY HPC Center on ANDY, where multiple versions are available in both serial and MPI-parallel builds; the MPI-parallel build should be run on four or more cores. The MPI-parallel version of RAxML is also installed on PENZIAS. &lt;br /&gt;
More information about our installation can be found here [[RAXML]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== S == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;SAGE&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Sage can be used to study elementary and advanced, pure and applied mathematics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
This includes a huge range of mathematics, including basic algebra, calculus, elementary to very&lt;br /&gt;
advanced number theory, cryptography, numerical computation, commutative algebra, group&lt;br /&gt;
theory, combinatorics, graph theory, exact linear algebra and much more.&lt;br /&gt;
More information about our installation can be found here [[SAGE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;SAMTOOLS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
SAMTOOLS provide various utilities for manipulating alignments in the SAM format, including sorting,&lt;br /&gt;
merging, indexing and generating alignments in a per-position format.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
SAM (Sequence Alignment/Map) is a generic format for storing large nucleotide sequence alignments. It is compact and&lt;br /&gt;
aims to be a format that:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Is flexible enough to store all the alignment information generated by various alignment programs;&lt;br /&gt;
&lt;br /&gt;
Is simple enough to be easily generated by alignment programs or converted from existing formats;&lt;br /&gt;
&lt;br /&gt;
Allows most operations on the alignment to work without loading the whole alignment into memory;&lt;br /&gt;
&lt;br /&gt;
Allows the file to be indexed by genomic position to efficiently retrieve all reads aligning to a locus.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[SAMTOOLS]]&lt;br /&gt;
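&lt;br /&gt;
As an illustration of the utilities described above, a typical post-alignment sequence of commands is sketched below; the file names are hypothetical, and the -o form of sort assumes a recent samtools release:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
samtools sort -o aln.sorted.bam aln.bam&lt;br /&gt;
samtools index aln.sorted.bam&lt;br /&gt;
samtools flagstat aln.sorted.bam&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;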
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;SAS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
SAS (pronounced &amp;quot;sass&amp;quot;, originally Statistical Analysis System) is an integrated system of software products provided by SAS Institute Inc.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It enables the programmer to perform:&lt;br /&gt;
:* data entry, retrieval, management, and mining&lt;br /&gt;
:* report writing and graphics&lt;br /&gt;
:* statistical analysis&lt;br /&gt;
:* business planning, forecasting, and decision support&lt;br /&gt;
:* operations research and project management&lt;br /&gt;
:* quality improvement&lt;br /&gt;
:* applications development&lt;br /&gt;
:* data warehousing (extract, transform, load)&lt;br /&gt;
:* platform-independent and remote computing&lt;br /&gt;
More information about our installation can be found here [[SAS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Stata/MP&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Stata is a complete, integrated statistical package that provides tools for data analysis, data management, and graphics. Stata/MP takes advantage of multiprocessor computers. The CUNY HPC Center is licensed to use Stata on up to 8 cores. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Currently Stata/MP is available for users on Karle (karle.csi.cuny.edu). &lt;br /&gt;
More information about our installation can be found here [[STATA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Structurama&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Structurama is a program for inferring population structure from genetic data. The program assumes that the sampled loci&lt;br /&gt;
are in linkage equilibrium and that the allele frequencies for each population are drawn from a Dirichlet probability distribution. Two different models for population structure are implemented.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
First, Structurama offers the method of Pritchard et al. (2000), in which the number of populations is considered fixed. The program also allows the number of populations to be a random variable following a Dirichlet process prior (Pella and Masuda, 2006; Huelsenbeck and Andolfatto, 2007).&lt;br /&gt;
More information about our installation can be found here [[STRUCTURAMA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Structure&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The program Structure is a free software package for using multi-locus genotype data to investigate&lt;br /&gt;
population structure.  Its uses include inferring the presence of distinct populations, assigning individuals&lt;br /&gt;
to populations, studying hybrid zones, identifying migrants and admixed individuals, and estimating&lt;br /&gt;
population allele frequencies in situations where many individuals are migrants or admixed.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;It can be applied to most of the commonly used genetic markers, including SNPs, microsatellites, RFLPs, and AFLPs. More detailed information about Structure can be found at its web site [http://pritch.bsd.uchicago.edu/structure.html]. Structure is installed on ANDY at the CUNY HPC Center.  Structure is a serial program. &lt;br /&gt;
More information about our installation can be found here [[STRUCTURE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== T == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Thrust Library (CUDA)&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Thrust is a C++ template library for CUDA based on the Standard Template Library (STL). Thrust allows you&lt;br /&gt;
to implement high performance parallel applications with minimal programming effort through a high-level&lt;br /&gt;
interface that is fully interoperable with CUDA C.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Thrust has been integrated into the default CUDA distribution. The HPC Center currently runs CUDA as the default on PENZIAS, which includes the&lt;br /&gt;
Thrust library. More information about our installation can be found here [[THRUST]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;TOPHAT&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
TOPHAT is a fast splice junction mapper for RNA-Seq reads. It aligns RNA-Seq reads to mammalian-sized&lt;br /&gt;
genomes using the ultra high-throughput short read aligner Bowtie, and then analyzes the mapping results&lt;br /&gt;
to identify splice junctions between exons.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
TOPHAT is part of a sequence alignment and analysis tool chain developed at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics and Computational Biology.&lt;br /&gt;
More information about our installation can be found here [[TOPHAT]]&lt;br /&gt;
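&lt;br /&gt;
A typical paired-end run is sketched below; the index prefix and read file names are hypothetical:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
tophat -o tophat_out -p 4 genome_index reads_1.fastq reads_2.fastq&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here -o names the output directory and -p sets the number of threads; the Bowtie index must be built beforehand with bowtie-build.&lt;br /&gt;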
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Trinity&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Trinity, developed at the Broad Institute and the Hebrew University of Jerusalem, represents a novel method for the efficient and robust de novo reconstruction of transcriptomes from RNA-seq data.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Trinity combines three independent software modules: Inchworm, Chrysalis, and Butterfly, applied sequentially to process large volumes of RNA-seq reads. Trinity partitions the sequence data into many individual de Bruijn graphs, each representing the transcriptional complexity at a given gene or locus, and then processes each graph independently to extract full-length splicing isoforms and to tease apart transcripts derived from paralogous genes.&lt;br /&gt;
More information about our installation can be found here [[TRINITY]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== U == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;USEARCH&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
USEARCH is a unique sequence analysis tool with thousands of users worldwide.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
USEARCH offers search and clustering algorithms that are often orders of magnitude faster than BLAST. &lt;br /&gt;
More information about our installation can be found here [[USEARCH]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== V == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;VELVET&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
Velvet is a set of algorithms for &#039;&#039;de novo&#039;&#039; short read assembly using de Bruijn graphs. It was developed at the European Bioinformatics Institute, Cambridge, UK. More information about our installation can be found here [[VELVET]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;VSEARCH&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
VSEARCH is an open-source alternative to USEARCH.  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
VSEARCH stands for vectorized search, as the tool takes advantage of parallelism in the form of SIMD vectorization as well as multiple threads to perform accurate alignments at high speed. VSEARCH uses an optimal global aligner (full dynamic programming Needleman-Wunsch), in contrast to USEARCH which by default uses a heuristic seed and extend aligner. This usually results in more accurate alignments and overall improved sensitivity (recall) with VSEARCH, especially for alignments with gaps. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Additional details on VSEARCH can be found at: [https://github.com/torognes/vsearch this link]&lt;br /&gt;
&lt;br /&gt;
VSEARCH is installed on the Penzias HPC cluster. To start using VSEARCH, first load the corresponding module:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load vsearch  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
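&lt;br /&gt;
With the module loaded, a typical clustering run looks like the following sketch; the identity threshold and file names are illustrative:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vsearch --cluster_fast reads.fasta --id 0.97 --centroids centroids.fasta --threads 4&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;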
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;VMD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
VMD is a molecular visualization program for displaying, animating, and analyzing large biomolecular systems using 3-D graphics and built-in scripting.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It was developed by The Theoretical and Computational Biophysics Group at the University of Illinois. It is documented on the [http://www.ks.uiuc.edu/Research/vmd/ TCB&#039;s homepage].&lt;br /&gt;
&lt;br /&gt;
VMD is installed on Karle. To use it from the command-line interface, log in to Karle as usual and start VMD by typing &amp;quot;vmd&amp;quot; followed by Return, or alternatively use the full path: &lt;br /&gt;
&amp;quot;/share/apps/vmd/default/bin/vmd&amp;quot;&lt;br /&gt;
&lt;br /&gt;
In order to use VMD in GUI mode, log in to Karle with the -X option (see [http://wiki.csi.cuny.edu/cunyhpc/index.php/Main_Page#X11_Forwarding_or_Tunneling this article] &lt;br /&gt;
for details) and start VMD as described above.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== W == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;WRF&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The Weather Research and Forecasting (WRF) model is a specific computer program with dual use for both weather&lt;br /&gt;
forecasting and weather research.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It was created through a partnership that includes the National Oceanic and Atmospheric&lt;br /&gt;
Administration (NOAA), the National Center for Atmospheric Research (NCAR), and more than 150 other organizations&lt;br /&gt;
and universities in the United States and abroad. WRF is the latest numerical model and application to be adopted by NOAA&#039;s&lt;br /&gt;
National Weather Service as well as the U.S. military and private meteorological services. It is also being adopted by&lt;br /&gt;
government and private meteorological services worldwide. More information about our installation can be found here [[WRF]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== X == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Xmgrace&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Grace is a WYSIWYG 2D plotting tool for the X Window System and M*tif. Xmgrace is developed at the Plasma Laboratory, Weizmann Institute of Science. More information about its capabilities can be found at http://plasma-gate.weizmann.ac.il/Grace/&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Grace is installed on Karle. To use it from the command-line interface, log in to Karle as usual and start Grace by typing &amp;quot;xmgrace&amp;quot; followed by Return, or alternatively use the full path: &amp;quot;/share/apps/xmgrace/default/grace/bin/xmgrace&amp;quot;&lt;br /&gt;
In order to use Grace in GUI mode, log in to Karle with the -X option (see [http://wiki.csi.cuny.edu/cunyhpc/index.php/Main_Page#X11_Forwarding_or_Tunneling this article] &lt;br /&gt;
for details) and start Xmgrace as described above.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Applications&amp;diff=162</id>
		<title>Applications</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Applications&amp;diff=162"/>
		<updated>2022-11-07T18:39:30Z</updated>

		<summary type="html">&lt;p&gt;James: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div class=&amp;quot;noautonum&amp;quot;&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Application&lt;br /&gt;
!Installed Version&lt;br /&gt;
!Current Version&lt;br /&gt;
!Dependencies&lt;br /&gt;
|-&lt;br /&gt;
|ABINIT&lt;br /&gt;
|8.2.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ASE&lt;br /&gt;
|3.18.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|G-PhoCS&lt;br /&gt;
|1.3&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|GMP&lt;br /&gt;
|6.1.2-GCCcore-6.4.0/ 7.3.0/ 8.2.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|GPAW&lt;br /&gt;
|19.8.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|Gerris&lt;br /&gt;
|20131206&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|HDF5&lt;br /&gt;
|1.8.17/1.10.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|LAME&lt;br /&gt;
|3.100&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|XML-Parser&lt;br /&gt;
|2.44_01&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|abyss&lt;br /&gt;
|1.3.7 / 1.5.7&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|adcirc&lt;br /&gt;
|50_99_07&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|adda&lt;br /&gt;
|1.2.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|anvio&lt;br /&gt;
|2.0.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|armadillo&lt;br /&gt;
|9.2.7 / 9.200.7&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|arpack&lt;br /&gt;
|3.1.5&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|augustus&lt;br /&gt;
|3.2.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|autodock&lt;br /&gt;
|4.2.6&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|autodock_vina&lt;br /&gt;
|1.1.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bamm&lt;br /&gt;
|2.3.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bamova&lt;br /&gt;
|1.02&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bamtools&lt;br /&gt;
|2.30 / 2.5.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|basilisk&lt;br /&gt;
|v2019&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bayescan&lt;br /&gt;
|2.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|beast&lt;br /&gt;
|1.8.4 / 2.4.6&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|beast2&lt;br /&gt;
|2.6.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bedops&lt;br /&gt;
|2.4.40&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bedtools&lt;br /&gt;
|2.30.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bigwig&lt;br /&gt;
|011921&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|biobwa&lt;br /&gt;
|0.7.17&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bioperl&lt;br /&gt;
|1.6.923&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|blast&lt;br /&gt;
|2.3.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bowtie2&lt;br /&gt;
|2.2.6&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bpp&lt;br /&gt;
|4.4.0 / 4.4.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cblas&lt;br /&gt;
|1.20.11&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cmaq&lt;br /&gt;
|5.3.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cmdstan&lt;br /&gt;
|2.21.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cp2k&lt;br /&gt;
|2.5.1 / 3.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cryoSPARC&lt;br /&gt;
|2.11&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|diamond&lt;br /&gt;
|0.7.9&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|doxygen&lt;br /&gt;
|2014&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|dualSP&lt;br /&gt;
|4.2 / 4.3_beta&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|eautils&lt;br /&gt;
|02072017&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|eclipse_ptp&lt;br /&gt;
|8.1.2 / 9.0&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|eigen&lt;br /&gt;
|3.2.8&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|emacs&lt;br /&gt;
|25.1&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|exabayes&lt;br /&gt;
|1.5&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|examl&lt;br /&gt;
|3.0.17&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|fdppdiv&lt;br /&gt;
|20140728&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|fds_smv&lt;br /&gt;
|6.1.11&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ferret&lt;br /&gt;
|6.96&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|freetype&lt;br /&gt;
|2.5.2&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|fsplit&lt;br /&gt;
|092214&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|ga&lt;br /&gt;
|5.3&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|gamess-us&lt;br /&gt;
|4.14.14&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|gamma&lt;br /&gt;
|20111212&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|gap&lt;br /&gt;
|4.6.5 / 4.7.5&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
__NOTOC__&amp;lt;/div&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;This is an index of available applications sorted by their academic relevance, as well as alphabetically.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
For information about using modules to run your applications go to [[Using Modules To Run Your Applications]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== Computational Physics and Computational Chemistry == &lt;br /&gt;
Applications in this section use classical mechanics, quantum mechanics, and thermodynamics, and are applied in simulation studies of the fundamental properties of atoms, molecules, and chemical reactions.  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;AMBER (Assisted Model Building with Energy Refinement)&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
Amber is the collective name for a suite of programs for classical bio-molecular simulations. &lt;br /&gt;
The name &amp;quot;Amber&amp;quot; also denotes the family of potentials (force fields) used with the Amber &lt;br /&gt;
software. Here we discuss only the simulation packages, not the force fields or the free tools&lt;br /&gt;
available via the AmberTools package. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/amber&lt;br /&gt;
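&lt;br /&gt;
As an illustration, a production MD run with the MPI build of pmemd is usually launched along these lines; the core count and input/output file names are hypothetical:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mpirun -np 8 pmemd.MPI -O -i md.in -o md.out -p system.prmtop -c system.inpcrd -r md.rst&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here -O allows output files to be overwritten, -i names the control input, and -p/-c supply the topology and starting coordinates.&lt;br /&gt;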
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;AUTODOCK&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
AutoDock is a suite of automated docking tools.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt; It is designed to predict how small molecules, such as substrates or drug candidates, bind to a receptor of known 3D structure.  AutoDock actually consists of two main programs: &#039;&#039;autodock&#039;&#039; itself performs the docking of the ligand to a set of grids describing the target protein; &#039;&#039;autogrid&#039;&#039; pre-calculates these grids. More information about the software may be found at the AutoDock web page [http://autodock.scripps.edu/]. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/autodock&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;CP2K&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CP2K is a program to perform atomistic and molecular simulations of solid state, liquid, molecular, and biological&lt;br /&gt;
systems.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It provides a general framework for different methods such as e.g., density functional theory (DFT) using&lt;br /&gt;
a mixed Gaussian and plane waves approach (GPW) and classical pair and many-body potentials. CP2K provides&lt;br /&gt;
state-of-the-art methods for efficient and accurate atomistic simulations. More information about our installation &lt;br /&gt;
can be found here [[CP2K]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;DL_POLY&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
DL_POLY is a general purpose molecular dynamics simulation package developed at Daresbury Laboratory by W. Smith, T.R. Forester and I.T. Todorov. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Both serial and parallel versions are available. The original package was developed by the Molecular Simulation Group (now part of the Computational Chemistry Group, MSG) at Daresbury Laboratory under the auspices of the Engineering and Physical Sciences Research Council (EPSRC) for the EPSRC&#039;s Collaborative Computational Project for the Computer Simulation of Condensed Phases (CCP5). Later developments were also supported by the Natural Environment Research Council through the eMinerals project. The package is the property of the Central Laboratory of the Research Councils, UK. More information about our installation and use can be found here [[DL_POLY]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GAMESS-US&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GAMESS is a program for ab initio molecular quantum chemistry.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Briefly, GAMESS can compute SCF wavefunctions including RHF, ROHF, UHF, GVB, and MCSCF. Correlation corrections to these SCF wavefunctions include configuration interaction, second-order perturbation theory, and coupled-cluster approaches, as well as the density functional theory approximation. Excited states can be computed by CI, EOM, or TD-DFT procedures. Nuclear gradients are available for automatic geometry optimization, transition-state searches, or reaction-path following. Computation of the energy hessian permits prediction of vibrational frequencies, with IR or Raman intensities. Solvent effects may be modeled by discrete Effective Fragment potentials or continuum models such as the Polarizable Continuum Model. Numerous relativistic computations are available, including infinite-order two-component scalar corrections with various spin-orbit coupling options. The Fragment Molecular Orbital method permits many of these sophisticated treatments to be used on very large systems by dividing the computation into small fragments. Nuclear wavefunctions can also be computed, in VSCF or with explicit treatment of nuclear orbitals by the NEO code. More information, including code, can be found here [[GAMESS-US]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Gaussian09&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Gaussian09 is third-party, commercially licensed software from Gaussian, Inc. It is a set of programs for calculating electronic structure.  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Gaussian09 is available for general use only on ANDY. The Gaussian User Guide can be found at [http://www.gaussian.com]. More information about our installation can be found here [[GAUSSIAN09]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GPAW&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It uses real-space uniform grids with multigrid methods, atom-centered basis functions, or&lt;br /&gt;
plane waves. GPAW calculations are controlled through scripts written in the Python &lt;br /&gt;
programming language. GPAW relies on the Atomic Simulation Environment (ASE), a Python package&lt;br /&gt;
for setting up and describing atoms. The ASE package also handles molecular dynamics, analysis, &lt;br /&gt;
visualization, geometry optimization and more. More information about our installation can &lt;br /&gt;
be found here [[GPAW]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GROMACS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GROMACS (Groningen Machine for Chemical Simulations)&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GROMACS is a full-featured suite of free software, licensed under the GNU&lt;br /&gt;
General Public License, for performing molecular dynamics simulations -- in other words, for simulating the behavior of molecular&lt;br /&gt;
systems with hundreds to millions of particles using Newton&#039;s equations of motion.  It is primarily used for research on&lt;br /&gt;
proteins, lipids, and polymers, but can be applied to a wide variety of chemical and biological research questions.&lt;br /&gt;
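As a sketch of what such a simulation does at its core, the following illustrates the velocity-Verlet integration scheme that GROMACS and most MD engines are built around, applied to a single particle in a harmonic potential. This is illustrative Python only, not GROMACS code, and the function names are invented:

```python
# Illustrative velocity-Verlet integrator (the scheme most MD engines,
# GROMACS included, are built around), applied to one particle in a
# harmonic potential U(x) = 0.5*k*x^2.

def force(x, k=1.0):
    return -k * x                        # F = -dU/dx

def velocity_verlet(x, v, dt, steps, m=1.0, k=1.0):
    f = force(x, k)
    for _ in range(steps):
        v_half = v + 0.5 * dt * f / m    # half-step velocity update
        x = x + dt * v_half              # full-step position update
        f = force(x, k)                  # force at the new position
        v = v_half + 0.5 * dt * f / m    # second half-step velocity update
    return x, v

def energy(x, v, m=1.0, k=1.0):
    return 0.5 * m * v * v + 0.5 * k * x * x

x0, v0 = 1.0, 0.0
x1, v1 = velocity_verlet(x0, v0, dt=0.01, steps=10000)
# Velocity Verlet is symplectic, so total energy stays near its start value.
print(abs(energy(x1, v1) - energy(x0, v0)) < 1e-3)   # → True
```

Production codes apply the same two half-kicks and drift per particle, with far more elaborate force fields in place of the toy spring force.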
&lt;br /&gt;
Details and submission scripts for production runs can be found at:&lt;br /&gt;
http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/gromacs&lt;br /&gt;
Please note that preparing a molecular system for simulation with the GROMACS tools cannot be done on the login node. Instead, users must either use their own workstations or use the interactive or development queues.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;HONDO PLUS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Hondo Plus is a versatile electronic structure code that combines work from&lt;br /&gt;
the original Hondo application, developed by Harry King in the lab of Michel Dupuis&lt;br /&gt;
and John Rys, with that of numerous subsequent contributors. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It is currently distributed from the research lab of Dr. Donald Truhlar at the University &lt;br /&gt;
of Minnesota.  Part of the advantage of Hondo Plus is the availability of source&lt;br /&gt;
implementations of a wide variety of model chemistries developed over its lifetime&lt;br /&gt;
that researchers can adapt to their particular needs.  The license to use the code requires&lt;br /&gt;
a literature citation, which is documented in the Hondo Plus 5.1 manual found&lt;br /&gt;
at:&lt;br /&gt;
&lt;br /&gt;
http://comp.chem.umn.edu/hondoplus/HONDOPLUS_Manual_v5.1.2007.2.17.pdf &lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[HONDO PLUS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;HOOMD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
HOOMD performs general-purpose particle dynamics simulations, taking advantage of NVIDIA GPUs to attain a level of performance&lt;br /&gt;
equivalent to many processor cores on a fast cluster.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Unlike some other applications in the particle and molecular dynamics space, the HOOMD developers have worked to implement &lt;br /&gt;
all of the code&#039;s computationally intensive kernels on the GPU, although currently only single-node, single-GPU or &lt;br /&gt;
OpenMP-GPU runs are possible. There is no MPI-GPU or distributed parallel GPU version available at this time.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;LAMMPS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
LAMMPS is a classical molecular dynamics code that models an ensemble of particles in a liquid, solid, or gaseous state.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It can model atomic, polymeric, biological, metallic, granular, and coarse-grained systems using a variety of force fields and boundary conditions.  &lt;br /&gt;
LAMMPS runs efficiently on single-processor desktop or laptop machines, but is also designed for parallel computers, including clusters with and without GPUs. &lt;br /&gt;
It will run on any parallel machine that compiles C++ and supports the MPI message-passing library. This includes distributed- or shared-memory parallel &lt;br /&gt;
machines and Beowulf-style clusters. LAMMPS can model systems with only a few particles up to millions or billions. LAMMPS is a freely available open-source &lt;br /&gt;
code, distributed under the terms of the GNU General Public License, which permits free use and modification of the code.  LAMMPS is designed to be easy to &lt;br /&gt;
modify or extend with new capabilities, such as new force fields, atom types, boundary conditions, or diagnostics. A complete description of LAMMPS can be found &lt;br /&gt;
in its on-line manual here [http://lammps.sandia.gov/doc/Manual.html] or from the full PDF manual here [http://lammps.sandia.gov/doc/Manual.pdf]. Information&lt;br /&gt;
about our installation can be found here [[LAMMPS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;NAMD&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NAMD is a parallel molecular dynamics code designed for high-performance simulation&lt;br /&gt;
of large biomolecular systems. [http://www.ks.uiuc.edu/Research/namd].&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The main server for molecular dynamics calculations is PENZIAS, which supports both GPU and non-GPU versions of NAMD.&lt;br /&gt;
However, MPI-only (no GPU support) parallel versions of NAMD are also installed on SALK and ANDY. &lt;br /&gt;
More information about our installation can be found here [[NAMD]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;NWChem&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NWChem is an ab initio computational chemistry software package which also includes molecular dynamics (MM, MD) and coupled quantum-mechanical/molecular-dynamics (QM-MD) functionality.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
NWChem has been developed by the Molecular Sciences Software group at the Department of Energy&#039;s EMSL. The software is available on PENZIAS and ANDY.&lt;br /&gt;
More information about our installation can be found here [[NWChem]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Octopus&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Octopus is a pseudopotential real-space package aimed at the simulation of the electron-ion dynamics of one-, two-, and three-dimensional finite systems subject to time-dependent electromagnetic fields.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The program is based on time-dependent density-functional theory (TDDFT) in the Kohn-Sham scheme. All quantities are expanded in a regular mesh in real space, and the simulations are performed in real time. The program has been successfully used to calculate linear and non-linear absorption spectra, harmonic spectra, laser induced fragmentation, etc. of a variety of systems.&lt;br /&gt;
More information about our installation can be found here [[OCTOPUS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;OpenMM&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenMM is both a library and a stand-alone application which provides tools for modern molecular modeling and simulation. As a library it can be hooked into any code, allowing that code to do molecular modeling with minimal extra coding.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Moreover, OpenMM places a strong emphasis on hardware acceleration via the GPU, providing not just a consistent API but also performance well beyond that of most comparable codes. OpenMM was developed as part of the Physics-Based Simulation project, led by Prof. Pande.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ORCA&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ORCA is an electronic structure program capable of carrying out geometry optimizations and of predicting a large number of spectroscopic parameters at different levels of theory.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Besides Hartree-Fock theory, density functional theory (DFT) and semiempirical methods, high-level ab initio quantum chemical methods, based on configuration interaction and coupled cluster approaches, are included in ORCA to an increasing degree.&lt;br /&gt;
More information about our installation can be found here [[ORCA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;VMD&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
VMD is a molecular visualization program for displaying, animating, and analyzing large biomolecular systems using 3-D graphics and built-in scripting.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It was developed by The Theoretical and Computational Biophysics Group at the University of Illinois. It is documented on the [http://www.ks.uiuc.edu/Research/vmd/ TCB&#039;s homepage].&lt;br /&gt;
&lt;br /&gt;
VMD is installed on Karle. To use it from the command line, log in to Karle as usual and start VMD by typing &amp;quot;vmd&amp;quot; followed by Return, or alternatively use the full path: &lt;br /&gt;
&amp;quot;/share/apps/vmd/default/bin/vmd&amp;quot;&lt;br /&gt;
&lt;br /&gt;
In order to use VMD in GUI mode, log in to Karle with the -X option (see [http://wiki.csi.cuny.edu/cunyhpc/index.php/Main_Page#X11_Forwarding_or_Tunneling this article] &lt;br /&gt;
for details) and start VMD as described above.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Computational Biology == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ANVIO&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
Anvio is an analysis and visualization platform for ‘omics data. Anvio allows various types of workflows to be &lt;br /&gt;
established. [[ANVIO]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BAMOVA&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
Bamova is a package for genetic analysis of a wide range of organisms on the basis of &lt;br /&gt;
next-generation sequence data. The software implements Bayesian Analysis of Molecular Variance and &lt;br /&gt;
different likelihood models for three different types of molecular data &lt;br /&gt;
(including two models for high throughput sequence data). For more detail on BAMOVA please visit the BAMOVA web site [http://www.uwyo.edu/buerkle/software/bamova] and manual &lt;br /&gt;
here [http://www.uwyo.edu/buerkle/software/bamova/bamova_manual_1.0.pdf]. Further information can also be found here [[BAMOVA]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BAYESCAN&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BAYESCAN is a population genomics software package.  It identifies outlier loci and is applicable &lt;br /&gt;
to both dominant and codominant data. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;BayeScan aims to identify candidate loci under natural selection from &lt;br /&gt;
genetic data, using differences in allele frequencies between populations.  BayeScan is &lt;br /&gt;
based on the multinomial-Dirichlet model.  One of the scenarios covered consists of an&lt;br /&gt;
island model in which subpopulation allele frequencies are correlated through a common &lt;br /&gt;
migrant gene pool from which they differ in varying degrees.  The difference in allele frequency &lt;br /&gt;
between this common gene pool and each subpopulation is measured by a subpopulation-specific FST coefficient.  This formulation can therefore accommodate realistic ecological scenarios &lt;br /&gt;
where the effective size and the immigration rate may differ among subpopulations.&lt;br /&gt;
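As a toy numerical illustration of such a coefficient (not BayeScan's actual Bayesian estimator), a Wright-style per-subpopulation FST can be computed directly from allele frequencies; the frequencies below are invented:

```python
# Toy Wright-style FST: how far each subpopulation's allele frequency
# sits from the pooled "migrant pool" frequency, scaled by the pooled
# heterozygosity.  Illustrative only -- BayeScan estimates its FST
# coefficients within a Bayesian multinomial-Dirichlet model, not with
# this direct formula.

def fst_per_subpop(freqs):
    p_bar = sum(freqs) / len(freqs)      # common migrant-pool frequency
    het = p_bar * (1.0 - p_bar)          # expected heterozygosity of pool
    return [(p - p_bar) ** 2 / het for p in freqs]

freqs = [0.10, 0.50, 0.90]               # allele frequency in three demes
print([round(f, 3) for f in fst_per_subpop(freqs)])   # → [0.64, 0.0, 0.64]
```

The two demes far from the pooled frequency get large coefficients, while the deme sitting at the pooled frequency gets zero, which is the intuition behind flagging outlier loci.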
&lt;br /&gt;
More detailed information on Bayescan can be found at the web site here [http://cmpg.unibe.ch/software/bayescan/index.html]&lt;br /&gt;
and in the manual here [http://cmpg.unibe.ch/software/bayescan/files/BayeScan2.1_manual.pdf]. More information about our &lt;br /&gt;
installation can be found here [[BAYESCAN]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BEST&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BEST is an application for estimating gene trees and the species tree from multilocus sequences.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The program uses information from multiple gene trees and performs a Bayesian analysis to estimate the &lt;br /&gt;
topology of the species tree, divergence times and population sizes.  &lt;br /&gt;
&lt;br /&gt;
It provides a new approach for estimating the mutation-rate-based&lt;br /&gt;
phylogenetic relationships among species.  Its method accounts for deep coalescence,&lt;br /&gt;
but not for other complicating issues such as horizontal transfer or gene duplication. The&lt;br /&gt;
program works in conjunction with the popular Bayesian phylogenetics package MrBayes&lt;br /&gt;
(Ronquist and Huelsenbeck, Bioinformatics, 2003).  BEST&#039;s parameters are defined using&lt;br /&gt;
the &#039;prset&#039; command from MrBayes.  Details on BEST&#039;s capabilities and options are available&lt;br /&gt;
at the BEST web site here [http://www.stat.osu.edu/~dkp/BEST/introduction]. More information&lt;br /&gt;
about our installation is available here [[BEST]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BEAST&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BEAST is a powerful and flexible evolutionary analysis package for molecular sequence variation. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The package implements a family of Markov chain Monte Carlo (MCMC) algorithms for Bayesian phylogenetic inference, divergence time dating, coalescent analysis, phylogeography and related molecular evolutionary analyses. It is a cross-platform Java program for Bayesian MCMC analysis of molecular sequences. It is entirely oriented toward rooted, time-measured phylogenies inferred using strict or relaxed molecular clock models. It can be used as a method of reconstructing phylogenies, but is also a framework for testing evolutionary hypotheses without conditioning on a single tree topology.  BEAST uses MCMC to average over tree space, so that each tree is weighted in proportion to its posterior probability. The distribution includes a simple-to-use interface program called &#039;BEAUti&#039; for setting up standard analyses and a suite of programs for analysing the results. For more detail on BEAST (and BEAUti) please visit the BEAST web site [http://beast.bio.ed.ac.uk/Main_Page]. More information about our installation can be found here [http://wiki.csi.cuny.edu/cunyhpc/index.php/Template:BEAST BEAST].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BOWTIE2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BOWTIE2 is an ultrafast and memory-efficient tool for aligning sequencing reads to long reference sequences. It is particularly good at aligning reads of about 50 up to 100s or 1,000s of characters, and particularly good at aligning to relatively long (e.g. mammalian) genomes.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
BOWTIE2 indexes the genome with an FM Index to keep its memory&lt;br /&gt;
footprint small: for the human genome, its memory footprint is typically around 3.2 GB. BOWTIE2 supports gapped,&lt;br /&gt;
local, and paired-end alignment modes. BOWTIE2 is part of a sequence alignment and analysis tool chain developed&lt;br /&gt;
at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics&lt;br /&gt;
and Computational Biology.  The other tools in this collection -- CUFFLINKS, SAMTOOLS, and TOPHAT -- are also installed at&lt;br /&gt;
the CUNY HPC Center. Additional information can be found at the BOWTIE2 home page here [http://bowtie-bio.sourceforge.net/bowtie2/index.shtml].&lt;br /&gt;
Information about our installation can be found here [[BOWTIE2]].&lt;br /&gt;
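To illustrate the FM-index idea mentioned above, here is a minimal Python sketch of backward search over a Burrows-Wheeler transform; BOWTIE2 itself adds sampled rank structures, compression, and inexact matching on top of this same principle, and none of the code below comes from BOWTIE2:

```python
# Minimal FM-index sketch: count exact occurrences of a pattern using
# only the Burrows-Wheeler transform of the text.

def bwt(text):
    text += "$"                                    # unique sentinel, sorts first
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(r[-1] for r in rotations)

def fm_count(text, pattern):
    L = bwt(text)
    # C[c]: number of characters in the text strictly smaller than c
    C = {c: sum(1 for x in L if x < c) for c in set(L)}
    occ = lambda c, i: L[:i].count(c)              # naive rank query
    lo, hi = 0, len(L)                             # current suffix interval
    for c in reversed(pattern):                    # backward search
        if c not in C:
            return 0
        lo = C[c] + occ(c, lo)
        hi = C[c] + occ(c, hi)
        if lo >= hi:
            return 0
    return hi - lo                                 # number of matches

genome = "GATTACAGATTACA"
print(fm_count(genome, "ATTA"))   # → 2
print(fm_count(genome, "CAG"))    # → 1
print(fm_count(genome, "GGG"))    # → 0
```

A real aligner stores only sampled rank tables instead of the full naive scans here, which is what keeps the human-genome index near 3.2 GB.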
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BPP2&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BPP2 uses a Bayesian modeling approach to generate the posterior probabilities of species assignments taking into account uncertainties due to unknown gene trees and the ancestral coalescent process. For tractability, it relies on a user-specified guide tree to avoid integrating over all possible species delimitations.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Additional information can be found at the download site here [http://abacus.gene.ucl.ac.uk/software.html]. More information about our installation can be found here [[BPP2]].&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BROWNIE&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
BROWNIE is a program for analyzing rates of continuous character evolution and looking for substantial rate differences in different parts of a tree using likelihood&lt;br /&gt;
ratio tests and Akaike Information Criterion (AIC) statistics. It now also implements many other methods for examining trait evolution and methods for doing species&lt;br /&gt;
delimitation. More information about our installation can be found here [[BROWNIE]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;CUFFLINKS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CUFFLINKS assembles transcripts, estimates their abundances, and tests for differential expression and regulation in&lt;br /&gt;
RNA-Seq samples.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It accepts aligned RNA-Seq reads and assembles the alignments into a parsimonious set of transcripts.&lt;br /&gt;
CUFFLINKS then estimates the relative abundances of these transcripts based on how many reads support each one, taking&lt;br /&gt;
into account biases in library preparation protocols.  CUFFLINKS is part of a sequence alignment and analysis tool chain developed&lt;br /&gt;
at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics&lt;br /&gt;
and Computational Biology.  The other tools in this collection -- BOWTIE, SAMTOOLS, and TOPHAT -- are also installed at&lt;br /&gt;
the CUNY HPC Center. Additional information can be found at the CUFFLINKS home page here [http://abacus.gene.ucl.ac.uk/software.html].&lt;br /&gt;
More information about our installation can be found here [[CUFFLINKS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GARLI&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GARLI is a program that performs phylogenetic inference using the maximum-likelihood criterion.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Several sequence types are supported, including nucleotide, amino acid and codon. Version 2.0 adds support for&lt;br /&gt;
partitioned models and morphology-like data types. It is usable on all operating systems, and is written and&lt;br /&gt;
maintained by Derrick Zwickl at the University of Texas at Austin.  Additional information can be found&lt;br /&gt;
on the GARLI Wiki here [https://www.nescent.org/wg_garli/Main_Page]. More information about our installation &lt;br /&gt;
can be found here [[GARLI]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MPFR&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The MPFR library is a C library for multiple-precision floating-point computations with correct rounding. MPFR has been continuously supported by &lt;br /&gt;
INRIA, and the current main authors come from the Caramel and AriC project-teams at Loria (Nancy, France) and LIP (Lyon, France), respectively; see &lt;br /&gt;
more on the credit page.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
MPFR is based on the GMP multiple-precision library. The main goal of MPFR is to provide a library for multiple-precision &lt;br /&gt;
floating-point computation which is both efficient and has well-defined semantics. It copies the good ideas from the ANSI/IEEE-754 standard for &lt;br /&gt;
double-precision floating-point arithmetic (53-bit significand). The library is installed on PENZIAS.&lt;br /&gt;
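MPFR itself is used from C; as a rough stdlib stand-in, Python's decimal module illustrates the same contract of a chosen working precision plus a specified rounding mode, with every operation correctly rounded at that precision:

```python
# Stdlib illustration of the correct-rounding contract MPFR provides:
# pick a precision and a rounding mode, and each operation is rounded
# exactly once at that precision.  This uses Python's decimal module,
# not MPFR.
from decimal import Decimal, getcontext, ROUND_HALF_EVEN

ctx = getcontext()
ctx.prec = 50                     # 50 significant digits
ctx.rounding = ROUND_HALF_EVEN    # IEEE-754 style round-to-nearest-even

third = Decimal(1) / Decimal(3)
print(third)   # → 0.33333333333333333333333333333333333333333333333333
```

MPFR makes the analogous guarantee for binary floating point at any chosen bit precision, which is what "correct rounding" means above.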
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MRBAYES&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MrBayes is a program for the Bayesian estimation of phylogeny.  Bayesian inference of&lt;br /&gt;
phylogeny is based upon a quantity called the posterior probability distribution of trees,&lt;br /&gt;
which is the probability of a tree conditioned on certain observations.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The conditioning is&lt;br /&gt;
accomplished using Bayes&#039;s theorem. The posterior probability distribution of trees is&lt;br /&gt;
impossible to calculate analytically; instead, MrBayes uses a simulation technique called&lt;br /&gt;
Markov chain Monte Carlo (or MCMC) to approximate the posterior probabilities of trees.&lt;br /&gt;
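The MCMC idea can be illustrated with a toy Metropolis sampler over a made-up three-topology "tree space"; this is illustrative Python with invented weights, not MrBayes code:

```python
# Toy Metropolis sampler over three hypothetical tree topologies with
# invented, unnormalized posterior weights.  MrBayes applies the same
# accept/reject rule to real trees and substitution-model parameters.
import random

weights = {"T1": 8.0, "T2": 1.0, "T3": 1.0}    # unnormalized posteriors

def metropolis(steps, seed=42):
    rng = random.Random(seed)
    trees = list(weights)
    state = "T2"
    counts = {t: 0 for t in trees}
    for _ in range(steps):
        proposal = rng.choice(trees)           # symmetric proposal
        # accept with probability min(1, posterior ratio)
        if rng.random() < weights[proposal] / weights[state]:
            state = proposal
        counts[state] += 1
    return counts

counts = metropolis(100_000)
# Visit frequencies approximate the normalized posterior (0.8, 0.1, 0.1).
print(round(counts["T1"] / 100_000, 1))   # → 0.8
```

Each tree is visited in proportion to its posterior probability, which is exactly the approximation property the paragraph above describes.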
More information about our installation can be found here [[MRBAYES]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;msABC&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
msABC is a program for simulating various neutral evolutionary demographic scenarios&lt;br /&gt;
based on the software ms (Hudson 2002). msABC extends ms, calculating a multitude of&lt;br /&gt;
summary statistics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Therefore, msABC is suitable for performing the sampling step of an&lt;br /&gt;
Approximate Bayesian Computation analysis (ABC), under various neutral demographic&lt;br /&gt;
models. The main advantages of msABC are (i) use of various prior distributions, such as&lt;br /&gt;
uniform, Gaussian, log-normal, and gamma, (ii) implementation of a multitude of summary statistics&lt;br /&gt;
for one or more populations, (iii) an efficient implementation, which allows the analysis of&lt;br /&gt;
hundreds of loci and chromosomes even on a single computer, and (iv) extended flexibility, such&lt;br /&gt;
as simulation of loci of variable size and simulation of missing data.&lt;br /&gt;
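The sampling step of an ABC analysis can be sketched in a few lines; in a real pipeline msABC would supply the coalescent simulation and summary statistics, whereas this toy uses a binomial model with invented numbers:

```python
# Rejection-ABC sketch: draw a parameter from its prior, simulate data,
# keep the draw if a summary statistic lands close to the observed one.
# Toy binomial model only -- msABC plays the simulation role with
# coalescent models in a real analysis.
import random

rng = random.Random(7)

def simulate(theta, n=100):
    # toy data model: n trials with success probability theta
    return sum(rng.random() < theta for _ in range(n))

observed = 70        # observed summary statistic: 70 successes out of 100
tolerance = 2        # accept draws whose statistic is within +/- 2

accepted = []
for _ in range(20_000):
    theta = rng.random()                     # uniform prior on [0, 1]
    if abs(simulate(theta) - observed) <= tolerance:
        accepted.append(theta)

# Accepted draws approximate the posterior; its mean should sit near 0.7.
posterior_mean = sum(accepted) / len(accepted)
print(0.65 < posterior_mean < 0.75)   # → True
```

The accepted parameter draws approximate the posterior without ever evaluating a likelihood, which is why fast simulators with rich summary statistics matter for ABC.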
More information about our installation can be found here [[msABC]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MSMS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MSMS is a tool to generate sequence samples under both neutral models and single locus selection models.&lt;br /&gt;
MSMS permits the full range of demographic models provided by its relative MS (Hudson, 2002).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
In particular, it allows for multiple demes with arbitrary migration patterns, population growth and decay in each deme, and&lt;br /&gt;
for population splits and mergers. Selection (including dominance) can depend on the deme and also change&lt;br /&gt;
with time.&lt;br /&gt;
More information about our installation can be found here [[MSMS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;POPABC&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PopABC is a computer package for estimating historical demographic parameters of closely related species/populations (e.g. population size, migration rate, mutation rate, recombination rate, splitting events) within an isolation-with-migration model.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The software performs coalescent simulation in the framework of approximate Bayesian computation (ABC; Beaumont et al., 2002). PopABC can also be used to perform Bayesian model choice to discriminate between different demographic scenarios. The program can be used either for research or for education and teaching purposes. Further details and a manual can be found at the POPABC website here [http://code.google.com/p/popabc].&lt;br /&gt;
More information about our installation can be found here [[POPABC]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;PHOENICS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PHOENICS is an integrated Computational Fluid Dynamics (CFD) package for the preparation, simulation, and visualization of&lt;br /&gt;
processes involving fluid flow, heat or mass transfer, chemical reaction, and/or combustion in engineering equipment, building&lt;br /&gt;
design, and the environment.  More detail is available at the CHAM website, here http://www.cham.co.uk. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Although we expect most users to pre- and post-process their jobs on office-local clients, the CUNY HPC Center has installed&lt;br /&gt;
the Unix version of the &#039;&#039;entire&#039;&#039; PHOENICS package on ANDY.   PHOENICS is installed in /share/apps/phoenics/default where all&lt;br /&gt;
the standard PHOENICS directories are located (d_allpro, d_earth, d_enviro, d_photo, d_priv1, d_satell, etc.).  Of particular interest&lt;br /&gt;
on ANDY is the MPI parallel version of the &#039;earth&#039; executable &#039;parexe&#039; which makes full use of the parallel processing power of the &lt;br /&gt;
ANDY cluster for larger individual jobs.  While the parallel scaling properties of PHOENICS jobs will vary depending on the job size,&lt;br /&gt;
processor type, and the cluster interconnect, larger workloads will generally scale and run efficiently on 8 to 32 processors,&lt;br /&gt;
while smaller problems will scale efficiently only up to about 4 processors.  More detail on parallel PHOENICS is available at&lt;br /&gt;
http://www.cham.co.uk/products/parallel.php.   Aside from the tightly coupled MPI parallelism of &#039;parexe&#039;, users can run multiple&lt;br /&gt;
instances of the non-parallel modules on ANDY (including the serial &#039;earexe&#039; module) when a parametric approach can be used&lt;br /&gt;
to solve their problems.&lt;br /&gt;
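An example SLURM batch script for an MPI-parallel run on ANDY is sketched below.  Note that the module name and the way &#039;parexe&#039; is invoked are assumptions; check the local installation under /share/apps/phoenics/default before use.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name=phoenics_par&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=8&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to the working directory in SLURM&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
# Load the PHOENICS environment (module name is an assumption)&lt;br /&gt;
module load phoenics&lt;br /&gt;
&lt;br /&gt;
# Run the MPI-parallel &#039;earth&#039; solver on 8 cores&lt;br /&gt;
mpirun -np 8 parexe&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;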
More information about our installation can be found here [[PHOENICS]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;PHRAP-PHRED&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PHRAP and PHRED are part of the DNA sequence analysis tool set that also includes the programs&lt;br /&gt;
CROSSMATCH and SWAT.  These tools are described in detail here [http://www.phrap.org/phredphrapconsed.html],&lt;br /&gt;
but a brief description of both, extracted from their manuals, follows.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
PHRED and PHRAP (along with CONSED) can be used for both small sequence assemblies and larger shotgun analyses. This makes the&lt;br /&gt;
tools a perhaps under-utilized set for smaller non-genomic groups.  Some variables may need to be adjusted,&lt;br /&gt;
particularly in CONSED, but researchers who have multiple sequences from a small locus can use the &lt;br /&gt;
suite, starting from their chromatogram files.  &lt;br /&gt;
More information about our installation can be found here [[PHRAP-PHRED]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;PyRAD&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Reduced-representation genomic sequence data (e.g., RADseq, GBS, ddRAD) are commonly used to study population-level research questions and consequently most software packages for assembling or analyzing such data are designed for sequences with little variation across samples.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Phylogenetic analyses typically include species with deeper divergence times (more variable loci across samples) and thus a different approach to clustering and identifying orthologs will perform better. pyRAD is intended for use with any type of restriction-site associated DNA. It currently supports RAD, ddRAD, PE-ddRAD, GBS, PE-GBS, EzRAD, PE-EzRAD, 2B-RAD, nextRAD, and can be extended to other types.&lt;br /&gt;
More information about our installation can be found here [[PyRAD]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;RAXML&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Randomized Axelerated Maximum Likelihood (RAxML) is a program for sequential and parallel&lt;br /&gt;
maximum likelihood based inference of large phylogenetic trees.  It is a descendant of fastDNAml,&lt;br /&gt;
which in turn was derived from Joe Felsenstein’s DNAml, part of the PHYLIP package.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
RAxML is installed at the CUNY HPC Center on ANDY.  Multiple versions are available, in both serial and MPI-parallel form.  The MPI-parallel version should be run on four or more cores and is also installed on PENZIAS. &lt;br /&gt;
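A minimal SLURM script for the MPI-parallel version might look like the following; the module name and the &#039;raxmlHPC-MPI&#039; executable name are assumptions, so check the locally installed versions first.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name=raxml_mpi&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=4&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to the working directory in SLURM&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
module load raxml&lt;br /&gt;
&lt;br /&gt;
# -s alignment, -m substitution model, -p random seed, -n run name&lt;br /&gt;
mpirun -np 4 raxmlHPC-MPI -s alignment.phy -m GTRGAMMA -p 12345 -n test_run&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;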
More information about our installation can be found here [[RAXML]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Structurama&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Structurama is a program for inferring population structure from genetic data. The program assumes that the sampled loci&lt;br /&gt;
are in linkage equilibrium and that the allele frequencies for each population are drawn from a Dirichlet probability distribution. Two different models for population structure are implemented.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
First, Structurama offers the method of Pritchard et al. (2000), in which the number of populations is considered fixed. The program also allows the number of populations to be a random variable following a Dirichlet process prior (Pella and Masuda, 2006; Huelsenbeck and Andolfatto, 2007).&lt;br /&gt;
More information about our installation can be found here [[STRUCTURAMA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Structure&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The program Structure is a free software package for using multi-locus genotype data to investigate&lt;br /&gt;
population structure.  Its uses include inferring the presence of distinct populations, assigning individuals&lt;br /&gt;
to populations, studying hybrid zones, identifying migrants and admixed individuals, and estimating&lt;br /&gt;
population allele frequencies in situations where many individuals are migrants or admixed.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;It can be applied to most of the commonly-used genetic markers, including SNPs, microsatellites, RFLPs, and AFLPs. More detailed information about Structure can be found at the web site here [http://pritch.bsd.uchicago.edu/structure.html]. Structure is installed on ANDY at the CUNY HPC Center.  Structure is a serial program. &lt;br /&gt;
More information about our installation can be found here [[STRUCTURE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;TOPHAT&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
TOPHAT is a fast splice junction mapper for RNA-Seq reads. It aligns RNA-Seq reads to mammalian-sized&lt;br /&gt;
genomes using the ultra high-throughput short read aligner Bowtie, and then analyzes the mapping results&lt;br /&gt;
to identify splice junctions between exons.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
TOPHAT is part of a sequence alignment and analysis tool chain developed at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics and Computational Biology.&lt;br /&gt;
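A typical paired-end run against a prebuilt Bowtie index can be sketched as follows (the index and read file names are placeholders):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Align paired-end RNA-Seq reads against a Bowtie index named &#039;genome&#039;&lt;br /&gt;
# -p sets the thread count, -o the output directory&lt;br /&gt;
tophat -p 8 -o tophat_out genome reads_1.fastq reads_2.fastq&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;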
More information about our installation can be found here [[TOPHAT]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Trinity&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Trinity, developed at the Broad Institute and the Hebrew University of Jerusalem, represents a novel method for the efficient and robust de novo reconstruction of transcriptomes from RNA-seq data.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Trinity combines three independent software modules: Inchworm, Chrysalis, and Butterfly, applied sequentially to process large volumes of RNA-seq reads. Trinity partitions the sequence data into many individual de Bruijn graphs, each representing the transcriptional complexity at a given gene or locus, and then processes each graph independently to extract full-length splicing isoforms and to tease apart transcripts derived from paralogous genes.&lt;br /&gt;
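As an illustration, a paired-end Trinity assembly is typically launched along the following lines (the option names follow recent Trinity releases and the file names are placeholders):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Assemble paired-end FASTQ reads; --CPU sets the thread count&lt;br /&gt;
Trinity --seqType fq --left reads_1.fq --right reads_2.fq \&lt;br /&gt;
        --max_memory 50G --CPU 8 --output trinity_out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;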
More information about our installation can be found here [[TRINITY]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;VELVET&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Velvet is a set of algorithms for &#039;&#039;de novo&#039;&#039; short read assembly using de Bruijn graphs. It was developed at the &lt;br /&gt;
European Bioinformatics Institute, Cambridge, UK.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
More information about our installation can be found here [[VELVET]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Computational Genomics, Proteomics, Microbiomics, Genetics ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;AUGUSTUS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
AUGUSTUS is a program that predicts genes in eukaryotic genomic sequences. It is gene-finding software based on Hidden Markov Models (HMMs), &lt;br /&gt;
described in papers by Stanke and Waack (2003) and Stanke et al. (2006, 2006b, 2008).  The local version of the program is installed on &lt;br /&gt;
Penzias. More information can be found here: [[AUGUSTUS]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;CONSED&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CONSED is a DNA sequence analysis finishing tool that provides sequence viewing, editing, alignment, and&lt;br /&gt;
assembly capabilities from a X Windows graphical user interface (GUI).  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It makes extensive use of other non-graphical&lt;br /&gt;
and underlying sequence analysis tools including PHRED, PHRAP, and CROSSMATCH that may also be used separately&lt;br /&gt;
and are described elsewhere in this document.  It also includes a viewer called BAMVIEW.  The CONSED tool chain is&lt;br /&gt;
developed and maintained at the University of Washington and is described&lt;br /&gt;
more completely here [http://bozeman.mbt.washington.edu/consed/consed.html]&lt;br /&gt;
CONSED is provided at the CUNY HPC Center under an academic license that allows use, but not the copying or&lt;br /&gt;
outbound transfer of any of the executables or files distributed under this academic license.  The license is not &lt;br /&gt;
transferable in any way and users wishing to run the application at their own site must acquire a license directly&lt;br /&gt;
from the authors.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center supports CONSED version 23.0 for interactive use on KARLE.  CONSED 23.0 and the tool&lt;br /&gt;
chain described above are also installed on ANDY to allow for the batch use of the underlying support tools mentioned above&lt;br /&gt;
and described in detail below.  In general, running GUI-based applications on ANDY&#039;s login node is discouraged.  There&lt;br /&gt;
should be little need to do this: KARLE sits on the periphery of the CUNY HPC network, making login there direct, and&lt;br /&gt;
it shares its HOME directory file system with ANDY, so files created on either system are immediately available on&lt;br /&gt;
the other.&lt;br /&gt;
&lt;br /&gt;
Rather than rewrite portions of the CONSED manual here, users are directed to the manual&#039;s &amp;quot;Quick Tour&amp;quot; section&lt;br /&gt;
here [http://bozeman.mbt.washington.edu/consed/distributions/README.23.0.txt] and asked to walk through some&lt;br /&gt;
of the exercises after logging into KARLE.  If problems or questions come up, please post them to &amp;quot;hpchelp@csi.cuny.edu&amp;quot;.&lt;br /&gt;
The CONSED 23.0 distribution is installed on KARLE in the following directory:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/share/apps/consed/default&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
All the files in the distribution can be found there.&lt;br /&gt;
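To pick up the CONSED executables in an interactive session on KARLE, the distribution directory can be added to the search path; the &#039;bin&#039; subdirectory is an assumption, so adjust to the actual layout of the distribution.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Add the CONSED 23.0 installation to the command search path&lt;br /&gt;
export PATH=/share/apps/consed/default/bin:$PATH&lt;br /&gt;
&lt;br /&gt;
# Launch the GUI (requires X forwarding, e.g. ssh -X)&lt;br /&gt;
consed&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;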
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ExaML&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ExaML (Exascale Maximum Likelihood) is a code for phylogenetic inference using MPI. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The code is installed only on Penzias and implements the popular RAxML search algorithm for maximum likelihood based inference of phylogenetic trees. &lt;br /&gt;
&lt;br /&gt;
It uses a radically new MPI parallelization approach that yields improved parallel efficiency, in particular on partitioned multi-gene or whole-genome datasets.&lt;br /&gt;
&lt;br /&gt;
When using ExaML please cite the following paper:&lt;br /&gt;
&lt;br /&gt;
Alexey M. Kozlov, Andre J. Aberer, Alexandros Stamatakis: &amp;quot;ExaML Version 3: A Tool for Phylogenomic Analyses on Supercomputers.&amp;quot; Bioinformatics (2015) 31 (15): 2577-2579.&lt;br /&gt;
&lt;br /&gt;
It is up to 4 times faster than RAxML-Light [1].&lt;br /&gt;
&lt;br /&gt;
As RAxML-Light, ExaML also implements checkpointing, SSE3, AVX vectorization and memory saving techniques.&lt;br /&gt;
&lt;br /&gt;
[1] A. Stamatakis, A.J. Aberer, C. Goll, S.A. Smith, S.A. Berger, F. Izquierdo-Carrasco: &amp;quot;RAxML-Light: A Tool for computing TeraByte Phylogenies&amp;quot;, Bioinformatics 2012; doi: 10.1093/bioinformatics/bts309.&lt;br /&gt;
&lt;br /&gt;
The run script for a parallel job is analogous to the one used for RAxML on PENZIAS and ANDY.&lt;br /&gt;
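A sketch of such a script is given below; the module and executable names are assumptions, and note that ExaML expects a binary alignment produced by its parser plus a starting tree.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name=examl_run&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=4&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to the working directory in SLURM&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
module load examl&lt;br /&gt;
&lt;br /&gt;
# alignment.binary comes from the parse-examl pre-processing step&lt;br /&gt;
mpirun -np 4 examl -s alignment.binary -t starting.tree -m GAMMA -n run1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;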
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ExaBayes&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ExaBayes is a software package for Bayesian tree inference. It is particularly suitable for large-scale analyses on computer clusters and is installed on the PENZIAS server at the CUNY HPC Center. &lt;br /&gt;
The installed package is the MPI-parallel version. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Availability:&#039;&#039;&#039; PENZIAS&lt;br /&gt;
&#039;&#039;&#039;Module file:&#039;&#039;&#039; exabayes&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Citation&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
Fredrik Ronquist, Maxim Teslenko, Paul van der Mark, Daniel L Ayres, Aaron Darling, Sebastian Höhna, Bret Larget, Liang Liu, Marc a Suchard, and John P Huelsenbeck. MrBayes 3.2: efficient Bayesian phylogenetic inference and model choice across a large model space. Systematic biology, 61(3):539--42, May 2012.&lt;br /&gt;
&lt;br /&gt;
Alexei J Drummond, Marc a Suchard, Dong Xie, and Andrew Rambaut. Bayesian phylogenetics with BEAUti and the BEAST 1.7. Molecular biology and evolution, 29(8):1969--73, August 2012. &lt;br /&gt;
&lt;br /&gt;
Clemens Lakner, Paul van der Mark, John P Huelsenbeck, Bret Larget, and Fredrik Ronquist. Efficiency of Markov chain Monte Carlo tree proposals in Bayesian phylogenetics. Systematic biology, 57(1):86--103, February 2008. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Use:&#039;&#039;&#039; An example SLURM script to run ExaBayes on PENZIAS is given below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name=&amp;lt;name_of_job&amp;gt;&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=2&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to the working directory in SLURM&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
mpirun -np 2 exabayes &amp;lt;input_file&amp;gt; &amp;gt; output_file&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
More information about the application, along with sample workflows, is available on the ExaBayes web site:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://sco.h-its.org/exelixis/web/software/exabayes/manual/index.html#sec-11&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GENOMEPOP2&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GenomePop2 is a newer and specialized version of the older program GenomePop. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GenomePop2 is designed to manage SNPs under more flexible and useful settings that are controlled by the user.  &lt;br /&gt;
If you need models with more than 2 alleles you should use the older GenomePop version of the program.  &lt;br /&gt;
&lt;br /&gt;
GenomePop2 allows the forward simulation of sequences of biallelic positions. As in the previous version, a number of evolutionary&lt;br /&gt;
and demographic settings are allowed. Several populations under any migration model can be implemented. Each population consists&lt;br /&gt;
of a number N of individuals.  Each individual is represented by one (haploids) or two (diploids) chromosomes with constant or variable&lt;br /&gt;
(hotspots) recombination between binary sites. The fitness model is multiplicative, with each derived allele having a multiplicative effect&lt;br /&gt;
of (1 - s*h - E) on the global fitness value. By default E=0 and h=0.5 in diploids, but 1 in homozygotes or in haploids. Selective nucleotide&lt;br /&gt;
sites undergoing directional selection (positive or negative) in different populations can be defined. In addition, bottlenecks and/or&lt;br /&gt;
population expansion scenarios can be set by the user for a desired number of generations. Several runs can be executed and&lt;br /&gt;
a sample of user-defined size is obtained for each run and population.  For more detail on how to use GenomePop2, please visit the&lt;br /&gt;
web site here [http://webs.uvigo.es/acraaj/GenomePop2.htm]. More information about our installation can be found here [[GENOMEPOP2]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;HUMAnN2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
HUMAnN is a pipeline for efficiently and accurately profiling the presence/absence and abundance of microbial pathways in a community from metagenomic or metatranscriptomic sequencing data (typically millions of short DNA/RNA reads). HUMAnN2 is the next generation of HUMAnN (HMP Unified Metabolic Analysis Network). Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/humann2&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;IMa2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The IMa2 application performs basic ‘Isolation with Migration’ calculations using Bayesian inference and Markov &lt;br /&gt;
chain Monte Carlo methods. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The only major conceptual addition to IMa2 that makes it different from the&lt;br /&gt;
original IMa  program is that it can handle data from multiple populations. This requires that the user &lt;br /&gt;
specify a phylogenetic tree. Importantly, the tree must be rooted, and the sequence in time of internal&lt;br /&gt;
nodes must be known and specified. More information on the IMa2 and IMa can be found in the user&lt;br /&gt;
manual here [http://lifesci.rutgers.edu/%7Eheylab/ProgramsandData/Programs/IMa2/Using_IMa2_8_24_2011.pdf].&lt;br /&gt;
Information about our installation can be found here [[IMA2]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;I-TASSER&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
I-TASSER is a platform for protein structure and function predictions. 3D models are built based on multiple-threading alignments by LOMETS and iterative template fragment assembly simulations; function insights are derived by matching the 3D models with the BioLiP protein function database. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/itasser&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;LAMARC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
LAMARC is a program which estimates population-genetic parameters such as population size, population growth rate,&lt;br /&gt;
recombination rate, and migration rates.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It approximates a summation over all possible genealogies that could explain&lt;br /&gt;
the observed sample, which may be sequence, SNP, microsatellite, or electrophoretic data.  LAMARC and its sister program&lt;br /&gt;
MIGRATE are successor programs to the older programs Coalesce, Fluctuate, and Recombine, which are no longer being&lt;br /&gt;
supported.  These programs are memory-intensive, but can run effectively on workstations. They are supported on a variety&lt;br /&gt;
of operating systems.  For more detail on LAMARC please visit the website here [http://evolution.genetics.washington.edu/lamarc/index.html],&lt;br /&gt;
read this paper [http://evolution.genetics.washington.edu/lamarc/download/bioinformatics2006-lamarc2.0.pdf], and look&lt;br /&gt;
at the documentation here [http://evolution.genetics.washington.edu/lamarc/documentation/index.html]. More information&lt;br /&gt;
about our installation can be found here [[LAMARC]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;QIIME&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
QIIME (pronounced &amp;quot;chime&amp;quot;) stands for Quantitative Insights Into Microbial Ecology. QIIME is a pipeline application that uses numerous third-party applications.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
QIIME takes users from their raw sequencing output through initial analyses such as OTU picking, taxonomic assignment, and construction of phylogenetic trees from representative sequences of OTUs, and through downstream statistical analysis, visualization, and production of publication-quality graphics.&lt;br /&gt;
More information about our installation can be found here [[QIIME]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;USEARCH&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
USEARCH is a unique sequence analysis tool with thousands of users world-wide.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
USEARCH offers search and clustering algorithms that are often orders of magnitude faster than BLAST. &lt;br /&gt;
More information about our installation can be found here [[USEARCH]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;VELVET&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
Velvet is a set of algorithms for &#039;&#039;de novo&#039;&#039; short read assembly using de Bruijn graphs. It was developed at the European Bioinformatics Institute, Cambridge, UK. More information about our installation can be found here [[VELVET]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;VSEARCH&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
VSEARCH is an open-source alternative to USEARCH.  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
VSEARCH stands for vectorized search, as the tool takes advantage of parallelism in the form of SIMD vectorization as well as multiple threads to perform accurate alignments at high speed. VSEARCH uses an optimal global aligner (full dynamic programming Needleman-Wunsch), in contrast to USEARCH which by default uses a heuristic seed and extend aligner. This usually results in more accurate alignments and overall improved sensitivity (recall) with VSEARCH, especially for alignments with gaps. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Additional details on VSEARCH can be found at: [https://github.com/torognes/vsearch this link]&lt;br /&gt;
&lt;br /&gt;
VSEARCH is installed on the PENZIAS HPC cluster. To start using VSEARCH, load the corresponding module first:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load vsearch  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
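A typical global-alignment search against a reference database then looks like the following (the file names are placeholders):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Search queries against a reference at 97% identity using 4 threads&lt;br /&gt;
vsearch --usearch_global queries.fasta --db reference.fasta \&lt;br /&gt;
        --id 0.97 --alnout results.aln --threads 4&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;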
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Math, Engineering, Computer Science == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ADCIRC&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ADCIRC is a system of programs for solving time-dependent, free-surface, circulation and transport problems in two and three dimensions.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;  These programs utilize the finite element method in space allowing the use of highly flexible, unstructured grids. The ADCIRC distribution includes and integrates the METIS tool for unstructured grid generation. In addition, ADCIRC includes a distribution of SWAN to which it can be coupled to add a shore wave simulation model. Typical ADCIRC applications have included: (i) modeling tides and wind driven circulation, (ii) analysis of hurricane storm surge and flooding, (iii) dredging feasibility and material disposal studies, (iv) larval transport studies, (v) near shore marine operations. For more detail on using ADCIRC, please visit the ADCIRC website here [http://adcirc.org/index.html] and read the ADCIRC manual [http://adcirc.org/documentv49/ADCIRC_title_page.html]. Details on using SWAN with ADCIRC can be found here [http://www.caseydietrich.com/swanadcirc] and at the SWAN web site [http://swanmodel.sourceforge.net]. More information about use and set-up can be found here [[ADCIRC]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;FDPPDIV&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
FDPPDiv is a program for estimating divergence times on a fixed, rooted tree topology. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
FDPPDiv offers two alternative approaches to divergence time estimation. &lt;br /&gt;
The DPPDiv part refers to the Dirichlet Process Prior (DPP) model for divergence &lt;br /&gt;
time estimation, and the F prefix (for Fossil) refers to the new Fossil Birth-Death approach. &lt;br /&gt;
More information about our installation can be found here [[FDPPDIV]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GAUSS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GAUSS is an easy-to-use data analysis, mathematical, and statistical environment based on the powerful, fast, and efficient GAUSS Matrix Programming Language.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GAUSS is used to solve real world problems and data analysis problems of exceptionally large &lt;br /&gt;
scale. GAUSS is currently available on ANDY. At the CUNY HPC Center&lt;br /&gt;
GAUSS is typically run in serial mode. (Note:  GAUSS should not be confused with the&lt;br /&gt;
computational chemistry application Gaussian.) More information about our installation can &lt;br /&gt;
be found here [[GAUSS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Hapsembler&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Hapsembler is a haplotype-specific genome assembly toolkit that is designed for genomes that are rich in SNPs and other types of polymorphism. Hapsembler can be used to assemble reads from a variety of platforms including Illumina and Roche/454.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;  Hapsembler is currently installed on the Appel system. To access the Hapsembler binaries, load the hapsembler module with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load hapsembler&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;HOPSPACK&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
HOPSPACK stands for Hybrid Optimization Parallel Search Package, a package designed to help users solve a wide range of derivative-free optimization problems.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;The latter can be noisy, non-convex, or non-smooth.  The basic optimization problem addressed is to minimize an objective function f(x) in n unknowns subject to the constraints: AI x ≥ bI, AE x = bE, cI(x) ≥ 0, cE(x) = 0, and l ≤ x ≤ u.&lt;br /&gt;
The first two constraints specify linear inequalities and equalities with coefficient matrices AI and AE. The next two describe nonlinear inequalities and equalities captured in the functions cI(x) and cE(x). The final constraints denote lower and upper bounds on the variables. HOPSPACK allows variables to be continuous or integer-valued and has provisions for multi-objective optimization problems. In general, the functions f(x), cI(x), and cE(x) can be noisy and nonsmooth, although most algorithms perform best on deterministic functions with continuous derivatives.&lt;br /&gt;
&lt;br /&gt;
Users can design and implement their own solver, either by writing their own code or by building on the solvers already in the framework. Because all solvers (called citizens) are members of the same global class, they can share assigned resources.   &lt;br /&gt;
The main features of the package are:&lt;br /&gt;
&lt;br /&gt;
*Only function values are required for the optimization.&lt;br /&gt;
*The user must provide a separate program that can evaluate the objective and nonlinear constraint functions at a given point. &lt;br /&gt;
*A robust implementation of the Generating Set Search (GSS) solver is supplied, including the capability to handle linear constraints. &lt;br /&gt;
*Multiple solvers can run simultaneously and are easily configured to share information.&lt;br /&gt;
*Solvers may share a cache of computed function and constraint evaluations to eliminate duplicate work.&lt;br /&gt;
*Solvers can initiate and control sub-problems.&lt;br /&gt;
Continuation -&amp;gt; [[HOPSPACK]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;LS-DYNA&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From its early development in the 1970s, LS-DYNA has evolved into a general purpose material&lt;br /&gt;
stress, collision, and crash analysis program with many built-in material and structural element&lt;br /&gt;
models. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In recent years, the code has also been adapted for both OpenMP and MPI parallel execution&lt;br /&gt;
on a variety of platforms.  The most recent version, LS-DYNA 7.1.2, is installed on &lt;br /&gt;
ANDY at the CUNY HPC Center under an academic license held by the City College of New York.&lt;br /&gt;
Use of this license for work that is commercial in any way is prohibited.&lt;br /&gt;
&lt;br /&gt;
Details on LS-DYNA&#039;s use, input deck construction, and execution options can be found in the LS-DYNA&lt;br /&gt;
manual here [http://ftp.lstc.com/user/manuals/ls-dyna_971_manual_k_rev1.pdf]. All files related&lt;br /&gt;
to the HPC Center installation of version 971 (executables and example inputs) are located in:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
/share/apps/lsdyna/default/[bin,examples]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[LSDYNA]].&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Network Simulator-2 (NS2)&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NS2 is a discrete event simulator targeted at networking research. NS2 provides&lt;br /&gt;
substantial support for simulation of TCP, routing, and multicast protocols over&lt;br /&gt;
wired and wireless (local and satellite) networks.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It is installed on BOB at the CUNY HPC Center. More detailed information is available at the [http://www.isi.edu/nsnam/ns/ NS2 homepage].&lt;br /&gt;
More information about our installation can be found here [[NS2]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;OpenFOAM&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenFOAM is, first and foremost, a library that users may incorporate into their own code. OpenFOAM is installed on PENZIAS.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
More information about our installation can be found here [[OpenFOAM]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;OpenSEES&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenSEES, the Open System for Earthquake Engineering Simulation, is an object-oriented, open source software framework.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It allows users to create both serial and parallel finite element computer applications for simulating the response of structural and geotechnical systems subjected to earthquakes and other hazards. OpenSEES is primarily written in C++ and uses several Fortran and C numerical libraries for linear equation solving, and material and element routines. The software is installed on PENZIAS.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ParGAP&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ParGAP is built on top of the GAP system. The latter is a system for computational discrete algebra, with particular emphasis on Computational Group Theory. GAP provides a programming language, a library of thousands of functions implementing algebraic algorithms written in the GAP language, as well as large data libraries of algebraic objects.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The ParGAP (Parallel GAP) package itself provides a way of writing parallel programs using the GAP language. Former names of the package were ParGAP/MPI and GAP/MPI; the word MPI refers to Message Passing Interface, a well-known standard for parallelism. ParGAP is based on the MPI standard, and this distribution includes a subset implementation of MPI, to provide a portable layer with a high level interface to BSD sockets.&lt;br /&gt;
More information about our installation can be found here [[ParGAP]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;SAGE&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Sage can be used to study elementary and advanced, pure and applied mathematics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
This covers a huge range of mathematics, including basic algebra, calculus, elementary to very&lt;br /&gt;
advanced number theory, cryptography, numerical computation, commutative algebra, group&lt;br /&gt;
theory, combinatorics, graph theory, exact linear algebra and much more.&lt;br /&gt;
More information about our installation can be found here [[SAGE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;WRF&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The Weather Research and Forecasting (WRF) model is a specific computer program with dual use for both weather&lt;br /&gt;
forecasting and weather research.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It was created through a partnership that includes the National Oceanic and Atmospheric&lt;br /&gt;
Administration (NOAA), the National Center for Atmospheric Research (NCAR), and more than 150 other organizations&lt;br /&gt;
and universities in the United States and abroad. WRF is the latest numerical model and application to be adopted by NOAA&#039;s&lt;br /&gt;
National Weather Service as well as the U.S. military and private meteorological services. It is also being adopted by&lt;br /&gt;
government and private meteorological services worldwide. More information about our installation can be found here [[WRF]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Economics, Business, Statistics, Analytics ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;R&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
R is a free software environment for statistical computing and graphics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:15px;&amp;quot; &amp;gt;General Notes&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The R language has become a de facto standard among statisticians for the development of statistical software and is widely used for statistical software development and data analysis. R is available on the following HPCC servers: Karle, Penzias, Appel, and Andy. Karle is the only machine where R can be used without submitting jobs to the SLURM manager; on all other systems, users must submit their R jobs via the SLURM batch scheduler.&lt;br /&gt;
More information about our installation can be found here [[R]]&lt;br /&gt;
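&lt;br /&gt;
As a minimal sketch (the module name, resource requests, and script name below are assumptions; check the system-specific documentation), a serial R job can be submitted with a SLURM batch script such as:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=r_test&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --time=01:00:00&lt;br /&gt;
&lt;br /&gt;
module load r&lt;br /&gt;
Rscript my_analysis.R&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The script is submitted with &amp;quot;sbatch&amp;quot; and monitored with &amp;quot;squeue&amp;quot;.&lt;br /&gt;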
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;R-devel&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
R is a language and environment for statistical computing and graphics. R-devel provides both core R userspace and all R development components.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Stata/MP&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Stata is a complete, integrated statistical package that provides tools for data analysis, data management, and graphics. Stata/MP takes advantage of multiprocessor computers. The CUNY HPC Center is licensed to use Stata on up to 8 cores. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Currently Stata/MP is available for users on Karle (karle.csi.cuny.edu). &lt;br /&gt;
More information about our installation can be found here [[STATA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;SAS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
SAS (pronounced &amp;quot;sass&amp;quot;, originally Statistical Analysis System) is an integrated system of software products provided by SAS Institute Inc.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It enables the programmer to perform:&lt;br /&gt;
:* data entry, retrieval, management, and mining&lt;br /&gt;
:* report writing and graphics&lt;br /&gt;
:* statistical analysis&lt;br /&gt;
:* business planning, forecasting, and decision support&lt;br /&gt;
:* operations research and project management&lt;br /&gt;
:* quality improvement&lt;br /&gt;
:* applications development&lt;br /&gt;
:* data warehousing (extract, transform, load)&lt;br /&gt;
:* platform independent and remote computing&lt;br /&gt;
More information about our installation can be found here [[SAS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== General Development Systems ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Coming soon.&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== Tools, Libraries, Compilers ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;CGAL&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The Computational Geometry Algorithms Library (CGAL) offers geometric data structures and algorithms.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt; &lt;br /&gt;
Examples of these are triangulations (2D constrained triangulations, and Delaunay triangulations and periodic triangulations in &lt;br /&gt;
2D and 3D), Voronoi diagrams (for 2D and 3D points, 2D additively weighted Voronoi diagrams, and segment Voronoi diagrams), polygons &lt;br /&gt;
(Boolean operations, offsets, straight skeleton), polyhedra (Boolean operations), arrangements of curves and their applications &lt;br /&gt;
(2D and 3D envelopes, Minkowski sums), mesh generation (2D Delaunay mesh generation and 3D surface and volume mesh &lt;br /&gt;
generation, skin surfaces), geometry processing (surface mesh simplification, subdivision and parameterization, as well as &lt;br /&gt;
estimation of local differential properties, and approximation of ridges and umbilics), alpha shapes, convex hull &lt;br /&gt;
algorithms (in 2D, 3D and dD), search structures (kd trees for nearest neighbor search, and range and segment trees), &lt;br /&gt;
interpolation (natural neighbor interpolation and placement of streamlines), shape analysis, fitting, and distances &lt;br /&gt;
(smallest enclosing sphere of points or spheres, smallest enclosing ellipsoid of points, principal component analysis), and &lt;br /&gt;
kinetic data structures.&lt;br /&gt;
&lt;br /&gt;
The library is installed on PENZIAS.&lt;br /&gt;
&lt;br /&gt;
More information can be found here http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/CGAL. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GMP&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
GMP is a library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and &lt;br /&gt;
floating-point numbers. There is no practical limit to the precision except the ones implied by the &lt;br /&gt;
available memory in the machine GMP runs on. GMP has a rich set of functions, and the functions have a &lt;br /&gt;
regular interface. The library is installed on PENZIAS.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Gnuplot&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Gnuplot is a portable command-line driven graphing utility. It is installed on the following systems:&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
:* Karle under /usr/bin/gnuplot&lt;br /&gt;
:* Andy under /share/apps/gnuplot/default/bin/gnuplot&lt;br /&gt;
&lt;br /&gt;
Extensive documentation of gnuplot is available at the [http://www.gnuplot.info/  gnuplot homepage].&lt;br /&gt;
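&lt;br /&gt;
As a quick non-interactive test (the output file name is an arbitrary example), a plot can be produced directly from the command line:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnuplot -e &amp;quot;set terminal png; set output &#039;sine.png&#039;; plot sin(x)&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;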
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;JULIA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
Julia is a high-level, high-performance dynamic programming language for technical computing, with syntax that is familiar to users of other technical computing environments. Julia is installed on Penzias.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MAGMA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
MAGMA is a library similar to LAPACK but for hybrid architectures. MAGMA provides implementations for CUDA, Intel Xeon Phi, and OpenCL. &lt;br /&gt;
On CUNY HPCC systems, MAGMA is installed in its CUDA variant only on Penzias.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MATHEMATICA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
“Mathematica” is a fully integrated technical computing system that combines fast, high-precision numerical and symbolic computation with data visualization and programming capabilities. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Mathematica is currently installed on the CUNY HPC Center&#039;s ANDY cluster (andy.csi.cuny.edu) and the KARLE standalone server (karle.csi.cuny.edu). The basics of running Mathematica on CUNY HPC systems are presented here.  Additional information on how to use Mathematica can be found at http://www.wolfram.com/learningcenter/&lt;br /&gt;
&lt;br /&gt;
More information is available in this wiki, find it here [[MATHEMATICA]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MATLAB&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The MATLAB high-performance language for technical computing&lt;br /&gt;
integrates computation, visualization, and programming in an&lt;br /&gt;
easy-to-use environment where problems and solutions are expressed in&lt;br /&gt;
familiar mathematical notation.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Typical uses include:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Math and computation&lt;br /&gt;
&lt;br /&gt;
Algorithm development&lt;br /&gt;
&lt;br /&gt;
Data acquisition&lt;br /&gt;
&lt;br /&gt;
Modeling, simulation, and prototyping&lt;br /&gt;
&lt;br /&gt;
Data analysis, exploration, and visualization&lt;br /&gt;
&lt;br /&gt;
Scientific and engineering graphics&lt;br /&gt;
&lt;br /&gt;
Application development, including graphical user interface building&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[MATLAB]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MET (Model Evaluation Tools)&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MET was developed by the National Center for Atmospheric Research (NCAR) Developmental Testbed Center (DTC) through the generous support of the U.S. Air Force Weather Agency (AFWA) and the National Oceanic and Atmospheric Administration (NOAA).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;MET is designed to be a highly configurable, state-of-the-art suite of verification tools. It was developed using output from the Weather Research and Forecasting (WRF) modeling system but may be applied to the output of other modeling systems as well.&lt;br /&gt;
&lt;br /&gt;
MET provides a variety of verification techniques, including:&lt;br /&gt;
&lt;br /&gt;
*Standard verification scores comparing gridded model data to point-based observations&lt;br /&gt;
*Standard verification scores comparing gridded model data to gridded observations&lt;br /&gt;
*Spatial verification methods comparing gridded model data to gridded observations using neighborhood, object-based, and intensity-scale decomposition approaches&lt;br /&gt;
*Probabilistic verification methods comparing gridded model data to point-based or gridded observations&lt;br /&gt;
&lt;br /&gt;
More information about use and set-up can be found here [[MET]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Migrate&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Migrate estimates population parameters, effective population sizes and migration rates&lt;br /&gt;
of n populations, using genetic data.  It uses a coalescent theory approach taking into&lt;br /&gt;
account the history of mutations and the uncertainty of the genealogy.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;The estimates of the parameter values are obtained by either a maximum likelihood (ML) approach or Bayesian&lt;br /&gt;
inference (BI).  Migrate&#039;s output is presented in a text file and in a PDF file. The PDF file&lt;br /&gt;
will eventually contain all possible analyses, including histograms of the posterior distributions.&lt;br /&gt;
More information about our installation can be found here [[MIGRATE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Python&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Python is a programming language that lets you work more quickly and integrate your systems more effectively. You can learn to use Python and see almost immediate gains in productivity and lower maintenance costs. [http://www.python.org/]&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
There are two supported versions installed on the Andy system: &lt;br /&gt;
&lt;br /&gt;
* Python 3.1.3 located under /share/apps/python/3.1.3/bin&lt;br /&gt;
* Python 2.7.3 located under /share/apps/epd/7.3-2/bin&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[PYTHON]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;SAMTOOLS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
SAMTOOLS provide various utilities for manipulating alignments in the SAM format, including sorting,&lt;br /&gt;
merging, indexing and generating alignments in a per-position format.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
SAM (Sequence Alignment/Map) is a generic format for storing large nucleotide sequence alignments.  It is a compact format&lt;br /&gt;
that aims to be a format that:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Is flexible enough to store all the alignment information generated by various alignment programs;&lt;br /&gt;
&lt;br /&gt;
Is simple enough to be easily generated by alignment programs or converted from existing formats;&lt;br /&gt;
&lt;br /&gt;
Allows most operations on the alignment to work without loading the whole alignment into memory;&lt;br /&gt;
&lt;br /&gt;
Allows the file to be indexed by genomic position to efficiently retrieve all reads aligning to a locus.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[SAMTOOLS]]&lt;br /&gt;
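&lt;br /&gt;
As an illustrative sketch (file names are arbitrary examples; exact flags may vary with the installed SAMtools version), a typical convert-sort-index workflow is:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# convert SAM to BAM, sort by coordinate, then index&lt;br /&gt;
samtools view -b aln.sam &amp;gt; aln.bam&lt;br /&gt;
samtools sort -o aln.sorted.bam aln.bam&lt;br /&gt;
samtools index aln.sorted.bam&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;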
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Thrust Library (CUDA)&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Thrust is a C++ template library for CUDA based on the Standard Template Library (STL). Thrust allows you&lt;br /&gt;
to implement high performance parallel applications with minimal programming effort through a high-level&lt;br /&gt;
interface that is fully interoperable with CUDA C.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Thrust is now integrated into the default&lt;br /&gt;
CUDA distribution. The HPC Center currently runs CUDA as the default on PENZIAS, which includes the &lt;br /&gt;
Thrust library. More information about our installation can be found here [[THRUST]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Xmgrace&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Grace is a WYSIWYG 2D plotting tool for the X Window System and M*tif. Xmgrace is developed at the Plasma Laboratory, Weizmann Institute of Science. More information about its capabilities can be found at the web page http://plasma-gate.weizmann.ac.il/Grace/&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Grace is installed on Karle. To use it from the command line, log in to Karle as usual and start Grace by typing &amp;quot;xmgrace&amp;quot; followed by return, or use the full path: &amp;quot;/share/apps/xmgrace/default/grace/bin/xmgrace&amp;quot;&lt;br /&gt;
To use Grace in GUI mode, log in to Karle with the -X option (see [http://wiki.csi.cuny.edu/cunyhpc/index.php/Main_Page#X11_Forwarding_or_Tunneling this article] &lt;br /&gt;
for details) and start Xmgrace as described above.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Alphabetical List ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== A == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ADCIRC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ADCIRC is a system of programs for solving time-dependent, free-surface, circulation and transport problems in two and three dimensions.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;  These programs utilize the finite element method in space allowing the use of highly flexible, unstructured grids. The ADCIRC distribution includes and integrates the METIS tool for unstructured grid generation. In addition, ADCIRC includes a distribution of SWAN to which it can be coupled to add a shore wave simulation model. Typical ADCIRC applications have included: (i) modeling tides and wind driven circulation, (ii) analysis of hurricane storm surge and flooding, (iii) dredging feasibility and material disposal studies, (iv) larval transport studies, (v) near shore marine operations. For more detail on using ADCIRC, please visit the ADCIRC website here [http://adcirc.org/index.html] and read the ADCIRC manual [http://adcirc.org/documentv49/ADCIRC_title_page.html]. Details on using SWAN with ADCIRC can be found here [http://www.caseydietrich.com/swanadcirc] and at the SWAN web site [http://swanmodel.sourceforge.net]. More information about use and set-up can be found here [[ADCIRC]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;AMBER (Assisted Model Building with Energy Refinement)&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
Amber is the collective name for a suite of programs for classical bio-molecular simulations. &lt;br /&gt;
The name &amp;quot;Amber&amp;quot; also denotes the family of potentials (force fields) used with Amber &lt;br /&gt;
software. Here we discuss only simulation packages, but not the force fields or free tools&lt;br /&gt;
available via AmberTools package. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/amber&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ANVIO&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
Anvio is an analysis and visualization platform for genomics data. It allows various types of workflows to be &lt;br /&gt;
established. [[ANVIO]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;AUGUSTUS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
AUGUSTUS is a program that predicts genes in eukaryotic genomic sequences. It is gene-finding software based on Hidden Markov Models (HMMs), &lt;br /&gt;
described in papers by Stanke and Waack (2003), Stanke et al. (2006), Stanke et al. (2006b), and Stanke et al. (2008). The local version of the program is installed on &lt;br /&gt;
Penzias. More information can be found here: [[AUGUSTUS]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;AUTODOCK&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
AutoDock is a suite of automated docking tools.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt; It is designed to predict how small molecules, such as substrates or drug candidates, bind to a receptor of known 3D structure.  AutoDock actually consists of two main programs: &#039;&#039;autodock&#039;&#039; itself performs the docking of the ligand to a set of grids describing the target protein; &#039;&#039;autogrid&#039;&#039; pre-calculates these grids. More information about the software may be found at the autodock web-page [http://autodock.scripps.edu/]. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/autodock&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== B == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BAMOVA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
Bamova is a package used to perform genetic analysis of a wide range of organisms on the basis of &lt;br /&gt;
next-generation sequence data. The software implements Bayesian Analysis of Molecular Variance and &lt;br /&gt;
different likelihood models for three different types of molecular data &lt;br /&gt;
(including two models for high throughput sequence data). For more detail on BAMOVA please visit the BAMOVA web site [http://www.uwyo.edu/buerkle/software/bamova] and manual &lt;br /&gt;
here [http://www.uwyo.edu/buerkle/software/bamova/bamova_manual_1.0.pdf]. Further information can also be found here [[BAMOVA]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BAYESCAN&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BAYESCAN is a population genomics software package.  It identifies outlier loci and is applicable &lt;br /&gt;
to both dominant and codominant data. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;BayeScan aims to identify candidate loci under natural selection from genetic data, using differences in allele frequencies between populations.  BayeScan is based on the multinomial-Dirichlet model.  One of the scenarios covered consists of an island model in which subpopulation allele frequencies are correlated through a common migrant gene pool from which they differ in varying degrees.  The difference in allele frequency between this common gene pool and each subpopulation is measured by a&lt;br /&gt;
subpopulation-specific FST coefficient.  Therefore, this formulation can accommodate realistic ecological scenarios in which the effective size and the immigration rate may differ among subpopulations.&lt;br /&gt;
More detailed information on Bayescan can be found at the web site here [http://cmpg.unibe.ch/software/bayescan/index.html] and in the manual here [http://cmpg.unibe.ch/software/bayescan/files/BayeScan2.1_manual.pdf]. More information about our installation can be found here [[BAYESCAN]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BEAST&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BEAST is a powerful and flexible evolutionary analysis package for molecular sequence variation. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The package implements a family of Markov chain Monte Carlo (MCMC) algorithms for Bayesian phylogenetic inference, divergence time dating, coalescent analysis, phylogeography and related molecular evolutionary analyses. It is a cross-platform Java program for Bayesian MCMC analysis of molecular sequences. It is oriented entirely toward rooted, time-measured phylogenies inferred using strict or relaxed molecular clock models. It can be used as a method of reconstructing phylogenies, but is also a framework for testing evolutionary hypotheses without conditioning on a single tree topology.  BEAST uses MCMC to average over tree space, so that each tree is weighted in proportion to its posterior probability. The distribution includes a simple-to-use user-interface program called &#039;BEAUti&#039; for setting up standard analyses and a suite of programs for analysing the results. For more detail on BEAST (and BEAUti) please visit the BEAST web site [http://beast.bio.ed.ac.uk/Main_Page]. More information about our installation can be found here [http://wiki.csi.cuny.edu/cunyhpc/index.php/Template:BEAST BEAST].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BEST&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BEST is an application designed to estimate gene trees and the species tree from multilocus sequences.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The program uses information from multiple gene trees and performs a Bayesian analysis to estimate the &lt;br /&gt;
topology of the species tree, divergence times and population sizes.  &lt;br /&gt;
&lt;br /&gt;
It provides a new approach for estimating the mutation-rate-&lt;br /&gt;
based, phylogenetic relationships among species.  Its method accounts for deep coalescence,&lt;br /&gt;
but not for other complicating issues such as horizontal transfer or gene duplication. The&lt;br /&gt;
program works in conjunction with the popular Bayesian phylogenetics package MrBayes&lt;br /&gt;
(Ronquist and Huelsenbeck, Bioinformatics, 2003).  BEST&#039;s parameters are defined using&lt;br /&gt;
the &#039;prset&#039; command from MrBayes.  Details on BEST&#039;s capabilities and options are available&lt;br /&gt;
at the BEST web site here [http://www.stat.osu.edu/~dkp/BEST/introduction]. More information&lt;br /&gt;
about our installation is available here [[BEST]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BOWTIE2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BOWTIE2 is an ultrafast and memory-efficient tool for aligning sequencing reads to long reference sequences. It is particularly good at aligning reads of about 50 up to 100s or 1,000s of characters, and particularly good at aligning to relatively long (e.g. mammalian) genomes.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
BOWTIE2 indexes the genome with an FM Index to keep its memory&lt;br /&gt;
footprint small: for the human genome, its memory footprint is typically around 3.2 GB. BOWTIE2 supports gapped,&lt;br /&gt;
local, and paired-end alignment modes. BOWTIE2 is part of a sequence alignment and analysis tool chain developed&lt;br /&gt;
at Johns Hopkins University, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics&lt;br /&gt;
and Computational Biology.  The other tools in this collection, CUFFLINKS, SAMTOOLS, and TOPHAT, are also installed at&lt;br /&gt;
the CUNY HPC Center. Additional information can be found at the BOWTIE2 home page here [http://bowtie-bio.sourceforge.net/bowtie2/index.shtml].&lt;br /&gt;
Information about our installation can be found here [[BOWTIE2]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
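A typical BOWTIE2 run has two steps: build the FM index from a reference FASTA once, then align reads against that index. The batch-script sketch below illustrates the two commands; the partition name, module name, and file names (reference.fa, reads.fastq) are placeholders, not site-specific values.

```shell
#!/bin/bash
#SBATCH --partition=production
#SBATCH --job-name=bt2_align
#SBATCH --ntasks=4

# Module name is an assumption; check `module avail` for the exact name
module load bowtie2

cd $SLURM_SUBMIT_DIR

# Build the FM index from the reference FASTA (one-time step)
bowtie2-build reference.fa ref_index

# Align single-end reads; -p runs multiple alignment threads
bowtie2 -p $SLURM_NTASKS -x ref_index -U reads.fastq -S aligned.sam
```

The index prefix (ref_index) passed to bowtie2-build is reused by the -x option of the aligner.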
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BPP2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BPP2 uses a Bayesian modeling approach to generate the posterior probabilities of species assignments taking into account uncertainties due to unknown gene trees and the ancestral coalescent process. For tractability, it relies on a user-specified guide tree to avoid integrating over all possible species delimitations.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Additional information can be found at the download site here [http://abacus.gene.ucl.ac.uk/software.html]. More information about our installation can be found here [[BPP2]].&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BROWNIE&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
BROWNIE is a program for analyzing rates of continuous character evolution and looking for substantial rate differences in different parts of a tree using likelihood&lt;br /&gt;
ratio tests and Akaike Information Criterion (AIC) statistics. It now also implements many other methods for examining trait evolution and methods for doing species&lt;br /&gt;
delimitation. More information about our installation can be found here [[BROWNIE]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== C == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;CGAL&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The Computational Geometry Algorithms Library (CGAL) offers geometric data structures and algorithms.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt; &lt;br /&gt;
Examples of these are triangulations (2D constrained triangulations, and Delaunay triangulations and periodic triangulations in &lt;br /&gt;
2D and 3D), Voronoi diagrams (for 2D and 3D points, 2D additively weighted Voronoi diagrams, and segment Voronoi diagrams), polygons &lt;br /&gt;
(Boolean operations, offsets, straight skeleton), polyhedra (Boolean operations), arrangements of curves and their applications &lt;br /&gt;
(2D and 3D envelopes, Minkowski sums), mesh generation (2D Delaunay mesh generation and 3D surface and volume mesh &lt;br /&gt;
generation, skin surfaces), geometry processing (surface mesh simplification, subdivision and parameterization, as well as &lt;br /&gt;
estimation of local differential properties, and approximation of ridges and umbilics), alpha shapes, convex hull &lt;br /&gt;
algorithms (in 2D, 3D and dD), search structures (kd trees for nearest neighbor search, and range and segment trees), &lt;br /&gt;
interpolation (natural neighbor interpolation and placement of streamlines), shape analysis, fitting, and distances &lt;br /&gt;
(smallest enclosing sphere of points or spheres, smallest enclosing ellipsoid of points, principal component analysis), and &lt;br /&gt;
kinetic data structures.&lt;br /&gt;
&lt;br /&gt;
The library is installed on PENZIAS.&lt;br /&gt;
&lt;br /&gt;
More information can be found here http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/CGAL. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;CONSED&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CONSED is a DNA sequence analysis finishing tool that provides sequence viewing, editing, alignment, and&lt;br /&gt;
assembly capabilities from an X Window System graphical user interface (GUI).  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It makes extensive use of other non-graphical&lt;br /&gt;
and underlying sequence analysis tools including PHRED, PHRAP, and CROSSMATCH that may also be used separately&lt;br /&gt;
and are described elsewhere in this document.  It also includes a viewer called BAMVIEW.  The CONSED tool chain is&lt;br /&gt;
developed and maintained at the University of Washington and is described&lt;br /&gt;
more completely here [http://bozeman.mbt.washington.edu/consed/consed.html].&lt;br /&gt;
CONSED is provided at the CUNY HPC Center under an academic license that allows use, but not the copying or&lt;br /&gt;
outbound transfer of any of the executables or files distributed under this academic license.  The license is not &lt;br /&gt;
transferable in any way and users wishing to run the application at their own site must acquire a license directly&lt;br /&gt;
from the authors.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center supports CONSED version 23.0 for interactive use on KARLE.  CONSED 23.0 and the tool&lt;br /&gt;
chain described above are also installed on ANDY to allow for the batch use of the underlying support tools mentioned above&lt;br /&gt;
and described in detail below.  In general, running GUI-based applications on ANDY&#039;s login node is discouraged.  There&lt;br /&gt;
should be little need to do this: KARLE is on the periphery of the CUNY HPC network, making login there direct, and&lt;br /&gt;
KARLE shares its HOME directory file system with ANDY, making files created on either system immediately available on&lt;br /&gt;
the other.&lt;br /&gt;
&lt;br /&gt;
Rather than rewrite portions of the CONSED manual here, users are directed to the manual&#039;s &amp;quot;Quick Tour&amp;quot; section&lt;br /&gt;
here [http://bozeman.mbt.washington.edu/consed/distributions/README.23.0.txt] and asked to walk through some&lt;br /&gt;
of the exercises after logging into KARLE.  If problems or questions come up, please post them to &amp;quot;hpchelp@csi.cuny.edu&amp;quot;.&lt;br /&gt;
The CONSED 23.0 distribution is installed on KARLE in the following directory:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/share/apps/consed/default&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
All the files in the distribution can be found there.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;CP2K&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CP2K is a program to perform atomistic and molecular simulations of solid state, liquid, molecular, and biological&lt;br /&gt;
systems.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It provides a general framework for different methods such as density functional theory (DFT) using&lt;br /&gt;
a mixed Gaussian and plane waves approach (GPW) and classical pair and many-body potentials. CP2K provides&lt;br /&gt;
state-of-the-art methods for efficient and accurate atomistic simulations. More information about our installation &lt;br /&gt;
can be found here [[CP2K]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;CUFFLINKS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CUFFLINKS assembles transcripts, estimates their abundances, and tests for differential expression and regulation in&lt;br /&gt;
RNA-Seq samples.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It accepts aligned RNA-Seq reads and assembles the alignments into a parsimonious set of transcripts.&lt;br /&gt;
CUFFLINKS then estimates the relative abundances of these transcripts based on how many reads support each one, taking&lt;br /&gt;
into account biases in library preparation protocols.  CUFFLINKS is part of a sequence alignment and analysis tool chain developed&lt;br /&gt;
at Johns Hopkins University, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics&lt;br /&gt;
and Computational Biology.  The other tools in this collection, BOWTIE, SAMTOOLS, and TOPHAT, are also installed at&lt;br /&gt;
the CUNY HPC Center. Additional information can be found at the CUFFLINKS home page here [http://abacus.gene.ucl.ac.uk/software.html].&lt;br /&gt;
More information about our installation can be found here [[CUFFLINKS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== D == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;DL_POLY&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
DL_POLY is a general purpose molecular dynamics simulation package developed at Daresbury Laboratory by W. Smith, T.R. Forester and I.T. Todorov. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Both serial and parallel versions are available. The original package was developed by the Molecular Simulation Group (now part of the Computational Chemistry Group, MSG) at Daresbury Laboratory under the auspices of the Engineering and Physical Sciences Research Council (EPSRC) for the EPSRC&#039;s Collaborative Computational Project for the Computer Simulation of Condensed Phases ( CCP5). Later developments were also supported by the Natural Environment Research Council through the eMinerals project. The package is the property of the Central Laboratory of the Research Councils, UK. More information about our installation and use can be found here [[DL_POLY]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== E == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ExaML&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ExaML stands for Exascale Maximum Likelihood (ExaML) code for phylogenetic inference using MPI. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The code is installed only on Penzias and implements the popular RAxML search algorithm for maximum likelihood based inference of phylogenetic trees. &lt;br /&gt;
&lt;br /&gt;
It uses a radically new MPI parallelization approach that yields improved parallel efficiency, in particular on partitioned multi-gene or whole-genome datasets.&lt;br /&gt;
&lt;br /&gt;
When using ExaML please cite the following paper:&lt;br /&gt;
&lt;br /&gt;
Alexey M. Kozlov, Andre J. Aberer, Alexandros Stamatakis: &amp;quot;ExaML Version 3: A Tool for Phylogenomic Analyses on Supercomputers.&amp;quot; Bioinformatics (2015) 31 (15): 2577-2579.&lt;br /&gt;
&lt;br /&gt;
It is up to 4 times faster than RAxML-Light [1].&lt;br /&gt;
&lt;br /&gt;
Like RAxML-Light, ExaML implements checkpointing, SSE3 and AVX vectorization, and memory-saving techniques.&lt;br /&gt;
&lt;br /&gt;
[1] A. Stamatakis, A.J. Aberer, C. Goll, S.A. Smith, S.A. Berger, F. Izquierdo-Carrasco: &amp;quot;RAxML-Light: A Tool for computing TeraByte Phylogenies&amp;quot;, Bioinformatics 2012; doi: 10.1093/bioinformatics/bts309.&lt;br /&gt;
&lt;br /&gt;
The run script for a parallel job is analogous to the one for running RAxML on Penzias and Andy.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ExaBayes&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ExaBayes is a software package for Bayesian tree inference. It is particularly suitable for large-scale analyses on computer clusters. It is installed on the PENZIAS server at the HPC Center. &lt;br /&gt;
The installed package is the MPI-parallel version. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Availability:&#039;&#039;&#039; PENZIAS&lt;br /&gt;
&#039;&#039;&#039;Module file:&#039;&#039;&#039; exabayes&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Citation&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
Fredrik Ronquist, Maxim Teslenko, Paul van der Mark, Daniel L. Ayres, Aaron Darling, Sebastian Höhna, Bret Larget, Liang Liu, Marc A. Suchard, and John P. Huelsenbeck. MrBayes 3.2: efficient Bayesian phylogenetic inference and model choice across a large model space. Systematic Biology, 61(3):539--542, May 2012.&lt;br /&gt;
&lt;br /&gt;
Alexei J. Drummond, Marc A. Suchard, Dong Xie, and Andrew Rambaut. Bayesian phylogenetics with BEAUti and the BEAST 1.7. Molecular Biology and Evolution, 29(8):1969--1973, August 2012. &lt;br /&gt;
&lt;br /&gt;
Clemens Lakner, Paul van der Mark, John P. Huelsenbeck, Bret Larget, and Fredrik Ronquist. Efficiency of Markov chain Monte Carlo tree proposals in Bayesian phylogenetics. Systematic Biology, 57(1):86--103, February 2008. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Use:&#039;&#039;&#039; An example SLURM script to run ExaBayes on PENZIAS is given below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition=production&lt;br /&gt;
#SBATCH --job-name=&amp;lt;name_of_job&amp;gt;&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=2&lt;br /&gt;
#SBATCH --export=ALL&lt;br /&gt;
&lt;br /&gt;
# SLURM starts the job in the submission directory&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
mpirun -np 2 exabayes &amp;lt;input_file&amp;gt; &amp;gt; output_file&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
More information about the application, along with sample workflows, is available on the ExaBayes web site:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://sco.h-its.org/exelixis/web/software/exabayes/manual/index.html#sec-11&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== F == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;FDPPDIV&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
FDPPDiv is a program for estimating divergence times on a fixed, rooted tree topology. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
FDPPDiv offers two alternative approaches to divergence time estimation. &lt;br /&gt;
The DPPDiv part refers to the Dirichlet Process Prior (DPP) model for divergence &lt;br /&gt;
time estimation, and the F prefix (for Fossil) refers to the new Fossil Birth-Death approach. &lt;br /&gt;
More information about our installation can be found here [[FDPPDIV]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== G == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GAMESS-US&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GAMESS is a program for ab initio molecular quantum chemistry.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Briefly, GAMESS can compute SCF wavefunctions including RHF, ROHF, UHF, GVB, and MCSCF. Correlation corrections to these SCF wavefunctions include configuration interaction, second-order perturbation theory, and coupled-cluster approaches, as well as the density functional theory approximation. Excited states can be computed by CI, EOM, or TD-DFT procedures. Nuclear gradients are available for automatic geometry optimization, transition state searches, or reaction path following. Computation of the energy hessian permits prediction of vibrational frequencies, with IR or Raman intensities. Solvent effects may be modeled by discrete Effective Fragment Potentials or continuum models such as the Polarizable Continuum Model. Numerous relativistic computations are available, including infinite-order two-component scalar corrections, with various spin-orbit coupling options. The Fragment Molecular Orbital method permits many of these sophisticated treatments to be applied to very large systems by dividing the computation into small fragments. Nuclear wavefunctions can also be computed, in VSCF, or with explicit treatment of nuclear orbitals by the NEO code. More information, including code, can be found here [[GAMESS-US]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GARLI&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GARLI is a program that performs phylogenetic inference using the maximum-likelihood criterion.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Several sequence types are supported, including nucleotide, amino acid and codon. Version 2.0 adds support for&lt;br /&gt;
partitioned models and morphology-like data types. It is usable on all operating systems, and is written and&lt;br /&gt;
maintained by Derrick Zwickl at the University of Texas at Austin.  Additional information can be found&lt;br /&gt;
on the GARLI Wiki here [https://www.nescent.org/wg_garli/Main_Page]. More information about our installation &lt;br /&gt;
can be found here [[GARLI]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GAUSS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GAUSS is an easy-to-use data analysis, mathematical, and statistical environment based on the fast and efficient GAUSS Matrix Programming Language.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GAUSS is used to solve real-world data analysis problems of exceptionally large &lt;br /&gt;
scale. GAUSS is currently available on ANDY. At the CUNY HPC Center&lt;br /&gt;
GAUSS is typically run in serial mode. (Note: GAUSS should not be confused with the&lt;br /&gt;
computational chemistry application Gaussian.) More information about our installation can &lt;br /&gt;
be found here [[GAUSS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Gaussian09&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Gaussian09 is third-party, commercially licensed software from Gaussian, Inc. It is a set of programs for calculating electronic structure.  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Gaussian09 is available for general use only on ANDY. The Gaussian User Guide can be found at [http://www.gaussian.com]. More information about our installation can be found here [[GAUSSIAN09]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GMP&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
GMP is a library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and &lt;br /&gt;
floating-point numbers. There is no practical limit to the precision except those implied by the &lt;br /&gt;
available memory in the machine GMP runs on. GMP has a rich set of functions, and the functions have a &lt;br /&gt;
regular interface. The library is installed on PENZIAS.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Gnuplot&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Gnuplot is a portable command-line driven graphing utility. It is installed on the following systems:&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
:* Karle under /usr/bin/gnuplot&lt;br /&gt;
:* Andy under /share/apps/gnuplot/default/bin/gnuplot&lt;br /&gt;
&lt;br /&gt;
Extensive documentation of gnuplot is available at the [http://www.gnuplot.info/  gnuplot&#039;s homepage].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GenomePop2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GenomePop2 is a newer, specialized version of the older program GenomePop. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GenomePop2 is designed to manage SNPs under more flexible and useful settings that are controlled by the user.  &lt;br /&gt;
If you need models with more than 2 alleles you should use the older GenomePop version of the program.  &lt;br /&gt;
&lt;br /&gt;
GenomePop2 allows the forward simulation of sequences of biallelic positions. As in the previous version, a number of evolutionary&lt;br /&gt;
and demographic settings are allowed. Several populations under any migration model can be implemented. Each population consists&lt;br /&gt;
of a number N of individuals.  Each individual is represented by one (haploids) or two (diploids) chromosomes with constant or variable&lt;br /&gt;
(hotspots) recombination between binary sites. The fitness model is multiplicative, with each derived allele having a multiplicative effect&lt;br /&gt;
of (1-s * h-E) onto the global fitness value. By default E=0 and h=0.5 in diploids, but 1 in homozygotes or in haploids. Selective nucleotide&lt;br /&gt;
sites undergoing directional selection (positive or negative) in different populations can be defined. In addition, bottlenecks and/or&lt;br /&gt;
population expansion scenarios can be set up by the user for a desired number of generations. Several runs can be executed and&lt;br /&gt;
a sample of user-defined size is obtained for each run and population.  For more detail on how to use GenomePop2, please visit the&lt;br /&gt;
web site here [http://webs.uvigo.es/acraaj/GenomePop2.htm]. More information about our installation can be found here [[GENOMEPOP2]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GROMACS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GROMACS (Groningen Machine for Chemical Simulations)&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GROMACS is a full-featured suite of free software, licensed under the GNU&lt;br /&gt;
General Public License to perform molecular dynamics simulations -- in other words, to simulate the behavior of molecular&lt;br /&gt;
systems with hundreds to millions of particles using Newton&#039;s equations of motion.  It is primarily used for research on&lt;br /&gt;
proteins, lipids, and polymers, but can be applied to a wide variety of chemical and biological research questions.&lt;br /&gt;
&lt;br /&gt;
Details and submission scripts for production runs can be found at:&lt;br /&gt;
http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/gromacs&lt;br /&gt;
Please note that preparing a molecular system for simulation with the GROMACS tools cannot be done on the login node. Instead, users must either use their own workstation or the interactive or development queues.&lt;br /&gt;
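A production GROMACS run can be submitted with a batch script analogous to the ExaBayes example on this page. The sketch below is a minimal illustration only: the partition name, module name, core count, and the run input file topol.tpr are assumptions, and the .tpr file must first be prepared with the GROMACS preprocessing tools on a workstation or an interactive node.

```shell
#!/bin/bash
#SBATCH --partition=production
#SBATCH --job-name=gmx_md
#SBATCH --nodes=1
#SBATCH --ntasks=8

# Module name is an assumption; check `module avail` for the exact name
module load gromacs

cd $SLURM_SUBMIT_DIR

# Run an MD production step; topol.tpr is a pre-built run input file
# and -deffnm sets the common prefix for all output files
mpirun -np $SLURM_NTASKS gmx_mpi mdrun -s topol.tpr -deffnm md_run
```

The mdrun step is the only part that belongs on the compute nodes; system preparation (pdb2gmx, solvate, grompp) stays off the login node as noted above.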
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GPAW&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It uses real-space uniform grids and multigrid methods, atom-centered basis functions, or&lt;br /&gt;
plane waves. GPAW calculations are controlled through scripts written in the programming language &lt;br /&gt;
Python. GPAW relies on the Atomic Simulation Environment (ASE), which is a Python package&lt;br /&gt;
that helps to describe atoms. The ASE package also handles molecular dynamics, analysis, &lt;br /&gt;
visualization, geometry optimization and more. More information about our installation can &lt;br /&gt;
be found here [[GPAW]].&lt;br /&gt;
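A GPAW calculation script can be sketched as follows; this is a minimal illustration only, assuming a standard GPAW/ASE installation, with placeholder parameters that are not production values:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# minimal_gpaw.py -- illustrative sketch; parameters are placeholders&lt;br /&gt;
from ase import Atoms&lt;br /&gt;
from gpaw import GPAW, PW&lt;br /&gt;
&lt;br /&gt;
# A hydrogen molecule centered in a vacuum box&lt;br /&gt;
h2 = Atoms(&#039;H2&#039;, positions=[(0, 0, 0), (0, 0, 0.74)])&lt;br /&gt;
h2.center(vacuum=3.0)&lt;br /&gt;
&lt;br /&gt;
# Plane-wave mode with the PBE exchange-correlation functional&lt;br /&gt;
h2.calc = GPAW(mode=PW(300), xc=&#039;PBE&#039;, txt=&#039;h2.txt&#039;)&lt;br /&gt;
print(h2.get_potential_energy())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;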
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== H ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Hapsembler&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Hapsembler is a haplotype-specific genome assembly toolkit that is designed for genomes that are rich in SNPs and other types of polymorphism. Hapsembler can be used to assemble reads from a variety of platforms including Illumina and Roche/454.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;Hapsembler is currently installed on the Appel system. To access the Hapsembler binaries, load the hapsembler module with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load hapsembler&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;HOOMD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Performs general purpose particle dynamics simulations, taking advantage of NVIDIA GPUs to attain a level of performance&lt;br /&gt;
equivalent to many processor cores on a fast cluster.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Unlike some other applications in the particle and molecular dynamics space, HOOMD&#039;s developers have worked to implement &lt;br /&gt;
all of the code&#039;s computationally intensive kernels on the GPU, although currently only single-node, single-GPU or &lt;br /&gt;
OpenMP-GPU runs are possible. There is no MPI-GPU or distributed parallel GPU version available at this time.&lt;br /&gt;
&lt;br /&gt;
HOOMD&#039;s object-oriented design patterns make it both versatile and expandable. Various types of potentials, integration methods&lt;br /&gt;
and file formats are currently supported, and more are added with each release. The code is available and open source, so anyone&lt;br /&gt;
can write a plugin or change the source to add additional functionality.  Simulations are configured and run using simple python&lt;br /&gt;
scripts, allowing complete control over the force field choice, integrator, all parameters, how many time steps are run, etc.&lt;br /&gt;
The scripting system is designed to be as simple as possible for the non-programmer.&lt;br /&gt;
&lt;br /&gt;
The HOOMD development effort is led by the Glotzer group at the University of Michigan, but many groups from different universities&lt;br /&gt;
have contributed code that is now part of the HOOMD main package, see the credits page for the full list. The HOOMD website and&lt;br /&gt;
documentation are available here [http://codeblue.umich.edu/hoomd-blue/about.html]. More information about our installation can be&lt;br /&gt;
found here [[HOOMD]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;HOPSPACK&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
HOPSPACK stands for Hybrid Optimization Parallel Search Package, a framework designed to help users solve a wide range of derivative-free optimization problems.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;The latter can be noisy, non-convex, or non-smooth. The basic optimization problem addressed is to minimize an objective function f(x) of n unknowns subject to the constraints: A_I x ≥ b_I, A_E x = b_E, c_I(x) ≥ 0, c_E(x) = 0, and l ≤ x ≤ u.&lt;br /&gt;
The first two constraints specify linear inequalities and equalities with coefficient matrices A_I and A_E. The next two constraints describe nonlinear inequalities and equalities captured in the functions c_I(x) and c_E(x). The final constraints denote lower and upper bounds on the variables. HOPSPACK allows variables to be continuous or integer-valued and has provisions for multi-objective optimization problems. In general, the functions f(x), c_I(x), and c_E(x) can be noisy and nonsmooth, although most algorithms perform best on deterministic functions with continuous derivatives.&lt;br /&gt;
&lt;br /&gt;
Users may design and implement their own solver, either by writing their own code or by building on existing solvers already in the framework. Because all solvers (called citizens) are members of the same global class, they can share assigned resources.&lt;br /&gt;
The main features of the package are:&lt;br /&gt;
&lt;br /&gt;
-	Only function values are required for the optimization.&lt;br /&gt;
-	The user must provide a separate program that can evaluate the objective and nonlinear constraint functions at a given point. &lt;br /&gt;
-	A robust implementation of the Generating Set Search (GSS) solver is supplied, including the capability to handle linear constraints. &lt;br /&gt;
-	Multiple solvers can run simultaneously and are easily configured to share information.&lt;br /&gt;
-	Solvers may share a cache of computed function and constraint evaluations to eliminate duplicate work.&lt;br /&gt;
-	Solvers can initiate and control sub-problems.&lt;br /&gt;
Continuation -&amp;gt; [[HOPSPACK]].&lt;br /&gt;
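The separate evaluation program mentioned above can be sketched as follows; the command-line and file conventions shown are illustrative assumptions only, not HOPSPACK&#039;s actual evaluator interface, which is defined in its manual:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# evaluate.py -- illustrative sketch; HOPSPACK defines its own evaluator file format&lt;br /&gt;
import sys&lt;br /&gt;
&lt;br /&gt;
# Read the trial point x from the input file named on the command line&lt;br /&gt;
with open(sys.argv[1]) as fh:&lt;br /&gt;
    x = [float(v) for v in fh.read().split()]&lt;br /&gt;
&lt;br /&gt;
# Objective: a simple quadratic placeholder; replace with the real model&lt;br /&gt;
f = sum(v * v for v in x)&lt;br /&gt;
&lt;br /&gt;
# Write f(x) to the output file where the solver can read it back&lt;br /&gt;
with open(sys.argv[2], &#039;w&#039;) as fh:&lt;br /&gt;
    fh.write(&#039;%.12g\n&#039; % f)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;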
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;HONDO PLUS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Hondo Plus is a versatile electronic structure code that combines work from&lt;br /&gt;
the original Hondo application developed by Harry King in the lab of Michel Dupuis&lt;br /&gt;
and John Rys, and that of numerous subsequent contributors. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It is currently distributed from the research lab of Dr. Donald Truhlar at the University &lt;br /&gt;
of Minnesota.  Part of the advantage of Hondo Plus is the availability of source&lt;br /&gt;
implementations of a wide variety of model chemistries developed over its lifetime&lt;br /&gt;
that researchers can adapt to their particular needs.  The license to use the code requires&lt;br /&gt;
a literature citation which is documented in the Hondo Plus 5.1 manual found&lt;br /&gt;
at:&lt;br /&gt;
&lt;br /&gt;
http://comp.chem.umn.edu/hondoplus/HONDOPLUS_Manual_v5.1.2007.2.17.pdf &lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[HONDO PLUS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;HUMAnN2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
HUMAnN is a pipeline for efficiently and accurately profiling the presence/absence and abundance of microbial pathways in a community from metagenomic or metatranscriptomic sequencing data (typically millions of short DNA/RNA reads). HUMAnN2 is the next generation of HUMAnN (HMP Unified Metabolic Analysis Network). Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/humann2&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== I ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;IMa2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The IMa2 application performs ‘Isolation with Migration’ calculations using Bayesian inference and Markov &lt;br /&gt;
chain Monte Carlo methods. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The only major conceptual addition that distinguishes IMa2 from the&lt;br /&gt;
original IMa program is that it can handle data from multiple populations. This requires that the user &lt;br /&gt;
specify a phylogenetic tree. Importantly, the tree must be rooted, and the sequence in time of internal&lt;br /&gt;
nodes must be known and specified. More information on the IMa2 and IMa can be found in the user&lt;br /&gt;
manual here [http://lifesci.rutgers.edu/%7Eheylab/ProgramsandData/Programs/IMa2/Using_IMa2_8_24_2011.pdf].&lt;br /&gt;
Information about our installation can be found here [[IMA2]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;I-TASSER&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
I-TASSER is a platform for protein structure and function predictions. 3D models are built based on multiple-threading alignments by LOMETS and iterative template fragment assembly simulations; function insights are derived by matching the 3D models with the BioLiP protein function database. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/itasser&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== J ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;JULIA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
Julia is a high-level, high-performance dynamic programming language for technical computing, with syntax that is familiar to users of other technical computing environments. Julia is installed on Penzias.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== L ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;LAMARC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
LAMARC is a program which estimates population-genetic parameters such as population size, population growth rate,&lt;br /&gt;
recombination rate, and migration rates.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It approximates a summation over all possible genealogies that could explain&lt;br /&gt;
the observed sample, which may be sequence, SNP, microsatellite, or electrophoretic data.  LAMARC and its sister program&lt;br /&gt;
MIGRATE are successor programs to the older programs Coalesce, Fluctuate, and Recombine, which are no longer being&lt;br /&gt;
supported.  These programs are memory-intensive, but can run effectively on workstations. They are supported on a variety&lt;br /&gt;
of operating systems.  For more detail on LAMARC please visit the website here [http://evolution.genetics.washington.edu/lamarc/index.html],&lt;br /&gt;
read this paper [http://evolution.genetics.washington.edu/lamarc/download/bioinformatics2006-lamarc2.0.pdf], and look&lt;br /&gt;
at the documentation here [http://evolution.genetics.washington.edu/lamarc/documentation/index.html]. More information&lt;br /&gt;
about our installation can be found here [[LAMARC]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;LAMMPS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
LAMMPS is a classical molecular dynamics code that models an ensemble of particles in a liquid, solid, or gaseous state.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It can model atomic, polymeric, biological, metallic, granular, and coarse-grained systems using a variety of force fields and boundary conditions.  &lt;br /&gt;
LAMMPS runs efficiently on single-processor desktop or laptop machines, but is also designed for parallel computers, including clusters with and without GPUs. &lt;br /&gt;
It will run on any parallel machine that compiles C++ and supports the MPI message-passing library. This includes distributed- or shared-memory parallel &lt;br /&gt;
machines and Beowulf-style clusters. LAMMPS can model systems with only a few particles up to millions or billions. LAMMPS is freely available, open-source &lt;br /&gt;
code, distributed under the terms of the GNU General Public License, which means you can use or modify the code however you wish. LAMMPS is designed to be easy to &lt;br /&gt;
modify or extend with new capabilities, such as new force fields, atom types, boundary conditions, or diagnostics. A complete description of LAMMPS can be found &lt;br /&gt;
in its on-line manual here [http://lammps.sandia.gov/doc/Manual.html] or from the full PDF manual here [http://lammps.sandia.gov/doc/Manual.pdf]. Information&lt;br /&gt;
about our installation can be found here [[LAMMPS]].&lt;br /&gt;
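As a sketch, a parallel run might look like the following; the module name, binary name, and input file here are assumptions, so consult the [[LAMMPS]] page for the supported submission scripts:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Load the LAMMPS module (exact module name may differ)&lt;br /&gt;
module load lammps&lt;br /&gt;
# Run an input script on 8 MPI ranks; the binary may be named lmp, lmp_mpi, etc.&lt;br /&gt;
mpirun -np 8 lmp_mpi -in in.melt&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;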
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;LS-DYNA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From its early development in the 1970s, LS-DYNA has evolved into a general purpose material&lt;br /&gt;
stress, collision, and crash analysis program with many built-in material and structural element&lt;br /&gt;
models. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In recent years, the code has also been adapted for both OpenMP and MPI parallel execution&lt;br /&gt;
on a variety of platforms.  The most recent version, LS-DYNA 7.1.2, is installed on &lt;br /&gt;
ANDY at the CUNY HPC Center under an academic license held by the City College of New York.&lt;br /&gt;
The use of this license to do work that is commercial in any way is prohibited.&lt;br /&gt;
&lt;br /&gt;
Details on LS-DYNA&#039;s use, input deck construction, and execution options can be found in the LS-DYNA&lt;br /&gt;
manual here [http://ftp.lstc.com/user/manuals/ls-dyna_971_manual_k_rev1.pdf]. All files related&lt;br /&gt;
to the HPC Center installation of version 971 (executables and example inputs) are located in:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
/share/apps/lsdyna/default/[bin,examples]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[LSDYNA]].&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== M ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MAGMA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
MAGMA is a library similar to LAPACK but for hybrid architectures. MAGMA provides implementations for CUDA, Intel Xeon Phi, and OpenCL. &lt;br /&gt;
On CUNY HPCC systems, MAGMA is installed in its CUDA variant only on Penzias.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MATHEMATICA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
“Mathematica” is a fully integrated technical computing system that combines fast, high-precision numerical and symbolic computation with data visualization and programming capabilities. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Mathematica is currently installed on the CUNY HPC Center&#039;s ANDY cluster (andy.csi.cuny.edu) and the KARLE standalone server (karle.csi.cuny.edu). The basics of running Mathematica on CUNY HPC systems are presented here. Additional information on how to use Mathematica can be found at http://www.wolfram.com/learningcenter/&lt;br /&gt;
&lt;br /&gt;
More information is available in this wiki, find it here [[MATHEMATICA]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MATLAB&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The MATLAB high-performance language for technical computing&lt;br /&gt;
integrates computation, visualization, and programming in an&lt;br /&gt;
easy-to-use environment where problems and solutions are expressed in&lt;br /&gt;
familiar mathematical notation.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Typical uses include:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Math and computation&lt;br /&gt;
&lt;br /&gt;
Algorithm development&lt;br /&gt;
&lt;br /&gt;
Data acquisition&lt;br /&gt;
&lt;br /&gt;
Modeling, simulation, and prototyping&lt;br /&gt;
&lt;br /&gt;
Data analysis, exploration, and visualization&lt;br /&gt;
&lt;br /&gt;
Scientific and engineering graphics&lt;br /&gt;
&lt;br /&gt;
Application development, including graphical user interface building&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[MATLAB]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MET (Model Evaluation Tools)&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MET was developed by the National Center for Atmospheric Research (NCAR) Developmental Testbed Center (DTC) through the generous support of the U.S. Air Force Weather Agency (AFWA) and the National Oceanic and Atmospheric Administration (NOAA).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;MET is designed to be a highly-configurable, state-of-the-art suite of verification tools. It was developed using output from the Weather Research and Forecasting (WRF) modeling system but may be applied to the output of other modeling systems as well.&lt;br /&gt;
&lt;br /&gt;
MET provides a variety of verification techniques, including:&lt;br /&gt;
&lt;br /&gt;
*Standard verification scores comparing gridded model data to point-based observations&lt;br /&gt;
*Standard verification scores comparing gridded model data to gridded observations&lt;br /&gt;
*Spatial verification methods comparing gridded model data to gridded observations using neighborhood, object-based, and intensity-scale decomposition approaches&lt;br /&gt;
*Probabilistic verification methods comparing gridded model data to point-based or gridded observations&lt;br /&gt;
&lt;br /&gt;
More information about use and set-up can be found here [[MET]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Migrate&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Migrate estimates population parameters, effective population sizes and migration rates&lt;br /&gt;
of n populations, using genetic data.  It uses a coalescent theory approach taking into&lt;br /&gt;
account the history of mutations and the uncertainty of the genealogy.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;The estimates of the parameter values are achieved by either a Maximum likelihood (ML) approach or Bayesian&lt;br /&gt;
inference (BI). Migrate&#039;s output is presented in a text file and in a PDF file. The PDF file&lt;br /&gt;
will eventually contain all possible analyses, including histograms of posterior distributions.&lt;br /&gt;
More information about our installation can be found here [[MIGRATE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MPFR&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The MPFR library is a C library for multiple-precision floating-point computations with correct rounding. MPFR has been continuously supported by &lt;br /&gt;
INRIA, and the current main authors come from the Caramel and AriC project-teams at Loria (Nancy, France) and LIP (Lyon, France), respectively; see &lt;br /&gt;
more on the credits page.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
MPFR is based on the GMP multiple-precision library. The main goal of MPFR is to provide a library for multiple-precision &lt;br /&gt;
floating-point computation which is both efficient and has a well-defined semantics. It copies the good ideas from the ANSI/IEEE-754 standard for &lt;br /&gt;
double-precision floating-point arithmetic (53-bit significand). The library is installed on PENZIAS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MRBAYES&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MrBayes is a program for the Bayesian estimation of phylogeny.  Bayesian inference of&lt;br /&gt;
phylogeny is based upon a quantity called the posterior probability distribution of trees,&lt;br /&gt;
which is the probability of a tree conditioned on certain observations.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The conditioning is&lt;br /&gt;
accomplished using Bayes&#039;s theorem. The posterior probability distribution of trees is&lt;br /&gt;
impossible to calculate analytically; instead, MrBayes uses a simulation technique called&lt;br /&gt;
Markov chain Monte Carlo (or MCMC) to approximate the posterior probabilities of trees.&lt;br /&gt;
More information about our installation can be found here [[MRBAYES]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;msABC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
msABC is a program for simulating various neutral evolutionary demographic scenarios&lt;br /&gt;
based on the software ms (Hudson 2002). msABC extends ms, calculating a multitude of&lt;br /&gt;
summary statistics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Therefore, msABC is suitable for performing the sampling step of an&lt;br /&gt;
Approximate Bayesian Computation analysis (ABC), under various neutral demographic&lt;br /&gt;
models. The main advantages of msABC are (i) use of various prior distributions, such as&lt;br /&gt;
uniform, Gaussian, log-normal, gamma, (ii) implementation of a multitude of summary statistics&lt;br /&gt;
for one or more populations, (iii) efficient implementation, which allows the analysis of&lt;br /&gt;
hundreds of loci and chromosomes even on a single computer, (iv) extended flexibility, such&lt;br /&gt;
as simulation of loci of variable size and simulation of missing data.&lt;br /&gt;
More information about our installation can be found here [[msABC]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MSMS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MSMS is a tool to generate sequence samples under both neutral models and single locus selection models.&lt;br /&gt;
MSMS permits the full range of demographic models provided by its relative MS (Hudson, 2002).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
In particular, it allows for multiple demes with arbitrary migration patterns, population growth and decay in each deme, and&lt;br /&gt;
for population splits and mergers. Selection (including dominance) can depend on the deme and also change&lt;br /&gt;
with time.&lt;br /&gt;
More information about our installation can be found here [[MSMS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== N ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;NAMD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NAMD is a parallel molecular dynamics code designed for high-performance simulation&lt;br /&gt;
of large biomolecular systems. [http://www.ks.uiuc.edu/Research/namd].&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The main server for molecular dynamics calculations is PENZIAS, which supports both GPU and non-GPU versions of NAMD.&lt;br /&gt;
However, the MPI-only (no GPU support) parallel versions of NAMD are also installed on SALK and ANDY. &lt;br /&gt;
More information about our installation can be found here [[NAMD]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Network Simulator-2 (NS2)&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NS2 is a discrete event simulator targeted at networking research. NS2 provides&lt;br /&gt;
substantial support for simulation of TCP, routing, and multicast protocols over&lt;br /&gt;
wired and wireless (local and satellite) networks.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It is installed on BOB at the CUNY HPC Center. For more detailed information look [http://www.isi.edu/nsnam/ns/ here].&lt;br /&gt;
More information about our installation can be found here [[NS2]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;NWChem&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NWChem is an ab initio computational chemistry software package which also includes molecular dynamics (MM, MD) and coupled, quantum mechanical and molecular dynamics functionality (QM-MD).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
NWChem has been developed by the Molecular Sciences Software group at the Department of Energy&#039;s EMSL. The software is available on PENZIAS and ANDY.&lt;br /&gt;
More information about our installation can be found here [[NWChem]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== O == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Octopus&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Octopus is a pseudopotential real-space package aimed at the simulation of the electron-ion dynamics of one-, two-, and three-dimensional finite systems subject to time-dependent electromagnetic fields.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The program is based on time-dependent density-functional theory (TDDFT) in the Kohn-Sham scheme. All quantities are expanded in a regular mesh in real space, and the simulations are performed in real time. The program has been successfully used to calculate linear and non-linear absorption spectra, harmonic spectra, laser induced fragmentation, etc. of a variety of systems.&lt;br /&gt;
More information about our installation can be found here [[OCTOPUS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;OpenMM&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenMM is both a library and a stand-alone application which provides tools for modern molecular modeling simulation. As a library it can be hooked into any code, allowing that code to do molecular modeling with minimal extra coding.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Moreover, OpenMM has a strong emphasis on hardware acceleration via GPU, thus providing not just a consistent API but also much greater performance than nearly any other code available. OpenMM was developed as part of the Physics-Based Simulation project led by Prof. Pande.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;OpenFOAM&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenFOAM is first and foremost a library which users may incorporate into their own codes. OpenFOAM is installed on PENZIAS.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
More information about our installation can be found here [[OpenFOAM]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;OpenSees&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenSees, the Open System for Earthquake Engineering Simulation, is an object-oriented, open source software framework.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It allows users to create both serial and parallel finite element computer applications for simulating the response of structural and geotechnical systems subjected to earthquakes and other hazards. OpenSees is primarily written in C++ and uses several Fortran and C numerical libraries for linear equation solving, and material and element routines. The software is installed on PENZIAS.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ORCA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ORCA is an electronic structure program capable of carrying out geometry optimizations and of predicting a large number of spectroscopic parameters at different levels of theory.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Besides Hartree-Fock theory, density functional theory (DFT), and semiempirical methods, high-level ab initio quantum chemical methods, based on the configuration interaction and coupled cluster methods, are included in ORCA to an increasing degree.&lt;br /&gt;
More information about our installation can be found here [[ORCA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== P == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ParGAP&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ParGAP is built on top of the GAP system. The latter is a system for computational discrete algebra, with particular emphasis on Computational Group Theory. GAP provides a programming language, a library of thousands of functions implementing algebraic algorithms written in the GAP language, as well as large data libraries of algebraic objects.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The ParGAP (Parallel GAP) package itself provides a way of writing parallel programs using the GAP language. Former names of the package were ParGAP/MPI and GAP/MPI; the word MPI refers to Message Passing Interface, a well-known standard for parallelism. ParGAP is based on the MPI standard, and this distribution includes a subset implementation of MPI, to provide a portable layer with a high level interface to BSD sockets.&lt;br /&gt;
More information about our installation can be found here [[ParGAP]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;POPABC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PopABC is a computer package for estimating the historical demographic parameters of closely related species/populations (e.g. population size, migration rate, mutation rate, recombination rate, splitting events) within an isolation-with-migration model.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The software performs coalescent simulation in the framework of approximate Bayesian computation (ABC, Beaumont et al, 2002). PopABC can also be used to perform Bayesian model choice to discriminate between different demographic scenarios. The program can be used either for research or for education and teaching purposes. Further details and a manual can be found at the POPABC website here [http://code.google.com/p/popabc]&lt;br /&gt;
More information about our installation can be found here [[POPABC]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;PHOENICS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PHOENICS is an integrated Computational Fluid Dynamics (CFD) package for the preparation, simulation, and visualization of&lt;br /&gt;
processes involving fluid flow, heat or mass transfer, chemical reaction, and/or combustion in engineering equipment, building&lt;br /&gt;
design, and the environment.  More detail is available at the CHAM website, here http://www.cham.co.uk. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Although we expect most users to pre- and post-process their jobs on office-local clients, the CUNY HPC Center has installed&lt;br /&gt;
the Unix version of the &#039;&#039;entire&#039;&#039; PHOENICS package on ANDY.   PHOENICS is installed in /share/apps/phoenics/default where all&lt;br /&gt;
the standard PHOENICS directories are located (d_allpro, d_earth, d_enviro, d_photo, d_priv1, d_satell, etc.).  Of particular interest&lt;br /&gt;
on ANDY is the MPI parallel version of the &#039;earth&#039; executable &#039;parexe&#039; which makes full use of the parallel processing power of the &lt;br /&gt;
ANDY cluster for larger individual jobs.  While the parallel scaling properties of PHOENICS jobs will vary depending on the job size,&lt;br /&gt;
processor type, and the cluster interconnect, larger workloads will generally scale and run efficiently on 8 to 32 processors,&lt;br /&gt;
while smaller problems will scale efficiently only up to about 4 processors.  More detail on parallel PHOENICS is available at&lt;br /&gt;
http://www.cham.co.uk/products/parallel.php.   Aside from the tightly coupled MPI parallelism of &#039;parexe&#039;, users can run multiple&lt;br /&gt;
instances of the non-parallel modules on ANDY (including the serial &#039;earexe&#039; module) when a parametric approach can be used&lt;br /&gt;
to solve their problems.&lt;br /&gt;
More information about our installation can be found here [[PHOENICS]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;PHRAP-PHRED&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PHRAP and PHRED are part of the DNA sequence analysis tool set that also includes the programs&lt;br /&gt;
CROSSMATCH and SWAT.  These tools are described in detail here [http://www.phrap.org/phredphrapconsed.html],&lt;br /&gt;
but a brief description of both, extracted from their manuals, follows.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
PHRED and PHRAP (along with CONSED) can be used for both small sequence assemblies and larger shotgun analyses. This makes the&lt;br /&gt;
tools a perhaps under-utilized set for smaller non-genomic groups.  Some variables may need to be adjusted,&lt;br /&gt;
particularly in CONSED, but researchers who have multiple sequences from a small locus can use the &lt;br /&gt;
suite, starting from their chromatogram files.  &lt;br /&gt;
More information about our installation can be found here [[PHRAP-PHRED]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;PyRAD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Reduced-representation genomic sequence data (e.g., RADseq, GBS, ddRAD) are commonly used to study population-level research questions and consequently most software packages for assembling or analyzing such data are designed for sequences with little variation across samples.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Phylogenetic analyses typically include species with deeper divergence times (more variable loci across samples) and thus a different approach to clustering and identifying orthologs will perform better. pyRAD is intended for use with any type of restriction-site associated DNA. It currently supports RAD, ddRAD, PE-ddRAD, GBS, PE-GBS, EzRAD, PE-EzRAD, 2B-RAD, nextRAD, and can be extended to other types.&lt;br /&gt;
More information about our installation can be found here [[PyRAD]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Python&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Python is a programming language that lets you work more quickly and integrate your systems more effectively. You can learn to use Python and see almost immediate gains in productivity and lower maintenance costs. [http://www.python.org/]&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
There are two supported versions installed on the ANDY system: &lt;br /&gt;
&lt;br /&gt;
* Python 3.1.3 located under /share/apps/python/3.1.3/bin&lt;br /&gt;
* Python 2.7.3 located under /share/apps/epd/7.3-2/bin&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[PYTHON]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Installing Python packages&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Users may install Python packages/modules in their own space. Many packages available in Python repositories can be installed easily with the pip package manager, which is available in any of the Anaconda and Miniconda builds.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Users must remember that running pip without first loading a Python module will install packages against the system Python, which is present on the login node only. The Python interpreters provided via modules are installed under /share/usr/compilers/python and are available on all nodes. Thus, when installing packages in user space, it is very important to follow the procedure outlined below. The following examples demonstrate how users can install the package &amp;quot;guppy&amp;quot; in their own space:&lt;br /&gt;
&lt;br /&gt;
For Python 2.7.13 in Anaconda build:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load python/2.7.13_anaconda&lt;br /&gt;
pip install guppy --user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For Python 3.6.0 in Anaconda build&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load python/3.6.0_anaconda&lt;br /&gt;
pip install guppy --user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For Python 2.7.13 in Miniconda&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load python/miniconda2&lt;br /&gt;
pip install guppy --user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For Python 3.6.0 in Miniconda 3&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load python/miniconda3&lt;br /&gt;
pip install guppy --user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check whether the package is properly installed, type:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
pip list | grep guppy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
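As an alternative to grepping the pip list output, the short Python sketch below checks importability directly and prints where --user installs land for the active interpreter (guppy is simply the example package from above; any package name can be substituted):&lt;br /&gt;

```python
import importlib.util
import site

# Directory where a user-level pip install places packages for
# the currently active interpreter.
print(site.USER_SITE)

# True if the named package can be resolved, without importing it.
def is_installed(name):
    return importlib.util.find_spec(name) is not None

print(is_installed("json"))   # stdlib module, always resolvable
print(is_installed("guppy"))  # True only after the pip install above
```

Note that the reported user-site directory depends on which Python module is loaded, which is exactly why the module must be loaded before running pip.&lt;br /&gt;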
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== Q == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;QIIME&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
QIIME (pronounced &amp;quot;chime&amp;quot;) stands for Quantitative Insights Into Microbial Ecology. QIIME is a pipeline application that uses numerous third-party applications.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
QIIME takes users from their raw sequencing output through initial analyses such as OTU picking, taxonomic assignment, and construction of phylogenetic trees from representative sequences of OTUs, and through downstream statistical analysis, visualization, and production of publication-quality graphics.&lt;br /&gt;
More information about our installation can be found here [[QIIME]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== R == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;R&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
R is a free software environment for statistical computing and graphics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:15px;&amp;quot; &amp;gt;General Notes&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The R language has become a de facto standard among statisticians for the development of statistical software, and it is widely used for data analysis. R is available on the following HPCC servers: Karle, Penzias, Appel, and Andy. Karle is the only machine where R can be used without submitting jobs to the SLURM manager; on all other systems users must submit their R jobs via the SLURM batch scheduler.&lt;br /&gt;
More information about our installation can be found here [[R]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;RAXML&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Randomized Axelerated Maximum Likelihood (RAxML) is a program for sequential and parallel&lt;br /&gt;
maximum likelihood based inference of large phylogenetic trees.  It is a descendent of fastDNAml&lt;br /&gt;
which in turn was derived from Joe Felsentein’s DNAml which is part of the PHYLIP package.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
RAxML is installed at the CUNY HPC Center on ANDY, where multiple versions are available in both serial and MPI-parallel form. The MPI-parallel version should be run on four or more cores and is also installed on PENZIAS. &lt;br /&gt;
More information about our installation can be found here [[RAXML]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== S == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;SAGE&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Sage can be used to study elementary and advanced, pure and applied mathematics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
This includes a huge range of mathematics, including basic algebra, calculus, elementary to very&lt;br /&gt;
advanced number theory, cryptography, numerical computation, commutative algebra, group&lt;br /&gt;
theory, combinatorics, graph theory, exact linear algebra and much more.&lt;br /&gt;
More information about our installation can be found here [[SAGE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;SAMTOOLS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
SAMTOOLS provides various utilities for manipulating alignments in the SAM format, including sorting,&lt;br /&gt;
merging, indexing and generating alignments in a per-position format.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
SAM (Sequence Alignment/Map) is a generic format for storing large nucleotide sequence alignments. It is a compact format that&lt;br /&gt;
aims to:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Is flexible enough to store all the alignment information generated by various alignment programs;&lt;br /&gt;
&lt;br /&gt;
Is simple enough to be easily generated by alignment programs or converted from existing formats;&lt;br /&gt;
&lt;br /&gt;
Allows most operations on the alignment to work without loading the whole alignment into memory;&lt;br /&gt;
&lt;br /&gt;
Allows the file to be indexed by genomic position to efficiently retrieve all reads aligning to a locus.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[SAMTOOLS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;SAS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
SAS (pronounced &amp;quot;sass&amp;quot;, originally Statistical Analysis System) is an integrated system of software products provided by SAS Institute Inc.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It enables the programmer to perform:&lt;br /&gt;
:* data entry, retrieval, management, and mining&lt;br /&gt;
:* report writing and graphics&lt;br /&gt;
:* statistical analysis&lt;br /&gt;
:* business planning, forecasting, and decision support&lt;br /&gt;
:* operations research and project management&lt;br /&gt;
:* quality improvement&lt;br /&gt;
:* applications development&lt;br /&gt;
:* data warehousing (extract, transform, load)&lt;br /&gt;
:* platform independent and remote computing&lt;br /&gt;
More information about our installation can be found here [[SAS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Stata/MP&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Stata is a complete, integrated statistical package that provides tools for data analysis, data management, and graphics. Stata/MP takes advantage of multiprocessor computers. The CUNY HPC Center is licensed to run Stata on up to 8 cores. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Currently Stata/MP is available for users on Karle (karle.csi.cuny.edu). &lt;br /&gt;
More information about our installation can be found here [[STATA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Structurama&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Structurama is a program for inferring population structure from genetic data. The program assumes that the sampled loci&lt;br /&gt;
are in linkage equilibrium and that the allele frequencies for each population are drawn from a Dirichlet probability distribution. Two different models for population structure are implemented.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
First, Structurama offers the method of Pritchard et al. (2000) in which the number of populations is considered fixed. The program also allows the number of populations to be a random variable following a Dirichlet process prior (Pella and Masuda, 2006; Huelsenbeck and Andolfatto, 2007).&lt;br /&gt;
More information about our installation can be found here [[STRUCTURAMA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Structure&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The program Structure is a free software package for using multi-locus genotype data to investigate&lt;br /&gt;
population structure.  Its uses include inferring the presence of distinct populations, assigning individuals&lt;br /&gt;
to populations, studying hybrid zones, identifying migrants and admixed individuals, and estimating&lt;br /&gt;
population allele frequencies in situations where many individuals are migrants or admixed.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;It can be applied to most of the commonly used genetic markers, including SNPs, microsatellites, RFLPs, and AFLPs. More detailed information about Structure can be found at the web site here [http://pritch.bsd.uchicago.edu/structure.html]. Structure is installed on ANDY at the CUNY HPC Center.  Structure is a serial program. &lt;br /&gt;
More information about our installation can be found here [[STRUCTURE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== T == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Thrust Library (CUDA)&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Thrust is a C++ template library for CUDA based on the Standard Template Library (STL). Thrust allows you&lt;br /&gt;
to implement high performance parallel applications with minimal programming effort through a high-level&lt;br /&gt;
interface that is fully interoperable with CUDA C.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Thrust is now integrated into the default&lt;br /&gt;
CUDA distribution. The HPC Center currently runs CUDA as the default on PENZIAS, which includes the &lt;br /&gt;
Thrust library. More information about our installation can be found here [[THRUST]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;TOPHAT&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
TOPHAT is a fast splice junction mapper for RNA-Seq reads. It aligns RNA-Seq reads to mammalian-sized&lt;br /&gt;
genomes using the ultra high-throughput short read aligner Bowtie, and then analyzes the mapping results&lt;br /&gt;
to identify splice junctions between exons.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
TOPHAT is part of a sequence alignment and analysis tool chain developed at Johns Hopkins University, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics and Computational Biology.&lt;br /&gt;
More information about our installation can be found here [[TOPHAT]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Trinity&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Trinity, developed at the Broad Institute and the Hebrew University of Jerusalem, represents a novel method for the efficient and robust de novo reconstruction of transcriptomes from RNA-seq data.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Trinity combines three independent software modules: Inchworm, Chrysalis, and Butterfly, applied sequentially to process large volumes of RNA-seq reads. Trinity partitions the sequence data into many individual de Bruijn graphs, each representing the transcriptional complexity at a given gene or locus, and then processes each graph independently to extract full-length splicing isoforms and to tease apart transcripts derived from paralogous genes.&lt;br /&gt;
More information about our installation can be found here [[TRINITY]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== U == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;USEARCH&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
USEARCH is a unique sequence analysis tool with thousands of users world-wide.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
USEARCH offers search and clustering algorithms that are often orders of magnitude faster than BLAST. &lt;br /&gt;
More information about our installation can be found here [[USEARCH]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== V == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;VELVET&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
Velvet is a set of algorithms for &#039;&#039;de novo&#039;&#039; short read assembly using de Bruijn graphs. It was developed at the European Bioinformatics Institute, Cambridge, UK. More information about our installation can be found here [[VELVET]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;VSEARCH&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
VSEARCH is an open-source alternative to USEARCH.  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
VSEARCH stands for vectorized search, as the tool takes advantage of parallelism in the form of SIMD vectorization as well as multiple threads to perform accurate alignments at high speed. VSEARCH uses an optimal global aligner (full dynamic programming Needleman-Wunsch), in contrast to USEARCH which by default uses a heuristic seed and extend aligner. This usually results in more accurate alignments and overall improved sensitivity (recall) with VSEARCH, especially for alignments with gaps. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Additional details on VSEARCH can be found at: [https://github.com/torognes/vsearch this link]&lt;br /&gt;
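The contrast between an optimal global aligner and a heuristic one can be illustrated with a minimal Needleman-Wunsch score computation. This is a Python sketch with arbitrary example scoring values, not the scoring defaults used by VSEARCH:&lt;br /&gt;

```python
def nw_score(a, b, match=1, mismatch=-1, gap=-1):
    # Full dynamic-programming global alignment score (Needleman-Wunsch):
    # every cell considers substitution, deletion, and insertion, so the
    # returned score is provably optimal for the given scoring scheme.
    rows, cols = len(a) + 1, len(b) + 1
    dp = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):
        dp[i][0] = i * gap
    for j in range(1, cols):
        dp[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = dp[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            dp[i][j] = max(diag, dp[i - 1][j] + gap, dp[i][j - 1] + gap)
    return dp[-1][-1]

print(nw_score("ACGT", "ACGT"))  # identical sequences: 4 matches
```

Real aligners add affine gap penalties and a traceback step to recover the alignment itself; a heuristic seed-and-extend aligner instead only explores regions around exact-match seeds, which is faster but can miss the optimum, particularly for gapped alignments.&lt;br /&gt;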
&lt;br /&gt;
VSEARCH is installed on the Penzias HPC cluster. To start using VSEARCH, load the corresponding module first:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load vsearch  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;VMD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
VMD is a molecular visualization program for displaying, animating, and analyzing large biomolecular systems using 3-D graphics and built-in scripting.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It was developed by The Theoretical and Computational Biophysics Group at the University of Illinois. It is documented on the [http://www.ks.uiuc.edu/Research/vmd/ TCB&#039;s homepage].&lt;br /&gt;
&lt;br /&gt;
VMD is installed on Karle. To use it from the command line, log in to Karle as usual and start VMD by typing &amp;quot;vmd&amp;quot; followed by return, or alternatively use the full path: &lt;br /&gt;
&amp;quot;/share/apps/vmd/default/bin/vmd&amp;quot;&lt;br /&gt;
&lt;br /&gt;
In order to use VMD in GUI mode, log in to Karle with the -X option (see [http://wiki.csi.cuny.edu/cunyhpc/index.php/Main_Page#X11_Forwarding_or_Tunneling this article] &lt;br /&gt;
for details) and start VMD as described above.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== W == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;WRF&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The Weather Research and Forecasting (WRF) model is a computer program with dual use for both weather&lt;br /&gt;
forecasting and weather research.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It was created through a partnership that includes the National Oceanic and Atmospheric&lt;br /&gt;
Administration (NOAA), the National Center for Atmospheric Research (NCAR), and more than 150 other organizations&lt;br /&gt;
and universities in the United States and abroad. WRF is the latest numerical model and application to be adopted by NOAA&#039;s&lt;br /&gt;
National Weather Service as well as the U.S. military and private meteorological services. It is also being adopted by&lt;br /&gt;
government and private meteorological services worldwide. More information about our installation can be found at [[WRF]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== X == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Xmgrace&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Grace is a WYSIWYG 2D plotting tool for the X Window System and M*tif. Xmgrace is developed at the Plasma Laboratory, Weizmann Institute of Science. More information about its capabilities can be found at http://plasma-gate.weizmann.ac.il/Grace/&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Grace is installed on Karle. To use it from the command-line interface, log in to Karle as usual and start Grace by typing &amp;quot;xmgrace&amp;quot; followed by Return, or alternatively use the full path: &amp;quot;/share/apps/xmgrace/default/grace/bin/xmgrace&amp;quot;&lt;br /&gt;
In order to use Grace in GUI mode, log in to Karle with the -X option (see [http://wiki.csi.cuny.edu/cunyhpc/index.php/Main_Page#X11_Forwarding_or_Tunneling this article] &lt;br /&gt;
for details) and start Xmgrace as described above.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Dsms&amp;diff=161</id>
		<title>Dsms</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Dsms&amp;diff=161"/>
		<updated>2022-11-07T18:11:53Z</updated>

		<summary type="html">&lt;p&gt;James: Text replacement - &amp;quot;[pP][bB][sS]&amp;quot; to &amp;quot;SLURM&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Data Storage and Management System (DSMS)=&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
Key features of the &#039;&#039;&#039;DSMS&#039;&#039;&#039; system include:&lt;br /&gt;
&lt;br /&gt;
:•	&#039;&#039;&#039;User&#039;&#039;&#039; home directories in a standard Unix file system called /global/u.&lt;br /&gt;
:•	Enhanced parallel scratch space on the HPC systems.&lt;br /&gt;
:•	&#039;&#039;&#039;Project&#039;&#039;&#039; directories in an Integrated Rule-Oriented Data-management System (iRODS) managed resource.  Project directories exist in a “virtual file space” called &#039;&#039;&#039;cunyZone&#039;&#039;&#039; which contains a resource called &#039;&#039;&#039;Storage Resource 1 (SR1)&#039;&#039;&#039;.    For the purpose of this document, we will use the terminology SR1 to describe &#039;&#039;&#039;Project file space.&#039;&#039;&#039;&lt;br /&gt;
:•	Automated backups.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;DSMS&#039;&#039;&#039; is the HPC Center’s primary file system and is accessible from all existing HPC systems, except for &#039;&#039;&#039;HERBERT&#039;&#039;&#039;. It will similarly be accessible from all future HPC systems.&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;DSMS&#039;&#039;&#039; provides a 3-level data storage infrastructure: the &#039;&#039;&#039;HOME&#039;&#039;&#039; filesystem, the &#039;&#039;&#039;SCRATCH&#039;&#039;&#039; filesystems, and &#039;&#039;&#039;SR1&#039;&#039;&#039; (a long-term storage resource).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;DSMS&#039;&#039;&#039; features are explained below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==&amp;quot;Home&amp;quot; directories are on &amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u&amp;lt;/font&amp;gt;==&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u&amp;lt;/font&amp;gt;&#039;&#039;&#039; is a standard Unix file system that holds the home directories of individual users. When users request and are granted an allocation of HPC resources, they are assigned a &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; and a 50 GB allocation of disk space for home directories on &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;. These &#039;&#039;&#039;home&#039;&#039;&#039; directories are on the &#039;&#039;&#039;DSMS&#039;&#039;&#039;, not on the HPC systems, but can be accessed from any Center system. All home directories are backed up on a weekly basis.&lt;br /&gt;
&lt;br /&gt;
==&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch&amp;lt;/font&amp;gt;==&lt;br /&gt;
Disk storage on the HPC systems is used only for &#039;&#039;&#039;scratch&#039;&#039;&#039; files.  &#039;&#039;&#039;scratch&#039;&#039;&#039; files are temporary and are &#039;&#039;&#039;&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;not backed up&amp;lt;/font color&amp;gt;&#039;&#039;&#039;.  &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch&amp;lt;/font&amp;gt;&#039;&#039;&#039; is used by jobs queued for or in execution.  Output from jobs may temporarily be located in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch&amp;lt;/font&amp;gt;&#039;&#039;&#039;.  &lt;br /&gt;
&lt;br /&gt;
In order to submit a job for execution, a user must &#039;&#039;&#039;stage&#039;&#039;&#039; or &#039;&#039;&#039;mount&#039;&#039;&#039; the files required by the job to &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch&amp;lt;/font&amp;gt;&#039;&#039;&#039; from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u&amp;lt;/font&amp;gt;&#039;&#039;&#039; using UNIX commands and/or from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;SR1&amp;lt;/font&amp;gt;&#039;&#039;&#039; using &#039;&#039;&#039;iRODS&#039;&#039;&#039; commands.&lt;br /&gt;
&lt;br /&gt;
Files in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch&amp;lt;/font&amp;gt;&#039;&#039;&#039; on a system are &#039;&#039;&#039;automatically purged&#039;&#039;&#039; when (1) usage reaches 70% of available space, or (2) file residence on scratch exceeds two weeks, whichever occurs first.&lt;br /&gt;
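Since the purge is automatic, it can be useful to check which scratch files are approaching the two-week limit. A minimal sketch (assuming GNU find is available on the HPC systems; the directory path is illustrative):&lt;br /&gt;

```shell
# purge_candidates DIR: list regular files under DIR that were last
# modified more than 14 days ago -- i.e. candidates for the automatic
# two-week scratch purge described above.
purge_candidates() {
    find "$1" -type f -mtime +14 -print
}

# Example (path illustrative -- substitute your own /scratch directory):
# purge_candidates "/scratch/$USER"
```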
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==“Project” directories==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;“Project”&#039;&#039;&#039; directories are managed through &#039;&#039;&#039;iRODS&#039;&#039;&#039; and are accessible through iRODS commands, not standard UNIX commands.   In iRODS terminology, a “collection” is the equivalent of a “directory”.&lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;“Project”&#039;&#039;&#039; is an activity that usually involves multiple users and/or many individual data files.  A &#039;&#039;&#039;“Project”&#039;&#039;&#039; is normally led by a “Principal Investigator” (PI), who is a faculty member or a research scientist.   The PI is the individual responsible to the University or a granting agency for the “Project”.  The PI has overall responsibility for “Project” data and “Project” data management. To establish a Project, the PI completes and submits the online “Project Application Form”.&lt;br /&gt;
&lt;br /&gt;
Additional information on the &#039;&#039;&#039;DSMS&#039;&#039;&#039; is available in Section 4 of the User Manual &amp;lt;br /&amp;gt;&lt;br /&gt;
http://www.csi.cuny.edu/cunyhpc/pdf/User_Manual.pdf&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Typical Workflow==&lt;br /&gt;
Typical workflows are described below:&lt;br /&gt;
&lt;br /&gt;
1. Copying files from a user’s home directory or from &#039;&#039;&#039;SR1&#039;&#039;&#039; to &#039;&#039;&#039;SCRATCH&#039;&#039;&#039;.&amp;lt;br /&amp;gt;&lt;br /&gt;
If working with &#039;&#039;&#039;HOME&#039;&#039;&#039;:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&lt;br /&gt;
   cd /scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&lt;br /&gt;
   mkdir &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;mySLURM_Job&amp;lt;/font color&amp;gt; &amp;amp;&amp;amp; cd &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;mySLURM_Job&amp;lt;/font color&amp;gt;&lt;br /&gt;
   cp /global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font&amp;gt;/a.out ./&lt;br /&gt;
   cp /global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font&amp;gt;/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;mydatafile &amp;lt;/font&amp;gt;./&lt;br /&gt;
&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If working with &#039;&#039;&#039;SR1&#039;&#039;&#039;:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&lt;br /&gt;
   cd /scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&lt;br /&gt;
   mkdir &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;mySLURM_Job&amp;lt;/font color&amp;gt; &amp;amp;&amp;amp; cd &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;mySLURM_Job&amp;lt;/font color&amp;gt;&lt;br /&gt;
   iget &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font color&amp;gt;/a.out &lt;br /&gt;
   iget &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font color&amp;gt;/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;mydatafile&amp;lt;/font color&amp;gt;&lt;br /&gt;
&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. Prepare a SLURM job script. A typical SLURM script is similar to the following:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&lt;br /&gt;
   #!/bin/bash &lt;br /&gt;
   #SBATCH --partition production &lt;br /&gt;
   #SBATCH -J test &lt;br /&gt;
   #SBATCH --nodes 1 &lt;br /&gt;
   #SBATCH --ntasks 8 &lt;br /&gt;
   #SBATCH --mem 4000&lt;br /&gt;
   echo &amp;quot;Starting…&amp;quot; &lt;br /&gt;
&lt;br /&gt;
   cd $SLURM_SUBMIT_DIR&lt;br /&gt;
   mpirun -np 8 ./a.out ./&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;mydatafile&amp;lt;/font color&amp;gt; &amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myoutputs&amp;lt;/font color&amp;gt;&lt;br /&gt;
   echo &amp;quot;Done…&amp;quot;&lt;br /&gt;
&amp;lt;/font&amp;gt;&lt;br /&gt;
Your SLURM script may differ depending on your needs. Read the section Submitting Jobs for reference.&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. Run the job &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&lt;br /&gt;
   sbatch ./&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;mySLURM_script&amp;lt;/font color&amp;gt;&lt;br /&gt;
&amp;lt;/font&amp;gt;&lt;br /&gt;
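After submission, the job can be monitored with standard SLURM client commands (the job name &amp;quot;test&amp;quot; comes from the script above):&lt;br /&gt;

```shell
# Show the queue status of your own jobs (ST column: R = running, PD = pending).
squeue -u "$USER"

# Inspect one job in detail, using the job id printed by sbatch:
# scontrol show job JOBID
```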
&lt;br /&gt;
&lt;br /&gt;
4. Once the job is finished, clean up &#039;&#039;&#039;SCRATCH&#039;&#039;&#039; and store outputs in your user home directory or in &#039;&#039;&#039;SR1&#039;&#039;&#039;.&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If working with &#039;&#039;&#039;HOME&#039;&#039;&#039;:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&lt;br /&gt;
   mv ./myoutputs /global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font color&amp;gt;/.&lt;br /&gt;
   cd ../&lt;br /&gt;
   rm -rf &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;mySLURM_Job&amp;lt;/font color&amp;gt;&lt;br /&gt;
&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If working with &#039;&#039;&#039;SR1&#039;&#039;&#039;:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&lt;br /&gt;
   iput ./myoutputs &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font color&amp;gt;/. &lt;br /&gt;
   cd ../&lt;br /&gt;
   rm -rf &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;mySLURM_Job&amp;lt;/font color&amp;gt;&lt;br /&gt;
&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5. If output files are stored in &#039;&#039;&#039;SR1&#039;&#039;&#039;, tag them with metadata.&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&lt;br /&gt;
   imeta addw -d myoutput zvalue 15 meters&lt;br /&gt;
   imeta addw -d myoutput colorLabel RED&lt;br /&gt;
&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== iRODS ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;iRODS&#039;&#039;&#039; is the integrated Rule-Oriented Data-management System, a&lt;br /&gt;
community-driven, open source, data grid software solution. &#039;&#039;&#039;iRODS&#039;&#039;&#039; is&lt;br /&gt;
designed to abstract data services from data storage hardware and&lt;br /&gt;
provide users with a hardware-agnostic way to manipulate data. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;iRODS&#039;&#039;&#039; is the primary tool used by CUNY HPCC users to&lt;br /&gt;
seamlessly access a 1 PB storage resource (referenced as &#039;&#039;&#039;SR1&#039;&#039;&#039;&lt;br /&gt;
here) from any of the HPCC&#039;s computational systems.&lt;br /&gt;
&lt;br /&gt;
Access to &#039;&#039;&#039;SR1&#039;&#039;&#039; is provided via so-called &#039;&#039;&#039;i-commands&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;iinit&lt;br /&gt;
ils&lt;br /&gt;
imv&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A comprehensive list of i-commands with detailed descriptions can be&lt;br /&gt;
obtained at [https://wiki.irods.org/index.php/icommands iRODS wiki].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To obtain quick help on any of the commands while logged into&lt;br /&gt;
any of the HPCC&#039;s machines, type &amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&#039;&#039;&#039;i-command -h&#039;&#039;&#039;&amp;lt;/font&amp;gt;. For example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ils -h&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
  &lt;br /&gt;
The following is a list of some of the most relevant &#039;&#039;&#039;i-commands&#039;&#039;&#039;:&lt;br /&gt;
  &lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;iinit&amp;lt;/font&amp;gt;&#039;&#039;&#039; -- Initialize session and store your password in a scrambled form for automatic use by other icommands.&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;iput&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Store a file&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;iget&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Get a file&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;imkdir&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Like mkdir, make an iRODS collection (similar to a directory or Windows folder)&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;ichmod&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Like chmod, allow (or later restrict) access to your data objects by other users.&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;icp&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Like cp or rcp, copy an iRODS data object&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;irm&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Like rm, remove an iRODS data object&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;ils&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Like ls, list iRODS data objects (files) and collections (directories)&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;ipwd&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Like pwd, print the iRODS current working directory&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;icd&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Like cd, change the iRODS current working directory&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;ichksum&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Checksum one or more data-object or collection from iRODS space.&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;imv&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Moves/renames an irods data-object or collection.&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;irmtrash&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Remove one or more data-objects or collections from the iRODS trash bin.&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;imeta&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Add, remove, list, or query user-defined Attribute-Value-Unit triplets metadata&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;iquest&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Query (pose a question to) the ICAT, via a SQL-like interface&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Before using any of the i-commands, users need to identify themselves to the iRODS server by running the command&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# iinit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and providing their HPCC password. &lt;br /&gt;
&lt;br /&gt;
A typical workflow involving files stored in SR1&lt;br /&gt;
includes storing/getting data to and from SR1, tagging data with &lt;br /&gt;
metadata, searching for data, and sharing (setting permissions). &lt;br /&gt;
&lt;br /&gt;
==== Storing data to SR1 ====&lt;br /&gt;
 &lt;br /&gt;
1. Create an &#039;&#039;&#039;iRODS&#039;&#039;&#039; directory (aka &#039;collection&#039;):&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&lt;br /&gt;
   # imkdir &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font color&amp;gt;&lt;br /&gt;
&amp;lt;/font&amp;gt;&lt;br /&gt;
2. Store all files &amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&#039;&#039;&#039;&#039;myfile*&#039;&#039;&#039;&#039;&amp;lt;/font face&amp;gt; into this directory (collection):&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&lt;br /&gt;
   # iput -r &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myfile* myProject&amp;lt;/font color&amp;gt;/.&lt;br /&gt;
&amp;lt;/font&amp;gt;&lt;br /&gt;
3. Verify that files are stored:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&lt;br /&gt;
   # ils&lt;br /&gt;
   /cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;:&lt;br /&gt;
   C- /cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font color&amp;gt;&lt;br /&gt;
   # ils &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font color&amp;gt;&lt;br /&gt;
   /cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font color&amp;gt;:&lt;br /&gt;
      myfile1&lt;br /&gt;
      myfile2&lt;br /&gt;
      myfile3&lt;br /&gt;
&amp;lt;/font&amp;gt;&lt;br /&gt;
   &lt;br /&gt;
The symbol &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&#039;C-&#039;&amp;lt;/font&amp;gt;&#039;&#039;&#039; at the beginning of the &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&#039;ils&#039;&amp;lt;/font&amp;gt;&#039;&#039;&#039; output shows that the listed item is a collection.&lt;br /&gt;
&lt;br /&gt;
4. By combining &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&#039;ils&#039;, &#039;imkdir&#039;, &#039;iput&#039;, &#039;icp&#039;, &#039;ipwd&#039;, &#039;imv&#039;&#039;&#039;&#039;&amp;lt;/font&amp;gt;, a user can create iRODS directories and store files in them, similarly to what is normally done with the UNIX commands &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&#039;ls&#039;, &#039;mkdir&#039;, &#039;cp&#039;, &#039;pwd&#039;, &#039;mv&#039;&#039;&#039;&#039;&amp;lt;/font&amp;gt; etc.&lt;br /&gt;
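As a side-by-side illustration, the local half of such a workflow is shown below, with the corresponding i-command in each trailing comment (the i-commands assume an active iinit session and are shown for comparison only):&lt;br /&gt;

```shell
# Local UNIX workflow                     # iRODS equivalent on SR1
cd "$(mktemp -d)"                         # (work in a throwaway directory)
mkdir results                             # imkdir results
touch myoutputs                           # (output file produced by a job)
cp myoutputs results/                     # iput myoutputs results/
ls results                                # ils results
mv results/myoutputs results/run1.out     # imv results/myoutputs results/run1.out
```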
&lt;br /&gt;
==== Getting data from SR1 ====&lt;br /&gt;
&lt;br /&gt;
1. To copy a file from SR1 to the current working directory, run&lt;br /&gt;
   # iget &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font color&amp;gt;/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myfile1&amp;lt;/font color&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. Listing the current working directory should now reveal &#039;&#039;&#039;myfile1&#039;&#039;&#039;:&lt;br /&gt;
   # ls&lt;br /&gt;
   &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myfile1&amp;lt;/font color&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. Instead of individual files, a whole directory (with&lt;br /&gt;
sub-directories) can be copied with the &#039;&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;-r&amp;lt;/font&amp;gt;&#039;&#039;&#039;&#039; flag (which stands for&lt;br /&gt;
&#039;recursive&#039;):&lt;br /&gt;
   # iget -r &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font color&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
NOTE: wildcards are not supported; therefore, the command below &amp;lt;u&amp;gt;will not work&amp;lt;/u&amp;gt;:&lt;br /&gt;
   # iget &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font color&amp;gt;/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myfile&amp;lt;/font color&amp;gt;*&lt;br /&gt;
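One workaround is to list the collection and fetch the matching objects one at a time. A sketch (assuming an active iinit session and the &#039;myProject&#039; collection from the examples above; the two-space indentation matches the ils listing format shown earlier):&lt;br /&gt;

```shell
# Emulate "iget myProject/myfile*": filter the ils listing for object
# names that start with "myfile", then iget each matching object.
ils myProject | awk '/^  myfile/ {print $1}' | while read -r obj; do
    iget "myProject/$obj"
done
```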
&lt;br /&gt;
=== Tagging data with metadata ===&lt;br /&gt;
   &lt;br /&gt;
iRODS provides users with a powerful mechanism for managing&lt;br /&gt;
data with metadata. While working with large datasets, it is&lt;br /&gt;
easy to forget what is stored in which file.&lt;br /&gt;
Metadata tags help organize data in an easy and reliable&lt;br /&gt;
manner.&lt;br /&gt;
&lt;br /&gt;
Let&#039;s tag the files from the previous example with some metadata:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# imeta add -d myProject/myfile1 zvalue 15 meters&lt;br /&gt;
AVU added to 1 data-objects&lt;br /&gt;
# imeta add -d myProject/myfile1 colorLabel RED&lt;br /&gt;
AVU added to 1 data-objects&lt;br /&gt;
# imeta add -d myProject/myfile1 comment &amp;quot;This is file number 1&amp;quot;&lt;br /&gt;
AVU added to 1 data-objects&lt;br /&gt;
# imeta add -d myProject/myfile2 zvalue 10 meters&lt;br /&gt;
AVU added to 1 data-objects&lt;br /&gt;
# imeta add -d myProject/myfile2 colorLabel RED&lt;br /&gt;
AVU added to 1 data-objects&lt;br /&gt;
# imeta add -d myProject/myfile2 comment &amp;quot;This is file number 2&amp;quot;&lt;br /&gt;
AVU added to 1 data-objects&lt;br /&gt;
# imeta add -d myProject/myfile3 zvalue 15 meters&lt;br /&gt;
AVU added to 1 data-objects&lt;br /&gt;
# imeta add -d myProject/myfile3 colorLabel BLUE&lt;br /&gt;
AVU added to 1 data-objects&lt;br /&gt;
# imeta add -d myProject/myfile3 comment &amp;quot;This is file number 3&amp;quot;&lt;br /&gt;
AVU added to 1 data-objects&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Here we&#039;ve tagged myfile1 with 3 metadata labels:&lt;br /&gt;
&lt;br /&gt;
- zvalue 15 meters&lt;br /&gt;
&lt;br /&gt;
- colorLabel RED&lt;br /&gt;
&lt;br /&gt;
- comment &amp;quot;This is file number 1&amp;quot;&lt;br /&gt;
  &lt;br /&gt;
Similar tags were added to &#039;myfile2&#039; and &#039;myfile3&#039;.&lt;br /&gt;
&lt;br /&gt;
Metadata come in the form of AVU triplets: Attribute|Value|Unit. As seen from&lt;br /&gt;
the above examples, the Unit is optional. &lt;br /&gt;
&lt;br /&gt;
Let&#039;s list all metadata assigned to the file &#039;myfile1&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# imeta ls -d myProject/myfile1&lt;br /&gt;
AVUs defined for dataObj myProject/myfile1:&lt;br /&gt;
attribute: zvalue&lt;br /&gt;
value: 15&lt;br /&gt;
units: meters&lt;br /&gt;
----&lt;br /&gt;
attribute: colorLabel&lt;br /&gt;
value: RED&lt;br /&gt;
units:&lt;br /&gt;
----&lt;br /&gt;
attribute: comment&lt;br /&gt;
value: This is file number 1&lt;br /&gt;
units:&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
To remove an AVU assigned to a file run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# imeta rm -d myProject/myfile1 zvalue 15 meters&lt;br /&gt;
# imeta ls -d myProject/myfile1&lt;br /&gt;
AVUs defined for dataObj myProject/myfile1:&lt;br /&gt;
attribute: colorLabel&lt;br /&gt;
value: RED&lt;br /&gt;
units:&lt;br /&gt;
----&lt;br /&gt;
attribute: comment&lt;br /&gt;
value: This is file number 1&lt;br /&gt;
units:&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
# imeta add -d myProject/myfile1 zvalue 15 meters&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Metadata may be assigned to directories as well:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# imeta add -C myProject simulationsPool 1&lt;br /&gt;
# imeta ls -C myProject&lt;br /&gt;
AVUs defined for collection myProject:&lt;br /&gt;
attribute: simulationsPool&lt;br /&gt;
value: 1&lt;br /&gt;
units:&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note the &#039;-C&#039; flag that is used instead of &#039;-d&#039; when operating on collections rather than data objects.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Searching for data ===&lt;br /&gt;
&lt;br /&gt;
The power of metadata becomes obvious when data needs to be found in&lt;br /&gt;
large collections. Here is an illustration of how easily this is&lt;br /&gt;
done with iRODS via imeta queries:&lt;br /&gt;
&lt;br /&gt;
 # imeta qu -d zvalue = 15&lt;br /&gt;
 collection: /cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;/myProject&lt;br /&gt;
 dataObj: myfile1&lt;br /&gt;
 ----&lt;br /&gt;
 collection: /cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;/myProject&lt;br /&gt;
 dataObj: myfile3&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We see both files that were tagged with the label &#039;zvalue 15 meters&#039;.&lt;br /&gt;
Here is a different query:&lt;br /&gt;
 &lt;br /&gt;
 # imeta qu -d colorLabel = RED&lt;br /&gt;
 collection: /cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;/myProject&lt;br /&gt;
 dataObj: myfile1&lt;br /&gt;
 ----&lt;br /&gt;
 collection: /cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;/myProject&lt;br /&gt;
 dataObj: myfile2&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
Another powerful mechanism to query data is provided by &#039;iquest&#039;. &lt;br /&gt;
The following examples show the capabilities of &#039;iquest&#039;:&lt;br /&gt;
 &lt;br /&gt;
 iquest &amp;quot;SELECT DATA_NAME, DATA_SIZE WHERE DATA_RESC_NAME like &#039;cuny%&#039;&amp;quot;&lt;br /&gt;
 iquest &amp;quot;For %-12.12s size is %s&amp;quot; &amp;quot;SELECT DATA_NAME ,  DATA_SIZE  WHERE COLL_NAME = &#039;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&#039;&amp;quot;&lt;br /&gt;
 iquest &amp;quot;SELECT COLL_NAME WHERE COLL_NAME like &#039;/cunyZone/home/%&#039; AND USER_NAME like &#039;&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&#039;&amp;quot;&lt;br /&gt;
 iquest &amp;quot;User %-6.6s has %-5.5s access to file %s&amp;quot; &amp;quot;SELECT USER_NAME,  DATA_ACCESS_NAME, DATA_NAME WHERE COLL_NAME = &#039;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&#039;&amp;quot;&lt;br /&gt;
 iquest &amp;quot; %-5.5s access has been given to user %-6.6s for the file %s&amp;quot; &amp;quot;SELECT DATA_ACCESS_NAME, USER_NAME, DATA_NAME WHERE COLL_NAME = &#039;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&#039;&amp;quot;&lt;br /&gt;
 iquest no-distinct &amp;quot;select META_DATA_ATTR_NAME&amp;quot;&lt;br /&gt;
 iquest  &amp;quot;select COLL_NAME, DATA_NAME WHERE DATA_NAME like &#039;myfile%&#039;&amp;quot;&lt;br /&gt;
 iquest &amp;quot;User %-9.9s uses %14.14s bytes in %8.8s files in &#039;%s&#039;&amp;quot; &amp;quot;SELECT USER_NAME, sum(DATA_SIZE),count(DATA_NAME),RESC_NAME&amp;quot;&lt;br /&gt;
 iquest &amp;quot;select sum(DATA_SIZE) where COLL_NAME = &#039;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&#039;&amp;quot;&lt;br /&gt;
 iquest &amp;quot;select sum(DATA_SIZE) where COLL_NAME like &#039;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;%&#039;&amp;quot;&lt;br /&gt;
 iquest &amp;quot;select sum(DATA_SIZE), RESC_NAME where COLL_NAME like &#039;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;%&#039;&amp;quot;&lt;br /&gt;
 iquest &amp;quot;select order_desc(DATA_ID) where COLL_NAME like &#039;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;%&#039;&amp;quot;&lt;br /&gt;
 iquest &amp;quot;select count(DATA_ID) where COLL_NAME like &#039;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;%&#039;&amp;quot;&lt;br /&gt;
 iquest &amp;quot;select RESC_NAME where RESC_CLASS_NAME IN (&#039;bundle&#039;,&#039;archive&#039;)&amp;quot;&lt;br /&gt;
 iquest &amp;quot;select DATA_NAME,DATA_SIZE where DATA_SIZE BETWEEN &#039;100000&#039; &#039;100200&#039;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
=== Sharing data ===&lt;br /&gt;
&lt;br /&gt;
Access to data can be controlled via the &#039;ichmod&#039; command. Its&lt;br /&gt;
behavior is similar to the UNIX &#039;chmod&#039; command. For example, if there is a&lt;br /&gt;
need to provide user &amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&#039;&#039;&#039;&amp;lt;userid1&amp;gt;&#039;&#039;&#039;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt; with read access to the file&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;myProject/myfile1&#039;&#039;&#039;&amp;lt;/font&amp;gt;, execute the following command:&lt;br /&gt;
   ichmod read &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid1&amp;gt;&amp;lt;/font color&amp;gt; myProject/myfile1&lt;br /&gt;
&lt;br /&gt;
To see who has access to a file/directory use:&lt;br /&gt;
   # ils -A myProject/myfile1&lt;br /&gt;
   /cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;/myProject/myfile1&lt;br /&gt;
   ACL - &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid1&amp;gt;&amp;lt;/font color&amp;gt;&lt;br /&gt;
   #cunyZone:read object   &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;#cunyZone:own&lt;br /&gt;
&lt;br /&gt;
In the above example user &amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&#039;&#039;&#039;&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid1&amp;gt;&amp;lt;/font color&amp;gt;&#039;&#039;&#039;&amp;lt;/font&amp;gt; has read access to the file and&lt;br /&gt;
user &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; is an owner of the file. &lt;br /&gt;
&lt;br /&gt;
Possible levels of access to a data object are null/read/write/own.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Backups==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u&amp;lt;/font&amp;gt;&#039;&#039;&#039; user directories and &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;SR1&amp;lt;/font&amp;gt;&#039;&#039;&#039; Project files are backed up automatically to a remote tape silo system over a fiber-optic network.  Backups are performed daily. &lt;br /&gt;
&lt;br /&gt;
If a user deletes a file from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u&amp;lt;/font&amp;gt;&#039;&#039;&#039; or &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;SR1&amp;lt;/font&amp;gt;&#039;&#039;&#039;, it will remain on the tape silo system for 30 days, after which it will be deleted and cannot be recovered.  If, within the 30-day window, a user finds it necessary to recover a file, the user must expeditiously submit a request to [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu].&lt;br /&gt;
&lt;br /&gt;
Less frequently accessed files are automatically transferred to the HPC Center robotic tape system, freeing up space in the disk storage pool and making it available for more actively used files. The selection criteria for the migration are age and size of a file. If a file is not accessed for 90 days, it may be moved to a tape in the tape library – in fact to two tapes, for backup. This is fully transparent to the user. When a file is needed, the system will copy the file back to the appropriate disk directory. No user action is required.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Data retention and account expiration policy==&lt;br /&gt;
&lt;br /&gt;
Project directories on SR1 are retained as long as the project is active.  The HPC Center will coordinate with the Principal Investigator of the project before deleting a project directory.  If the PI is no longer with CUNY, the HPC Center will coordinate with the PI’s departmental chair or Research Dean, whichever is appropriate.&lt;br /&gt;
&lt;br /&gt;
For user accounts, current user directories under /global/u are retained as long as the account is active.  If a user account is inactive for one year, the HPC Center will attempt to contact the user and request that the data be removed from the system.  If there is no response from the user within three months of the initial notice, or if the user cannot be reached, the user directory will be purged. &lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==DSMS Technical Summary==&lt;br /&gt;
&lt;br /&gt;
[[Image:dsms-summary.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;SR1&amp;lt;/font&amp;gt;&#039;&#039;&#039; is tuned for high bandwidth, redundancy, and resilience.  It is not optimal for handling large quantities of small files. If you need to archive more than a thousand files on &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;SR1&amp;lt;/font&amp;gt;&#039;&#039;&#039;, please create a single archive using &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;tar&amp;lt;/font&amp;gt;&#039;&#039;&#039;.&lt;br /&gt;
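For instance, a directory holding many small result files can be bundled into one compressed archive before it is copied to &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;SR1&amp;lt;/font&amp;gt;&#039;&#039;&#039;. A minimal sketch; the directory name myResults is only an illustration:&lt;br /&gt;

```shell
# Bundle a directory of many small files into a single compressed archive;
# "myResults" is a hypothetical directory name -- substitute your own.
tar -czf myResults.tar.gz myResults/

# List the archive contents to verify it before copying it to SR1.
tar -tzf myResults.tar.gz
```

Copying one large archive to SR1 is far friendlier to the file system than copying the files individually.&lt;br /&gt;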
&lt;br /&gt;
•	A separate &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; exists on each system.  On PENZIAS, SALK, KARLE, and ANDY this is a Lustre parallel file system; on HERBERT it is NFS. These &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch&amp;lt;/font&amp;gt;&#039;&#039;&#039; directories are visible only on the login, compute, and data transfer nodes of their own system; they are not shared across HPC systems.&lt;br /&gt;
&lt;br /&gt;
•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; is used as a high-performance parallel scratch file system; for example, temporary files (e.g., restart files) should be stored here.&lt;br /&gt;
&lt;br /&gt;
•	There are no quotas on &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;; however, any files older than 2 weeks are automatically deleted.  Also, a cleanup script is scheduled to run every two weeks, or whenever the /scratch disk space utilization exceeds 70%.  Dot-files are generally left intact by these cleanup jobs.&lt;br /&gt;
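To preview which of your files would fall under the age-based cleanup, you can list everything not modified for more than 14 days. A hedged sketch; the path /scratch/$USER follows the convention above:&lt;br /&gt;

```shell
# List regular files under your scratch directory that have not been
# modified for more than 14 days -- candidates for automatic deletion.
find /scratch/$USER -type f -mtime +14
```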
&lt;br /&gt;
•	/scratch space is available to all users. If the scratch space is exhausted, jobs will not be able to run. Purge any files in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; that are no longer needed, even before the automatic deletion kicks in.&lt;br /&gt;
&lt;br /&gt;
•	Your &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; directory may be empty when you log in; you will need to copy any files required for submitting your jobs (submission scripts, data sets) from &amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&#039;&#039;&#039;/global/u&#039;&#039;&#039;&amp;lt;/font&amp;gt; or from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;SR1&amp;lt;/font&amp;gt;&#039;&#039;&#039;.  Once your jobs complete, copy any files you need to keep back to &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u&amp;lt;/font&amp;gt;&#039;&#039;&#039; or &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;SR1&amp;lt;/font&amp;gt;&#039;&#039;&#039; and remove all files from /scratch.&lt;br /&gt;
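The stage-in/run/stage-out pattern described above can be sketched as follows. All file names here (mycase.inp, results.dat, my_solver) are hypothetical placeholders; the two directory variables default to the conventional locations but can be pointed elsewhere:&lt;br /&gt;

```shell
# Hypothetical stage-in / stage-out sketch for a batch job.
HOME_DIR=${HOME_DIR:-/global/u/$USER}      # or an SR1 project directory
SCRATCH_DIR=${SCRATCH_DIR:-/scratch/$USER}

# Stage in: copy the input from permanent storage to scratch.
cp "$HOME_DIR/mycase.inp" "$SCRATCH_DIR/"
cd "$SCRATCH_DIR"

# ./my_solver mycase.inp    # placeholder for the real computation

# Stage out: keep only what you need, then clean up scratch.
cp results.dat "$HOME_DIR/"
rm -f "$SCRATCH_DIR/mycase.inp" "$SCRATCH_DIR/results.dat"
```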
&lt;br /&gt;
•	Do not use &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/tmp&amp;lt;/font&amp;gt;&#039;&#039;&#039; for storing temporary files. The file system where &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/tmp&amp;lt;/font&amp;gt;&#039;&#039;&#039; resides is in memory and is very small and slow. Files there will be regularly deleted by automatic procedures.&lt;br /&gt;
&lt;br /&gt;
•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; is not backed up and there is no provision for retaining data stored in these directories.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Good data handling practices==&lt;br /&gt;
===DSMS, i.e., /global/u and SR1===&lt;br /&gt;
&lt;br /&gt;
•	The &#039;&#039;&#039;DSMS&#039;&#039;&#039; is not an archive for non-HPC users. It is an archive for users who are processing data at the HPC Center.  “Parking” files on the &#039;&#039;&#039;DSMS&#039;&#039;&#039; as a back-up to local data stores is prohibited.  &lt;br /&gt;
&lt;br /&gt;
•	Do not store more than 1,000 files in a single directory. Store collections of small files in an archive (for example, tar). Note that for every file, a stub of about 4MB is kept on disk even if the rest of the file is migrated to tape, meaning that even migrated files take up some disk space. It also means that files smaller than the stub size are never migrated to tape, because migrating them would free no disk space.  Storing a large number of small files in a single directory degrades the file system performance. &lt;br /&gt;
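A quick way to check whether a directory is approaching the 1,000-file guideline is to count the regular files directly inside it. A sketch; myProject is a hypothetical directory name:&lt;br /&gt;

```shell
# Count the regular files directly inside the directory (no recursion).
find myProject -maxdepth 1 -type f | wc -l
```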
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===/scratch===&lt;br /&gt;
&lt;br /&gt;
•	Please regularly remove unwanted files and directories, and avoid keeping duplicate copies in multiple locations. File transfer among the HPC Center systems is very fast. It is forbidden to use &amp;quot;touch jobs&amp;quot; to prevent the cleaning policy from automatically deleting your files from the &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch&amp;lt;/font&amp;gt;&#039;&#039;&#039; directories. Use &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;tar -xmvf&amp;lt;/font&amp;gt;&#039;&#039;&#039;, not &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;tar -xvf&amp;lt;/font&amp;gt;&#039;&#039;&#039;, to unpack files.  &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;tar -xmvf&amp;lt;/font&amp;gt;&#039;&#039;&#039; updates the time stamp on the unpacked files, whereas &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;tar -xvf&amp;lt;/font&amp;gt;&#039;&#039;&#039; preserves the time stamp of the original file rather than the time when the archive was unpacked. Consequently, the automatic deletion mechanism may remove files unpacked by &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;tar -xvf&amp;lt;/font&amp;gt;&#039;&#039;&#039; that are only a few days old.&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Applications&amp;diff=160</id>
		<title>Applications</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Applications&amp;diff=160"/>
		<updated>2022-11-07T18:11:53Z</updated>

		<summary type="html">&lt;p&gt;James: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div class=&amp;quot;noautonum&amp;quot;&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Application&lt;br /&gt;
!Installed Version&lt;br /&gt;
!Current Version&lt;br /&gt;
!Dependencies&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
__NOTOC__&amp;lt;/div&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;This is an index of the available applications, grouped by academic discipline and listed alphabetically within each group.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
For information about using modules to run your applications go to [[Using Modules To Run Your Applications]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class= &amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== Computational Physics and Computational Chemistry == &lt;br /&gt;
Applications in this section use classical mechanics, quantum mechanics, and thermodynamics, and are applied in simulation studies of fundamental properties of atoms, molecules, and chemical reactions.  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;AMBER (Assisted Model Building with Energy Refinement)&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
Amber is the collective name for a suite of programs for classical bio-molecular simulations. &lt;br /&gt;
The name &amp;quot;Amber&amp;quot; also denotes the family of potentials (force fields) used with Amber &lt;br /&gt;
software. Here we discuss only the simulation packages, not the force fields or the free tools&lt;br /&gt;
available via the AmberTools package. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/amber&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;AUTODOCK&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
AutoDock is a suite of automated docking tools.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt; It is designed to predict how small molecules, such as substrates or drug candidates, bind to a receptor of known 3D structure.  AutoDock actually consists of two main programs: &#039;&#039;autodock&#039;&#039; itself performs the docking of the ligand to a set of grids describing the target protein; &#039;&#039;autogrid&#039;&#039; pre-calculates these grids. More information about the software may be found at the autodock web-page [http://autodock.scripps.edu/]. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/autodock&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;CP2K&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CP2K is a program to perform atomistic and molecular simulations of solid state, liquid, molecular, and biological&lt;br /&gt;
systems.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It provides a general framework for different methods, such as density functional theory (DFT) using&lt;br /&gt;
a mixed Gaussian and plane waves approach (GPW) and classical pair and many-body potentials. CP2K provides&lt;br /&gt;
state-of-the-art methods for efficient and accurate atomistic simulations. More information about our installation &lt;br /&gt;
can be found here [[CP2K]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;DL_POLY&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
DL_POLY is a general purpose molecular dynamics simulation package developed at Daresbury Laboratory by W. Smith, T.R. Forester and I.T. Todorov. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Both serial and parallel versions are available. The original package was developed by the Molecular Simulation Group (now part of the Computational Chemistry Group, MSG) at Daresbury Laboratory under the auspices of the Engineering and Physical Sciences Research Council (EPSRC) for the EPSRC&#039;s Collaborative Computational Project for the Computer Simulation of Condensed Phases ( CCP5). Later developments were also supported by the Natural Environment Research Council through the eMinerals project. The package is the property of the Central Laboratory of the Research Councils, UK. More information about our installation and use can be found here [[DL_POLY]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GAMESS-US&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GAMESS is a program for ab initio molecular quantum chemistry.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Briefly, GAMESS can compute SCF wavefunctions including RHF, ROHF, UHF, GVB, and MCSCF. Correlation corrections to these SCF wavefunctions include Configuration Interaction, second-order perturbation theory, and Coupled-Cluster approaches, as well as the Density Functional Theory approximation. Excited states can be computed by CI, EOM, or TD-DFT procedures. Nuclear gradients are available for automatic geometry optimization, transition state searches, or reaction path following. Computation of the energy Hessian permits prediction of vibrational frequencies, with IR or Raman intensities. Solvent effects may be modeled by the discrete Effective Fragment potentials, or continuum models such as the Polarizable Continuum Model. Numerous relativistic computations are available, including infinite-order two-component scalar corrections, with various spin-orbit coupling options. The Fragment Molecular Orbital method permits many of these sophisticated treatments to be applied to very large systems, by dividing the computation into small fragments. Nuclear wavefunctions can also be computed, in VSCF, or with explicit treatment of nuclear orbitals by the NEO code. More information, including code, can be found here [[GAMESS-US]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Gaussian09&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Gaussian09 is third-party, commercially licensed software from Gaussian, Inc. It is a set of programs for calculating electronic structure.  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Gaussian09 is available for general use only on ANDY. The Gaussian User Guide can be found at [http://www.gaussian.com]. More information about our installation can be found here [[GAUSSIAN09]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GPAW&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It uses real-space uniform grids and multigrid methods, atom-centered basis-functions or&lt;br /&gt;
plane-waves. GPAW calculations are controlled through scripts written in the programming language &lt;br /&gt;
Python. GPAW relies on the Atomic Simulation Environment (ASE), which is a Python package&lt;br /&gt;
that helps to describe atoms. The ASE package also handles molecular dynamics, analysis, &lt;br /&gt;
visualization, geometry optimization and more. More information about our installation can &lt;br /&gt;
be found here [[GPAW]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GROMACS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GROMACS (Groningen Machine for Chemical Simulations)&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GROMACS is a full-featured suite of free software, licensed under the GNU&lt;br /&gt;
General Public License to perform molecular dynamics simulations -- in other words, to simulate the behavior of molecular&lt;br /&gt;
systems with hundreds to millions of particles using Newton&#039;s equations of motion.  It is primarily used for research on&lt;br /&gt;
proteins, lipids, and polymers, but can be applied to a wide variety of chemical and biological research questions.&lt;br /&gt;
&lt;br /&gt;
Details and submission scripts for production runs can be found at:&lt;br /&gt;
http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/gromacs&lt;br /&gt;
Please note that preparing a molecular system for simulation via the GROMACS tools cannot be done on the login node. Instead, users must either use their own workstations or use the interactive or development queues.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;HONDO PLUS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Hondo Plus is a versatile electronic structure code that combines work from&lt;br /&gt;
the original Hondo application developed by Harry King in the lab of Michel Dupuis&lt;br /&gt;
and John Rys, and that of numerous subsequent contributors. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It is currently distributed from the research lab of Dr. Donald Truhlar at the University &lt;br /&gt;
of Minnesota.  Part of the advantage of Hondo Plus is the availability of source&lt;br /&gt;
implementations of a wide variety of model chemistries developed over its lifetime&lt;br /&gt;
that researchers can adapt to their particular needs.  The license to use the code requires&lt;br /&gt;
a literature citation which is documented in the Hondo Plus 5.1 manual found&lt;br /&gt;
at:&lt;br /&gt;
&lt;br /&gt;
http://comp.chem.umn.edu/hondoplus/HONDOPLUS_Manual_v5.1.2007.2.17.pdf &lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[HONDO PLUS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;HOOMD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
HOOMD performs general-purpose particle dynamics simulations, taking advantage of NVIDIA GPUs to attain a level of performance&lt;br /&gt;
equivalent to many processor cores on a fast cluster.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Unlike some other applications in the particle and molecular dynamics space, HOOMD developers have worked to implement &lt;br /&gt;
all of the code&#039;s computationally intensive kernels on the GPU, although currently only single node, single-GPU or &lt;br /&gt;
OpenMP-GPU runs are possible. There is no MPI-GPU or distributed parallel GPU version available at this time.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;LAMMPS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
LAMMPS is a classical molecular dynamics code that models an ensemble of particles in a liquid, solid, or gaseous state.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It can model atomic, polymeric, biological, metallic, granular, and coarse-grained systems using a variety of force fields and boundary conditions.  &lt;br /&gt;
LAMMPS runs efficiently on single-processor desktop or laptop machines, but is also designed for parallel computers, including clusters with and without GPUs. &lt;br /&gt;
It will run on any parallel machine that compiles C++ and supports the MPI message-passing library. This includes distributed- or shared-memory parallel &lt;br /&gt;
machines and Beowulf-style clusters. LAMMPS can model systems with only a few particles up to millions or billions. LAMMPS is a freely-available open-source &lt;br /&gt;
code, distributed under the terms of the GNU Public License, which means you can use or modify the code however you wish.  LAMMPS is designed to be easy to &lt;br /&gt;
modify or extend with new capabilities, such as new force fields, atom types, boundary conditions, or diagnostics. A complete description of LAMMPS can be found &lt;br /&gt;
in its on-line manual here [http://lammps.sandia.gov/doc/Manual.html] or from the full PDF manual here [http://lammps.sandia.gov/doc/Manual.pdf]. Information&lt;br /&gt;
about our installation can be found here [[LAMMPS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;NAMD&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NAMD is a parallel molecular dynamics code designed for high-performance simulation&lt;br /&gt;
of large biomolecular systems. [http://www.ks.uiuc.edu/Research/namd].&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The main server for molecular dynamics calculations is PENZIAS, which supports both GPU and non-GPU versions of NAMD.&lt;br /&gt;
However, the MPI-only (no GPU support) parallel version of NAMD is also installed on SALK and ANDY. &lt;br /&gt;
More information about our installation can be found here [[NAMD]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;NWChem&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NWChem is an ab initio computational chemistry software package which also includes molecular dynamics (MM, MD) and coupled, quantum mechanical and molecular dynamics functionality (QM-MD).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
NWChem has been developed by the Molecular Sciences Software group at the Department of Energy&#039;s EMSL. The software is available on PENZIAS and ANDY.&lt;br /&gt;
More information about our installation can be found here [[NWChem]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Octopus&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Octopus is a pseudopotential real-space package aimed at the simulation of the electron-ion dynamics of one-, two-, and three-dimensional finite systems subject to time-dependent electromagnetic fields.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The program is based on time-dependent density-functional theory (TDDFT) in the Kohn-Sham scheme. All quantities are expanded in a regular mesh in real space, and the simulations are performed in real time. The program has been successfully used to calculate linear and non-linear absorption spectra, harmonic spectra, laser induced fragmentation, etc. of a variety of systems.&lt;br /&gt;
More information about our installation can be found here [[OCTOPUS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;OpenMM&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenMM is both a library and a stand-alone application which provides tools for modern molecular modeling simulation. As a library it can be hooked into any code, allowing that code to do molecular modeling with minimal extra coding.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Moreover, OpenMM has a strong emphasis on hardware acceleration via GPUs, thus providing not just a consistent API but also much greater performance than just about any other code available. OpenMM was developed as part of the Physics-Based Simulation project led by Prof. Pande.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ORCA&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ORCA is an electronic structure program capable of carrying out geometry optimizations and predicting a large number of spectroscopic parameters at different levels of theory.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Besides Hartree-Fock theory, density functional theory (DFT) and semiempirical methods, high-level ab initio quantum chemical methods, based on configuration interaction and coupled cluster methods, are included in ORCA to an increasing degree.&lt;br /&gt;
More information about our installation can be found here [[ORCA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;VMD&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
VMD is a molecular visualization program for displaying, animating, and analyzing large biomolecular systems using 3-D graphics and built-in scripting.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It was developed by The Theoretical and Computational Biophysics Group at the University of Illinois. It is documented on the [http://www.ks.uiuc.edu/Research/vmd/ TCB&#039;s homepage].&lt;br /&gt;
&lt;br /&gt;
VMD is installed on Karle. To use it from the command line, log in to Karle as usual and start VMD by typing &amp;quot;vmd&amp;quot; followed by return, or alternatively use the full path: &lt;br /&gt;
&amp;quot;/share/apps/vmd/default/bin/vmd&amp;quot;&lt;br /&gt;
&lt;br /&gt;
In order to use VMD in GUI mode, log in to Karle with the -X option (see [http://wiki.csi.cuny.edu/cunyhpc/index.php/Main_Page#X11_Forwarding_or_Tunneling this article] &lt;br /&gt;
for details) and start VMD as described above.&lt;br /&gt;
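&lt;br /&gt;
As a sketch, a typical session might look like the following. The host name shown is an assumption (use the login address from your account materials); the VMD path is the one quoted above:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Text-mode session (no X11 needed):&lt;br /&gt;
ssh user@karle.csi.cuny.edu&lt;br /&gt;
vmd -dispdev text&lt;br /&gt;
&lt;br /&gt;
# GUI session: forward X11 at login, then start VMD normally:&lt;br /&gt;
ssh -X user@karle.csi.cuny.edu&lt;br /&gt;
vmd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;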
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Computational Biology == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ANVIO&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
Anvio is an analysis and visualization platform for ‘omics data. It allows various types of workflows to be &lt;br /&gt;
established. [[ANVIO]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BAMOVA&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
Bamova is a package for performing genetic analysis of a wide range of organisms on the basis of &lt;br /&gt;
next-generation sequence data. The software implements Bayesian Analysis of Molecular Variance and &lt;br /&gt;
different likelihood models for three different types of molecular data &lt;br /&gt;
(including two models for high throughput sequence data). For more detail on BAMOVA please visit the BAMOVA web site [http://www.uwyo.edu/buerkle/software/bamova] and manual &lt;br /&gt;
here [http://www.uwyo.edu/buerkle/software/bamova/bamova_manual_1.0.pdf]. Further information can also be found here [[BAMOVA]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BAYESCAN&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BAYESCAN is a population genomics software package.  It identifies outlier loci and is applicable &lt;br /&gt;
to both dominant and codominant data. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;BayeScan aims to identify candidate loci under natural selection from &lt;br /&gt;
genetic data, using differences in allele frequencies between populations.  BayeScan is &lt;br /&gt;
based on the multinomial-Dirichlet model.  One of the scenarios covered consists of an&lt;br /&gt;
island model in which subpopulation allele frequencies are correlated through a common &lt;br /&gt;
migrant gene pool from which they differ in varying degrees.  The difference in allele frequency &lt;br /&gt;
between this common gene pool and each subpopulation is measured by a subpopulation-specific&lt;br /&gt;
FST coefficient.  Therefore, this formulation can consider realistic ecological scenarios &lt;br /&gt;
where the effective size and the immigration rate may differ among subpopulations.&lt;br /&gt;
&lt;br /&gt;
More detailed information on Bayescan can be found at the web site here [http://cmpg.unibe.ch/software/bayescan/index.html]&lt;br /&gt;
and in the manual here [http://cmpg.unibe.ch/software/bayescan/files/BayeScan2.1_manual.pdf]. More information about our &lt;br /&gt;
installation can be found here [[BAYESCAN]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BEST&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BEST is an application aimed at estimating gene trees and the species tree from multilocus sequences.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The program uses information from multiple gene trees and performs a Bayesian analysis to estimate the &lt;br /&gt;
topology of the species tree, divergence times and population sizes.  &lt;br /&gt;
&lt;br /&gt;
It provides a new approach for estimating the mutation-rate-based&lt;br /&gt;
phylogenetic relationships among species.  Its method accounts for deep coalescence,&lt;br /&gt;
but not for other complicating issues such as horizontal transfer or gene duplication. The&lt;br /&gt;
program works in conjunction with the popular Bayesian phylogenetics package, MrBayes&lt;br /&gt;
(Ronquist and Huelsenbeck, Bioinformatics, 2003).  BEST&#039;s parameters are defined using&lt;br /&gt;
the &#039;prset&#039; command from MrBayes.  Details on BEST&#039;s capabilities and options are available&lt;br /&gt;
at the BEST web site here [http://www.stat.osu.edu/~dkp/BEST/introduction]. More information&lt;br /&gt;
about our installation is available here [[BEST]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BEAST&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BEAST is a powerful and flexible evolutionary analysis package for molecular sequence variation. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The package implements a family of Markov chain Monte Carlo (MCMC) algorithms for Bayesian phylogenetic inference, divergence time dating, coalescent analysis, phylogeography and related molecular evolutionary analyses. It is a cross-platform Java program for Bayesian MCMC analysis of molecular sequences. It is entirely orientated towards rooted, time-measured phylogenies inferred using strict or relaxed molecular clock models. It can be used as a method of reconstructing phylogenies, but is also a framework for testing evolutionary hypotheses without conditioning on a single tree topology.  BEAST uses MCMC to average over tree space, so that each tree is weighted proportional to its posterior probability. The distribution includes a simple to use user-interface program called &#039;BEAUti&#039; for setting up standard analyses and a suite of programs for analysing the results. For more detail on BEAST (and BEAUTi) please visit the BEAST web site [http://beast.bio.ed.ac.uk/Main_Page]. More information about our installation can be found here [http://wiki.csi.cuny.edu/cunyhpc/index.php/Template:BEAST BEAST].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BOWTIE2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BOWTIE2 is an ultrafast and memory-efficient tool for aligning sequencing reads to long reference sequences. It is particularly good at aligning reads of about 50 up to 100s or 1,000s of characters, and particularly good at aligning to relatively long (e.g. mammalian) genomes.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
BOWTIE2 indexes the genome with an FM Index to keep its memory&lt;br /&gt;
footprint small: for the human genome, its memory footprint is typically around 3.2 GB. BOWTIE2 supports gapped,&lt;br /&gt;
local, and paired-end alignment modes. BOWTIE2 is part of a sequence alignment and analysis tool chain developed&lt;br /&gt;
at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics&lt;br /&gt;
and Computational Biology.  The other tools in this collection, CUFFLINKS, SAMTOOLS, and TOPHAT are also installed at&lt;br /&gt;
the CUNY HPC Center. Additional information can be found at the BOWTIE2 home page here [http://bowtie-bio.sourceforge.net/bowtie2/index.shtml].&lt;br /&gt;
Information about our installation can be found here [[BOWTIE2]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BPP2&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BPP2 uses a Bayesian modeling approach to generate the posterior probabilities of species assignments taking into account uncertainties due to unknown gene trees and the ancestral coalescent process. For tractability, it relies on a user-specified guide tree to avoid integrating over all possible species delimitations.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Additional information can be found at the download site here [http://abacus.gene.ucl.ac.uk/software.html]. More information about our installation can be found here [[BPP2]].&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BROWNIE&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
BROWNIE is a program for analyzing rates of continuous character evolution and looking for substantial rate differences in different parts of a tree using likelihood&lt;br /&gt;
ratio tests and Akaike Information Criterion (AIC) statistics. It now also implements many other methods for examining trait evolution and methods for doing species&lt;br /&gt;
delimitation. More information about our installation can be found here [[BROWNIE]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;CUFFLINKS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CUFFLINKS assembles transcripts, estimates their abundances, and tests for differential expression and regulation in&lt;br /&gt;
RNA-Seq samples.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It accepts aligned RNA-Seq reads and assembles the alignments into a parsimonious set of transcripts.&lt;br /&gt;
CUFFLINKS then estimates the relative abundances of these transcripts based on how many reads support each one, taking&lt;br /&gt;
into account biases in library preparation protocols.  CUFFLINKS is part of a sequence alignment and analysis tool chain developed&lt;br /&gt;
at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics&lt;br /&gt;
and Computational Biology.  The other tools in this collection, BOWTIE, SAMTOOLS, and TOPHAT are also installed at&lt;br /&gt;
the CUNY HPC Center.  Additional information can be found at the CUFFLINKS home page here [http://abacus.gene.ucl.ac.uk/software.html].&lt;br /&gt;
More information about our installation can be found here [[CUFFLINKS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GARLI&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GARLI is a program that performs phylogenetic inference using the maximum-likelihood criterion.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Several sequence types are supported, including nucleotide, amino acid and codon. Version 2.0 adds support for&lt;br /&gt;
partitioned models and morphology-like data types. It is usable on all operating systems, and is written and&lt;br /&gt;
maintained by Derrick Zwickl at the University of Texas at Austin.  Additional information can be found&lt;br /&gt;
on the GARLI Wiki here [https://www.nescent.org/wg_garli/Main_Page]. More information about our installation &lt;br /&gt;
can be found here [[GARLI]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MPFR&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The MPFR library is a C library for multiple-precision floating-point computations with correct rounding. MPFR has been continuously supported by &lt;br /&gt;
INRIA, and the current main authors come from the Caramel and AriC project-teams at Loria (Nancy, France) and LIP (Lyon, France) respectively; see &lt;br /&gt;
more on the credit page.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
MPFR is based on the GMP multiple-precision library. The main goal of MPFR is to provide a library for multiple-precision &lt;br /&gt;
floating-point computation which is both efficient and has a well-defined semantics. It copies the good ideas from the ANSI/IEEE-754 standard for &lt;br /&gt;
double-precision floating-point arithmetic (53-bit significand). The library is installed on PENZIAS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MRBAYES&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MrBayes is a program for the Bayesian estimation of phylogeny.  Bayesian inference of&lt;br /&gt;
phylogeny is based upon a quantity called the posterior probability distribution of trees,&lt;br /&gt;
which is the probability of a tree conditioned on certain observations.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The conditioning is&lt;br /&gt;
accomplished using Bayes&#039;s theorem. The posterior probability distribution of trees is&lt;br /&gt;
impossible to calculate analytically; instead, MrBayes uses a simulation technique called&lt;br /&gt;
Markov chain Monte Carlo (or MCMC) to approximate the posterior probabilities of trees.&lt;br /&gt;
More information about our installation can be found here [[MRBAYES]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;msABC&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
msABC is a program for simulating various neutral evolutionary demographic scenarios&lt;br /&gt;
based on the software ms (Hudson 2002). msABC extends ms, calculating a multitude of&lt;br /&gt;
summary statistics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Therefore, msABC is suitable for performing the sampling step of an&lt;br /&gt;
Approximate Bayesian Computation analysis (ABC), under various neutral demographic&lt;br /&gt;
models. The main advantages of msABC are (i) use of various prior distributions, such as&lt;br /&gt;
uniform, Gaussian, log-normal, gamma, (ii) implementation of a multitude of summary statistics&lt;br /&gt;
for one or more populations, (iii) efficient implementation, which allows the analysis of&lt;br /&gt;
hundreds of loci and chromosomes even on a single computer, (iv) extended flexibility, such&lt;br /&gt;
as simulation of loci of variable size and simulation of missing data.&lt;br /&gt;
More information about our installation can be found here [[msABC]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MSMS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MSMS is a tool to generate sequence samples under both neutral models and single locus selection models.&lt;br /&gt;
MSMS permits  the full range of demographic models provided by its relative MS (Hudson, 2002).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
In particular, it allows for multiple demes with arbitrary migration patterns, population growth and decay in each deme, and&lt;br /&gt;
for population splits and mergers. Selection (including dominance) can depend on the deme and also change&lt;br /&gt;
with time.&lt;br /&gt;
More information about our installation can be found here [[MSMS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;POPABC&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PopABC is a computer package for estimating historical demographic parameters of closely related species/populations (e.g. population size, migration rate, mutation rate, recombination rate, splitting events) within an isolation-with-migration model.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The software performs coalescent simulation in the framework of approximate Bayesian computation (ABC, Beaumont et al, 2002). PopABC can also be used to perform Bayesian model choice to discriminate between different demographic scenarios. The program can be used either for research or for education and teaching purposes. Further details and a manual can be found at the POPABC website here [http://code.google.com/p/popabc]&lt;br /&gt;
More information about our installation can be found here [[POPABC]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;PHOENICS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PHOENICS is an integrated Computational Fluid Dynamics (CFD) package for the preparation, simulation, and visualization of&lt;br /&gt;
processes involving fluid flow, heat or mass transfer, chemical reaction, and/or combustion in engineering equipment, building&lt;br /&gt;
design, and the environment.  More detail is available at the CHAM website, here http://www.cham.co.uk. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Although we expect most users to pre- and post-process their jobs on office-local clients, the CUNY HPC Center has installed&lt;br /&gt;
the Unix version of the &#039;&#039;entire&#039;&#039; PHOENICS package on ANDY.   PHOENICS is installed in /share/apps/phoenics/default where all&lt;br /&gt;
the standard PHOENICS directories are located (d_allpro, d_earth, d_enviro, d_photo, d_priv1, d_satell, etc.).  Of particular interest&lt;br /&gt;
on ANDY is the MPI parallel version of the &#039;earth&#039; executable &#039;parexe&#039; which makes full use of the parallel processing power of the &lt;br /&gt;
ANDY cluster for larger individual jobs.  While the parallel scaling properties of PHOENICS jobs will vary depending on the job size,&lt;br /&gt;
processor type, and the cluster interconnect, larger workloads will generally scale and run efficiently on 8 to 32 processors,&lt;br /&gt;
while smaller problems will scale efficiently only up to about 4 processors.  More detail on parallel PHOENICS is available at&lt;br /&gt;
http://www.cham.co.uk/products/parallel.php.   Aside from the tightly coupled MPI parallelism of &#039;parexe&#039;, users can run multiple&lt;br /&gt;
instances of the non-parallel modules on ANDY (including the serial &#039;earexe&#039; module) when a parametric approach can be used&lt;br /&gt;
to solve their problems.&lt;br /&gt;
More information about our installation can be found here [[PHOENICS]]&lt;br /&gt;
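&lt;br /&gt;
As a sketch, a parallel &#039;parexe&#039; run might be submitted with a PBS batch script along the following lines. The queue name, resource syntax, and the exact path to &#039;parexe&#039; are assumptions for illustration; consult the [[PHOENICS]] page for the supported procedure:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#PBS -N phoenics_par&lt;br /&gt;
#PBS -l select=8:ncpus=1&lt;br /&gt;
#PBS -q production&lt;br /&gt;
&lt;br /&gt;
cd $PBS_O_WORKDIR&lt;br /&gt;
# &#039;parexe&#039; is the MPI-parallel &#039;earth&#039; executable; the path below is illustrative&lt;br /&gt;
mpirun -np 8 /share/apps/phoenics/default/d_earth/parexe&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;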
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;PHRAP-PHRED&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PHRAP and PHRED are part of the DNA sequence analysis tool set that also includes the programs&lt;br /&gt;
CROSSMATCH and SWAT.  These tools are described in detail here [http://www.phrap.org/phredphrapconsed.html],&lt;br /&gt;
but a brief description of both, extracted from their manuals, follows.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
PHRED and PHRAP (along with CONSED) can be used for both small sequence assemblies and larger shotgun analyses. This makes the&lt;br /&gt;
tools a perhaps under-utilized set for smaller non-genomic groups.  Some variables may need to be adjusted,&lt;br /&gt;
particularly in CONSED, but researchers who have multiple sequences from a small locus can use the &lt;br /&gt;
suite, starting from their chromatogram files.  &lt;br /&gt;
More information about our installation can be found here [[PHRAP-PHRED]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;PyRAD&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Reduced-representation genomic sequence data (e.g., RADseq, GBS, ddRAD) are commonly used to study population-level research questions and consequently most software packages for assembling or analyzing such data are designed for sequences with little variation across samples.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Phylogenetic analyses typically include species with deeper divergence times (more variable loci across samples) and thus a different approach to clustering and identifying orthologs will perform better. pyRAD is intended for use with any type of restriction-site associated DNA. It currently supports RAD, ddRAD, PE-ddRAD, GBS, PE-GBS, EzRAD, PE-EzRAD, 2B-RAD, nextRAD, and can be extended to other types.&lt;br /&gt;
More information about our installation can be found here [[PyRAD]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;RAXML&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Randomized Axelerated Maximum Likelihood (RAxML) is a program for sequential and parallel&lt;br /&gt;
maximum likelihood based inference of large phylogenetic trees.  It is a descendant of fastDNAml,&lt;br /&gt;
which in turn was derived from Joe Felsenstein’s DNAml, part of the PHYLIP package.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
RAxML is installed at the CUNY HPC Center on ANDY, where multiple versions are available in both serial and MPI-parallel builds.  The MPI-parallel version should be run on four or more cores and is also installed on PENZIAS. &lt;br /&gt;
More information about our installation can be found here [[RAXML]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Structurama&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Structurama is a program for inferring population structure from genetic data. The program assumes that the sampled loci&lt;br /&gt;
are in linkage equilibrium and that the allele frequencies for each population are drawn from a Dirichlet probability distribution. Two different models for population structure are implemented.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
First, Structurama offers the method of Pritchard et al. (2000) in which the number of populations is considered fixed. The program also allows the number of populations to be a random variable following a Dirichlet process prior (Pella and Masuda, 2006; Huelsenbeck and Andolfatto, 2007).&lt;br /&gt;
More information about our installation can be found here [[STRUCTURAMA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Structure&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The program Structure is a free software package for using multi-locus genotype data to investigate&lt;br /&gt;
population structure.  Its uses include inferring the presence of distinct populations, assigning individuals&lt;br /&gt;
to populations, studying hybrid zones, identifying migrants and admixed individuals, and estimating&lt;br /&gt;
population allele frequencies in situations where many individuals are migrants or admixed.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;It can be applied to most commonly used genetic markers, including SNPs, microsatellites, RFLPs, and AFLPs. More detailed information about Structure can be found at the web site here [http://pritch.bsd.uchicago.edu/structure.html]. Structure is installed on ANDY at the CUNY HPC Center and is a serial program. &lt;br /&gt;
More information about our installation can be found here [[STRUCTURE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;TOPHAT&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
TOPHAT is a fast splice junction mapper for RNA-Seq reads. It aligns RNA-Seq reads to mammalian-sized&lt;br /&gt;
genomes using the ultra high-throughput short read aligner Bowtie, and then analyzes the mapping results&lt;br /&gt;
to identify splice junctions between exons.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
TOPHAT is part of a sequence alignment and analysis tool chain developed at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics and Computational Biology.&lt;br /&gt;
More information about our installation can be found here [[TOPHAT]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Trinity&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Trinity, developed at the Broad Institute and the Hebrew University of Jerusalem, represents a novel method for the efficient and robust de novo reconstruction of transcriptomes from RNA-seq data.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Trinity combines three independent software modules: Inchworm, Chrysalis, and Butterfly, applied sequentially to process large volumes of RNA-seq reads. Trinity partitions the sequence data into many individual de Bruijn graphs, each representing the transcriptional complexity at a given gene or locus, and then processes each graph independently to extract full-length splicing isoforms and to tease apart transcripts derived from paralogous genes.&lt;br /&gt;
More information about our installation can be found here [[TRINITY]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;VELVET&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Velvet is a set of algorithms for &#039;&#039;de novo&#039;&#039; short read assembly using de Bruijn graphs. It was developed at the &lt;br /&gt;
European Bioinformatics Institute, Cambridge, UK.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
More information about our installation can be found here [[VELVET]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Computational Genomics, Proteomics, Microbiomics, Genetics ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;AUGUSTUS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
AUGUSTUS is a program that predicts genes in eukaryotic genomic sequences. It is a gene-finding program based on Hidden Markov Models (HMMs), &lt;br /&gt;
described in papers by Stanke and Waack (2003), Stanke et al. (2006), Stanke et al. (2006b), and Stanke et al. (2008).  The local version of the program is installed on &lt;br /&gt;
Penzias. More information can be found here: [[AUGUSTUS]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;CONSED&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CONSED is a DNA sequence analysis finishing tool that provides sequence viewing, editing, alignment, and&lt;br /&gt;
assembly capabilities from an X Windows graphical user interface (GUI).  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It makes extensive use of other non-graphical&lt;br /&gt;
and underlying sequence analysis tools including PHRED, PHRAP, and CROSSMATCH that may also be used separately&lt;br /&gt;
and are described elsewhere in this document.  It also includes a viewer called BAMVIEW.  The CONSED tool chain is&lt;br /&gt;
developed and maintained at the University of Washington and is described&lt;br /&gt;
more completely here [http://bozeman.mbt.washington.edu/consed/consed.html]&lt;br /&gt;
CONSED is provided at the CUNY HPC Center under an academic license that allows use, but not the copying or out&lt;br /&gt;
bound transfer of any of the executables or files distributed under this academic license.  The license is not &lt;br /&gt;
transferable in any way and users wishing to run the application at their own site must acquire a license directly&lt;br /&gt;
from the authors.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center supports CONSED version 23.0 for interactive use on KARLE.  CONSED 23.0 and the tool&lt;br /&gt;
chain described above are also installed on ANDY to allow batch use of the underlying support tools mentioned above&lt;br /&gt;
and described in detail below.  In general, running GUI-based applications on ANDY&#039;s login node is discouraged.  There&lt;br /&gt;
should be little need to do so: KARLE is on the periphery of the CUNY HPC network, making login there direct, and&lt;br /&gt;
KARLE shares its HOME directory file system with ANDY, so files created on either system are immediately available on&lt;br /&gt;
the other.&lt;br /&gt;
&lt;br /&gt;
Rather than rewrite portions of the CONSED manual here, users are directed to the manual&#039;s &amp;quot;Quick Tour&amp;quot; section&lt;br /&gt;
here [http://bozeman.mbt.washington.edu/consed/distributions/README.23.0.txt] and asked to walk through some&lt;br /&gt;
of the exercises after logging into KARLE.  If problems or questions come up, please post them to &amp;quot;hpchelp@csi.cuny.edu&amp;quot;.&lt;br /&gt;
The CONSED 23.0 distribution is installed on KARLE in the following directory:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/share/apps/consed/default&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
All the files in the distribution can be found there.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ExaML&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ExaML stands for Exascale Maximum Likelihood (ExaML) code for phylogenetic inference using MPI. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The code is installed only on Penzias and implements the popular RAxML search algorithm for maximum likelihood based inference of phylogenetic trees. &lt;br /&gt;
&lt;br /&gt;
It uses a radically new MPI parallelization approach that yields improved parallel efficiency, in particular on partitioned multi-gene or whole-genome datasets.&lt;br /&gt;
&lt;br /&gt;
When using ExaML please cite the following paper:&lt;br /&gt;
&lt;br /&gt;
Alexey M. Kozlov, Andre J. Aberer, Alexandros Stamatakis: &amp;quot;ExaML Version 3: A Tool for Phylogenomic Analyses on Supercomputers.&amp;quot; Bioinformatics (2015) 31 (15): 2577-2579.&lt;br /&gt;
&lt;br /&gt;
It is up to 4 times faster than RAxML-Light [1].&lt;br /&gt;
&lt;br /&gt;
Like RAxML-Light, ExaML also implements checkpointing, SSE3 and AVX vectorization, and memory-saving techniques.&lt;br /&gt;
&lt;br /&gt;
[1] A. Stamatakis, A.J. Aberer, C. Goll, S.A. Smith, S.A. Berger, F. Izquierdo-Carrasco: &amp;quot;RAxML-Light: A Tool for computing TeraByte Phylogenies&amp;quot;, Bioinformatics 2012; doi: 10.1093/bioinformatics/bts309.&lt;br /&gt;
&lt;br /&gt;
The run script for a parallel job is analogous to the one for running RAxML on Penzias and Andy.&lt;br /&gt;
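&lt;br /&gt;
A minimal sketch of such a script is shown below. The module name, file names, and task count are illustrative assumptions; adapt them to your run.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name examl_run&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=4&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to the working directory in SLURM&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
# Module and file names below are illustrative&lt;br /&gt;
module load examl&lt;br /&gt;
mpirun -np 4 examl -s alignment.binary -t starting_tree.nwk -m GAMMA -n run1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;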
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ExaBayes&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ExaBayes is a software package for Bayesian tree inference. It is particularly suitable for large-scale analyses on computer clusters. It is installed on the Penzias server at the HPC Center; &lt;br /&gt;
the installed package is the MPI-parallel version. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Availability:&#039;&#039;&#039; PENZIAS&lt;br /&gt;
&#039;&#039;&#039;Module file:&#039;&#039;&#039; exabayes&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Citation&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
Fredrik Ronquist, Maxim Teslenko, Paul van der Mark, Daniel L Ayres, Aaron Darling, Sebastian Höhna, Bret Larget, Liang Liu, Marc a Suchard, and John P Huelsenbeck. MrBayes 3.2: efficient Bayesian phylogenetic inference and model choice across a large model space. Systematic biology, 61(3):539--42, May 2012.&lt;br /&gt;
&lt;br /&gt;
Alexei J Drummond, Marc a Suchard, Dong Xie, and Andrew Rambaut. Bayesian phylogenetics with BEAUti and the BEAST 1.7. Molecular biology and evolution, 29(8):1969--73, August 2012. &lt;br /&gt;
&lt;br /&gt;
Clemens Lakner, Paul van der Mark, John P Huelsenbeck, Bret Larget, and Fredrik Ronquist. Efficiency of Markov chain Monte Carlo tree proposals in Bayesian phylogenetics. Systematic biology, 57(1):86--103, February 2008. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Use:&#039;&#039;&#039; An example SLURM script to run ExaBayes on PENZIAS is given below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name=&amp;lt;name_of_job&amp;gt;&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=2&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to the working directory in SLURM&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
mpirun -np 2 exabayes &amp;lt;input_file&amp;gt; &amp;gt; output_file&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
More information about the application, along with sample workflows, is available on the ExaBayes website:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://sco.h-its.org/exelixis/web/software/exabayes/manual/index.html#sec-11&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GENOMEPOP2&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GenomePop2 is a newer and specialized version of the older program GenomePop. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GenomePop2 is designed to manage SNPs under more flexible and useful settings that are controlled by the user.&lt;br /&gt;
If you need models with more than two alleles, you should use the older GenomePop version of the program.&lt;br /&gt;
&lt;br /&gt;
GenomePop2 allows the forward simulation of sequences of biallelic positions. As in the previous version, a number of evolutionary&lt;br /&gt;
and demographic settings are allowed. Several populations under any migration model can be implemented. Each population consists&lt;br /&gt;
of a number N of individuals.  Each individual is represented by one (haploids) or two (diploids) chromosomes with constant or variable&lt;br /&gt;
(hotspots) recombination between binary sites. The fitness model is multiplicative, with each derived allele having a multiplicative effect&lt;br /&gt;
of (1 - s*h - E) on the global fitness value. By default E = 0, and h = 0.5 in diploid heterozygotes but 1 in homozygotes or in haploids. Selective nucleotide&lt;br /&gt;
sites undergoing directional selection (positive or negative) in different populations can be defined. In addition, bottleneck and/or&lt;br /&gt;
population expansion scenarios can be configured by the user for a desired number of generations. Several runs can be executed, and&lt;br /&gt;
a sample of user-defined size is obtained for each run and population.  For more detail on how to use GenomePop2, please visit the&lt;br /&gt;
web site here [http://webs.uvigo.es/acraaj/GenomePop2.htm]. More information about our installation can be found here [[GENOMEPOP2]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;HUMAnN2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
HUMAnN is a pipeline for efficiently and accurately profiling the presence/absence and abundance of microbial pathways in a community from metagenomic or metatranscriptomic sequencing data (typically millions of short DNA/RNA reads). HUMAnN2 is the next generation of HUMAnN (HMP Unified Metabolic Analysis Network). Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/humann2&lt;br /&gt;
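&lt;br /&gt;
As an illustrative sketch, a basic HUMAnN2 run takes a single sequence file and an output directory; the file names here are assumptions.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Profile one metagenomic sample; gene-family, pathway-abundance,&lt;br /&gt;
# and pathway-coverage tables are written into sample_out/&lt;br /&gt;
humann2 --input sample.fastq --output sample_out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;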
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;IMa2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The IMa2 application performs basic ‘Isolation with Migration’ calculations using Bayesian inference and Markov &lt;br /&gt;
chain Monte Carlo methods. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The only major conceptual addition to IMa2 that makes it different from the&lt;br /&gt;
original IMa  program is that it can handle data from multiple populations. This requires that the user &lt;br /&gt;
specify a phylogenetic tree. Importantly, the tree must be rooted, and the sequence in time of internal&lt;br /&gt;
nodes must be known and specified. More information on the IMa2 and IMa can be found in the user&lt;br /&gt;
manual here [http://lifesci.rutgers.edu/%7Eheylab/ProgramsandData/Programs/IMa2/Using_IMa2_8_24_2011.pdf].&lt;br /&gt;
Information about our installation can be found here [[IMA2]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;I-TASSER&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
I-TASSER is a platform for protein structure and function prediction. 3D models are built based on multiple-threading alignments by LOMETS and iterative template-fragment assembly simulations; function insights are derived by matching the 3D models with the BioLiP protein function database. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/itasser&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;LAMARC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
LAMARC is a program which estimates population-genetic parameters such as population size, population growth rate,&lt;br /&gt;
recombination rate, and migration rates.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It approximates a summation over all possible genealogies that could explain&lt;br /&gt;
the observed sample, which may be sequence, SNP, microsatellite, or electrophoretic data.  LAMARC and its sister program&lt;br /&gt;
MIGRATE are successor programs to the older programs Coalesce, Fluctuate, and Recombine, which are no longer being&lt;br /&gt;
supported.  These programs are memory-intensive, but can run effectively on workstations. They are supported on a variety&lt;br /&gt;
of operating systems.  For more detail on LAMARC please visit the website here [http://evolution.genetics.washington.edu/lamarc/index.html],&lt;br /&gt;
read this paper [http://evolution.genetics.washington.edu/lamarc/download/bioinformatics2006-lamarc2.0.pdf], and look&lt;br /&gt;
at the documentation here [http://evolution.genetics.washington.edu/lamarc/documentation/index.html]. More information&lt;br /&gt;
about our installation can be found here [[LAMARC]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;QIIME&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
QIIME (pronounced &amp;quot;chime&amp;quot;) stands for Quantitative Insights Into Microbial Ecology. QIIME is a pipeline application that uses numerous third-party applications.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
QIIME takes users from their raw sequencing output through initial analyses such as OTU picking, taxonomic assignment, and construction of phylogenetic trees from representative sequences of OTUs, and through downstream statistical analysis, visualization, and production of publication-quality graphics.&lt;br /&gt;
More information about our installation can be found here [[QIIME]]&lt;br /&gt;
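&lt;br /&gt;
As an illustrative sketch of the OTU-picking stage in a QIIME 1.x workflow (the input file name is an assumption; consult the QIIME documentation for the version installed here):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load qiime&lt;br /&gt;
# De novo OTU picking from demultiplexed sequences; results go to otus/&lt;br /&gt;
pick_de_novo_otus.py -i seqs.fna -o otus/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;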
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;USEARCH&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
USEARCH is a unique sequence analysis tool with thousands of users world-wide.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
USEARCH offers search and clustering algorithms that are often orders of magnitude faster than BLAST. &lt;br /&gt;
More information about our installation can be found here [[USEARCH]]&lt;br /&gt;
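&lt;br /&gt;
For illustration, a typical clustering run at 97% identity might look like the following; the file names are assumptions.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Cluster reads at 97% identity; centroid sequences are written to otus.fasta&lt;br /&gt;
usearch -cluster_fast reads.fasta -id 0.97 -centroids otus.fasta&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;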
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;VELVET&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
Velvet is a set of algorithms for &#039;&#039;de novo&#039;&#039; short read assembly using de Bruijn graphs. It was developed at the European Bioinformatics Institute, Cambridge, UK. More information about our installation can be found here [[VELVET]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;VSEARCH&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
VSEARCH is an open-source alternative to USEARCH.  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
VSEARCH stands for vectorized search, as the tool takes advantage of parallelism in the form of SIMD vectorization as well as multiple threads to perform accurate alignments at high speed. VSEARCH uses an optimal global aligner (full dynamic programming Needleman-Wunsch), in contrast to USEARCH which by default uses a heuristic seed and extend aligner. This usually results in more accurate alignments and overall improved sensitivity (recall) with VSEARCH, especially for alignments with gaps. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Additional details on VSEARCH can be found at: [https://github.com/torognes/vsearch this link]&lt;br /&gt;
&lt;br /&gt;
VSEARCH is installed on the Penzias HPC cluster. To start using VSEARCH, load the corresponding module first:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load vsearch  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
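A sketch of a typical global search of query sequences against a reference database at 97% identity follows; the file names are assumptions.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Align each query to the best-matching reference at 97% identity or better;&lt;br /&gt;
# hits are written in BLAST-like tabular format&lt;br /&gt;
vsearch --usearch_global queries.fasta --db reference.fasta --id 0.97 --blast6out hits.b6&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;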
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Math, Engineering, Computer Science == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ADCIRC&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ADCIRC is a system of programs for solving time-dependent, free-surface, circulation and transport problems in two and three dimensions.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;  These programs utilize the finite element method in space allowing the use of highly flexible, unstructured grids. The ADCIRC distribution includes and integrates the METIS tool for unstructured grid generation. In addition, ADCIRC includes a distribution of SWAN to which it can be coupled to add a shore wave simulation model. Typical ADCIRC applications have included: (i) modeling tides and wind driven circulation, (ii) analysis of hurricane storm surge and flooding, (iii) dredging feasibility and material disposal studies, (iv) larval transport studies, (v) near shore marine operations. For more detail on using ADCIRC, please visit the ADCIRC website here [http://adcirc.org/index.html] and read the ADCIRC manual [http://adcirc.org/documentv49/ADCIRC_title_page.html]. Details on using SWAN with ADCIRC can be found here [http://www.caseydietrich.com/swanadcirc] and at the SWAN web site [http://swanmodel.sourceforge.net]. More information about use and set-up can be found here [[ADCIRC]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;FDPPDIV&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
FDPPDiv is a program for estimating divergence times on a fixed, rooted tree topology. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
FDPPDiv offers two alternative approaches to divergence time estimation. &lt;br /&gt;
The DPPDiv part refers to the Dirichlet Process Prior (DPP) model for divergence &lt;br /&gt;
time estimation, and the F prefix (for Fossil) refers to the new Fossil Birth-Death approach. &lt;br /&gt;
More information about our installation can be found here [[FDPPDIV]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GAUSS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
An easy-to-use data analysis, mathematical and statistical environment based on the powerful, fast and efficient GAUSS Matrix Programming Language.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GAUSS is used to solve real-world data analysis problems of exceptionally large &lt;br /&gt;
scale. GAUSS is currently available on ANDY. At the CUNY HPC Center,&lt;br /&gt;
GAUSS is typically run in serial mode. (Note: GAUSS should not be confused with the&lt;br /&gt;
computational chemistry application Gaussian.) More information about our installation can &lt;br /&gt;
be found here [[GAUSS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Hapsembler&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Hapsembler is a haplotype-specific genome assembly toolkit that is designed for genomes that are rich in SNPs and other types of polymorphism. Hapsembler can be used to assemble reads from a variety of platforms including Illumina and Roche/454.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;  Hapsembler is currently installed on the Appel system. To access the Hapsembler binaries, load the hapsembler module with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load hapsembler&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;HOPSPACK&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
HOPSPACK stands for Hybrid Optimization Parallel Search Package, a package designed to help users solve a wide range of derivative-free optimization problems.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;These may be noisy, non-convex, or non-smooth. The basic optimization problem addressed is to minimize an objective function f(x) of n unknowns subject to the constraints A_I x ≥ b_I, A_E x = b_E, c_I(x) ≥ 0, c_E(x) = 0, and l ≤ x ≤ u.&lt;br /&gt;
The first two constraints specify linear inequalities and equalities with coefficient matrices A_I and A_E. The next two constraints describe nonlinear inequalities and equalities captured in the functions c_I(x) and c_E(x). The final constraints denote lower and upper bounds on the variables. HOPSPACK allows variables to be continuous or integer-valued and has provisions for multi-objective optimization problems. In general, the functions f(x), c_I(x), and c_E(x) can be noisy and nonsmooth, although most algorithms perform best on deterministic functions with continuous derivatives.&lt;br /&gt;
&lt;br /&gt;
Users may design and implement their own solver, either by writing their own code or by building on existing solvers already in the framework. Because all solvers (called citizens) are members of the same global class, they can share assigned resources.&lt;br /&gt;
The main features of the package are:&lt;br /&gt;
&lt;br /&gt;
:* Only function values are required for the optimization.&lt;br /&gt;
:* The user must provide a separate program that can evaluate the objective and nonlinear constraint functions at a given point.&lt;br /&gt;
:* A robust implementation of the Generating Set Search (GSS) solver is supplied, including the capability to handle linear constraints.&lt;br /&gt;
:* Multiple solvers can run simultaneously and are easily configured to share information.&lt;br /&gt;
:* Solvers may share a cache of computed function and constraint evaluations to eliminate duplicate work.&lt;br /&gt;
:* Solvers can initiate and control sub-problems.&lt;br /&gt;
More information about our installation can be found here [[HOPSACK]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;LS-DYNA&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From its early development in the 1970s, LS-DYNA has evolved into a general purpose material&lt;br /&gt;
stress, collision, and crash analysis program with many built-in material and structural element&lt;br /&gt;
models. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In recent years, the code has also been adapted for both OpenMP and MPI parallel execution&lt;br /&gt;
on a variety of platforms.  The most recent version, LS-DYNA 7.1.2, is installed on &lt;br /&gt;
ANDY at the CUNY HPC Center under an academic license held by the City College of New York.&lt;br /&gt;
The use of this license to do work that is commercial in any way is prohibited.&lt;br /&gt;
&lt;br /&gt;
Details on LS-DYNA&#039;s use, input deck construction, and execution options can be found in the LS-DYNA&lt;br /&gt;
manual here [http://ftp.lstc.com/user/manuals/ls-dyna_971_manual_k_rev1.pdf]. All files related&lt;br /&gt;
to the HPC Center installation of version 971 (executables and example inputs) are located in:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
/share/apps/lsdyna/default/[bin,examples]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[LSDYNA]].&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Network Simulator-2 (NS2)&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NS2 is a discrete event simulator targeted at networking research. NS2 provides&lt;br /&gt;
substantial support for simulation of TCP, routing, and multicast protocols over&lt;br /&gt;
wired and wireless (local and satellite) networks.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It is installed on BOB at the CUNY HPC Center. For more detailed information, look [http://www.isi.edu/nsnam/ns/ here].&lt;br /&gt;
More information about our installation can be found here [[NS2]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;OpenFOAM&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenFOAM is, first and foremost, a library that users may incorporate into their own code(s). OpenFOAM is installed on PENZIAS.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
More information about our installation can be found here [[OpenFOAM]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;OpenSEES&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenSEES, the Open System for Earthquake Engineering Simulation, is an object-oriented, open source software framework.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It allows users to create both serial and parallel finite element computer applications for simulating the response of structural and geotechnical systems subjected to earthquakes and other hazards. OpenSEES is primarily written in C++ and uses several Fortran and C numerical libraries for linear equation solving, and material and element routines. The software is installed on PENZIAS.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ParGAP&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ParGAP is built on top of the GAP system. The latter is a system for computational discrete algebra, with particular emphasis on computational group theory. GAP provides a programming language, a library of thousands of functions implementing algebraic algorithms written in the GAP language, as well as large data libraries of algebraic objects.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The ParGAP (Parallel GAP) package itself provides a way of writing parallel programs using the GAP language. Former names of the package were ParGAP/MPI and GAP/MPI; the word MPI refers to Message Passing Interface, a well-known standard for parallelism. ParGAP is based on the MPI standard, and this distribution includes a subset implementation of MPI, to provide a portable layer with a high level interface to BSD sockets.&lt;br /&gt;
More information about our installation can be found here [[ParGAP]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;SAGE&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Sage can be used to study elementary and advanced, pure and applied mathematics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
This includes a huge range of mathematics, including basic algebra, calculus, elementary to very&lt;br /&gt;
advanced number theory, cryptography, numerical computation, commutative algebra, group&lt;br /&gt;
theory, combinatorics, graph theory, exact linear algebra and much more.&lt;br /&gt;
More information about our installation can be found here [[SAGE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;WRF&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The Weather Research and Forecasting (WRF) model is a specific computer program with dual use for both weather&lt;br /&gt;
forecasting and weather research.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It was created through a partnership that includes the National Oceanic and Atmospheric&lt;br /&gt;
Administration (NOAA), the National Center for Atmospheric Research (NCAR), and more than 150 other organizations&lt;br /&gt;
and universities in the United States and abroad. WRF is the latest numerical model and application to be adopted by NOAA&#039;s&lt;br /&gt;
National Weather Service as well as the U.S. military and private meteorological services. It is also being adopted by&lt;br /&gt;
government and private meteorological services worldwide. More information about our installation can be found here [[WRF]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Economics, Business, Statistics, Analytics ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;R&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
R is a free software environment for statistical computing and graphics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:15px;&amp;quot; &amp;gt;General Notes&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The R language has become a de facto standard among statisticians for the development of statistical software and is widely used for statistical software development and data analysis. R is available on the following HPC Center servers: Karle, Penzias, Appel, and Andy. Karle is the only machine where R can be used without submitting jobs to the SLURM manager; on all other systems, users must submit their R jobs via the SLURM batch scheduler.&lt;br /&gt;
More information about our installation can be found here [[R]]&lt;br /&gt;
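&lt;br /&gt;
A minimal SLURM batch script sketch for running an R script on a single core follows; the script and module names are illustrative assumptions.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name r_job&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to the working directory in SLURM&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
# Module name is illustrative; check &#039;module avail&#039; for the exact name&lt;br /&gt;
module load r&lt;br /&gt;
Rscript my_analysis.R&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;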
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;R-devel&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
R is a language and environment for statistical computing and graphics. R-devel provides both core R userspace and all R development components.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Stata/MP&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Stata is a complete, integrated statistical package that provides tools for data analysis, data management, and graphics. Stata/MP takes advantage of multiprocessor computers. The CUNY HPC Center is licensed to use Stata on up to 8 cores.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Currently Stata/MP is available for users on Karle (karle.csi.cuny.edu). &lt;br /&gt;
More information about our installation can be found here [[STATA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;SAS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
SAS (pronounced &amp;quot;sass&amp;quot;, originally Statistical Analysis System) is an integrated system of software products provided by SAS Institute Inc.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It enables the programmer to perform:&lt;br /&gt;
:* data entry, retrieval, management, and mining&lt;br /&gt;
:* report writing and graphics&lt;br /&gt;
:* statistical analysis&lt;br /&gt;
:* business planning, forecasting, and decision support&lt;br /&gt;
:* operations research and project management&lt;br /&gt;
:* quality improvement&lt;br /&gt;
:* applications development&lt;br /&gt;
:* data warehousing (extract, transform, load)&lt;br /&gt;
:* platform independent and remote computing&lt;br /&gt;
More information about our installation can be found here [[SAS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== General Development Systems ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Coming soon.&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== Tools, Libraries, Compilers ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;CGAL&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The Computational Geometry Algorithms Library (CGAL) offers geometric data structures and algorithms.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt; &lt;br /&gt;
Examples of these are triangulations (2D constrained triangulations, and Delaunay triangulations and periodic triangulations in &lt;br /&gt;
2D and 3D), Voronoi diagrams (for 2D and 3D points, 2D additively weighted Voronoi diagrams, and segment Voronoi diagrams), polygons &lt;br /&gt;
(Boolean operations, offsets, straight skeleton), polyhedra (Boolean operations), arrangements of curves and their applications &lt;br /&gt;
(2D and 3D envelopes, Minkowski sums), mesh generation (2D Delaunay mesh generation and 3D surface and volume mesh &lt;br /&gt;
generation, skin surfaces), geometry processing (surface mesh simplification, subdivision and parameterization, as well as &lt;br /&gt;
estimation of local differential properties, and approximation of ridges and umbilics), alpha shapes, convex hull &lt;br /&gt;
algorithms (in 2D, 3D and dD), search structures (kd trees for nearest neighbor search, and range and segment trees), &lt;br /&gt;
interpolation (natural neighbor interpolation and placement of streamlines), shape analysis, fitting, and distances &lt;br /&gt;
(smallest enclosing sphere of points or spheres, smallest enclosing ellipsoid of points, principal component analysis), and &lt;br /&gt;
kinetic data structures.&lt;br /&gt;
&lt;br /&gt;
The library is installed on PENZIAS.&lt;br /&gt;
&lt;br /&gt;
More information can be found here http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/CGAL. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GMP&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
GMP is a library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and &lt;br /&gt;
floating-point numbers. There is no practical limit to the precision except the ones implied by the &lt;br /&gt;
available memory in the machine GMP runs on. GMP has a rich set of functions, and the functions have a &lt;br /&gt;
regular interface. The library is installed on PENZIAS.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Gnuplot&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Gnuplot is a portable command-line driven graphing utility. It is installed on the following systems:&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
:* Karle under /usr/bin/gnuplot&lt;br /&gt;
:* Andy under /share/apps/gnuplot/default/bin/gnuplot&lt;br /&gt;
&lt;br /&gt;
Extensive documentation of gnuplot is available at the [http://www.gnuplot.info/  gnuplot&#039;s homepage].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;JULIA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
Julia is a high-level, high-performance dynamic programming language for technical computing, with syntax that is familiar to users of other technical computing environments. Julia is installed on Penzias.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MAGMA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
MAGMA is a library similar to LAPACK but for hybrid architectures. MAGMA provides implementations for CUDA, Intel Xeon Phi, and OpenCL. &lt;br /&gt;
On CUNY HPCC systems, MAGMA is installed in its CUDA variant only on Penzias.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MATHEMATICA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
“Mathematica” is a fully integrated technical computing system that combines fast, high-precision numerical and symbolic computation with data visualization and programming capabilities. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Mathematica is currently installed on the CUNY HPC Center&#039;s ANDY cluster (andy.csi.cuny.edu) and the KARLE standalone server (karle.csi.cuny.edu). The basics of running Mathematica on CUNY HPC systems are presented here.  Additional information on how to use Mathematica can be found at http://www.wolfram.com/learningcenter/&lt;br /&gt;
&lt;br /&gt;
More information is available in this wiki; find it here [[MATHEMATICA]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MATLAB&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The MATLAB high-performance language for technical computing&lt;br /&gt;
integrates computation, visualization, and programming in an&lt;br /&gt;
easy-to-use environment where problems and solutions are expressed in&lt;br /&gt;
familiar mathematical notation.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Typical uses include:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Math and computation&lt;br /&gt;
&lt;br /&gt;
Algorithm development&lt;br /&gt;
&lt;br /&gt;
Data acquisition&lt;br /&gt;
&lt;br /&gt;
Modeling, simulation, and prototyping&lt;br /&gt;
&lt;br /&gt;
Data analysis, exploration, and visualization&lt;br /&gt;
&lt;br /&gt;
Scientific and engineering graphics&lt;br /&gt;
&lt;br /&gt;
Application development, including graphical user interface building&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[MATLAB]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MET (Model Evaluation Tools)&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MET was developed by the National Center for Atmospheric Research (NCAR) Developmental Testbed Center (DTC) through the generous support of the U.S. Air Force Weather Agency (AFWA) and the National Oceanic and Atmospheric Administration (NOAA).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;MET is designed to be a highly-configurable, state-of-the-art suite of verification tools. It was developed using output from the Weather Research and Forecasting (WRF) modeling system but may be applied to the output of other modeling systems as well.&lt;br /&gt;
&lt;br /&gt;
MET provides a variety of verification techniques, including:&lt;br /&gt;
&lt;br /&gt;
*Standard verification scores comparing gridded model data to point-based observations&lt;br /&gt;
*Standard verification scores comparing gridded model data to gridded observations&lt;br /&gt;
*Spatial verification methods comparing gridded model data to gridded observations using neighborhood, object-based, and intensity-scale decomposition approaches&lt;br /&gt;
*Probabilistic verification methods comparing gridded model data to point-based or gridded observations&lt;br /&gt;
&lt;br /&gt;
More information about use and set-up can be found here [[MET]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Migrate&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Migrate estimates population parameters, effective population sizes and migration rates&lt;br /&gt;
of n populations, using genetic data.  It uses a coalescent theory approach taking into&lt;br /&gt;
account the history of mutations and the uncertainty of the genealogy.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;The estimates of the parameter values are achieved by either a maximum likelihood (ML) approach or Bayesian&lt;br /&gt;
inference (BI).  Migrate&#039;s output is presented in a text file and in a PDF file. The PDF file&lt;br /&gt;
will eventually contain all possible analyses, including histograms of posterior distributions.&lt;br /&gt;
More information about our installation can be found here [[MIGRATE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Python&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Python is a programming language that lets you work more quickly and integrate your systems more effectively. You can learn to use Python and see almost immediate gains in productivity and lower maintenance costs. [http://www.python.org/]&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
There are two supported versions installed on the Andy system:&lt;br /&gt;
&lt;br /&gt;
* Python 3.1.3 located under /share/apps/python/3.1.3/bin&lt;br /&gt;
* Python 2.7.3 located under /share/apps/epd/7.3-2/bin&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[PYTHON]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
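Because more than one Python build is installed, invoking the interpreter by an explicit path makes the chosen version unambiguous. The sketch below is hypothetical: PYBIN is a placeholder that, on Andy, could be set to one of the full paths listed above instead of the bare command name.&lt;br /&gt;

```shell
# Hypothetical sketch: pin a job to one specific Python build by path.
# PYBIN is a placeholder; on Andy it could be set to a full path such
# as the 3.1.3 location listed above instead of the bare command name.
PYBIN=python3

# Report which interpreter and version will actually run.
"$PYBIN" --version

# Minimal sanity check that the chosen interpreter executes code.
"$PYBIN" -c 'print(2 + 2)'
```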
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;SAMTOOLS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
SAMTOOLS provide various utilities for manipulating alignments in the SAM format, including sorting,&lt;br /&gt;
merging, indexing and generating alignments in a per-position format.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
SAM (Sequence Alignment/Map) format is a generic format for storing large nucleotide sequence alignments.  SAM is a compact format&lt;br /&gt;
that:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Is flexible enough to store all the alignment information generated by various alignment programs;&lt;br /&gt;
&lt;br /&gt;
Is simple enough to be easily generated by alignment programs or converted from existing formats;&lt;br /&gt;
&lt;br /&gt;
Allows most operations on the alignment to work without loading the whole alignment into memory;&lt;br /&gt;
&lt;br /&gt;
Allows the file to be indexed by genomic position to efficiently retrieve all reads aligning to a locus.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[SAMTOOLS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Thrust Library (CUDA)&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Thrust is a C++ template library for CUDA based on the Standard Template Library (STL). Thrust allows you&lt;br /&gt;
to implement high performance parallel applications with minimal programming effort through a high-level&lt;br /&gt;
interface that is fully interoperable with CUDA C.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Thrust is integrated into the default&lt;br /&gt;
CUDA distribution. The HPC Center currently runs CUDA as the default on PENZIAS, and this installation includes the&lt;br /&gt;
Thrust library. More information about our installation can be found here [[THRUST]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Xmgrace&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Grace is a WYSIWYG 2D plotting tool for the X Window System and M*tif. Xmgrace is developed at the Plasma Laboratory, Weizmann Institute of Science. More information about its capabilities can be found at http://plasma-gate.weizmann.ac.il/Grace/&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Grace is installed on Karle. To use it from the command-line interface, log in to Karle as usual and start Grace by typing &amp;quot;xmgrace&amp;quot; followed by return, or alternatively use the full path: &amp;quot;/share/apps/xmgrace/default/grace/bin/xmgrace&amp;quot;.&lt;br /&gt;
To use Grace in GUI mode, log in to Karle with the -X option (see [http://wiki.csi.cuny.edu/cunyhpc/index.php/Main_Page#X11_Forwarding_or_Tunneling this article]&lt;br /&gt;
for details) and start Xmgrace as described above.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Alphabetical List ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== A == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ADCIRC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ADCIRC is a system of programs for solving time-dependent, free-surface, circulation and transport problems in two and three dimensions.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;  These programs utilize the finite element method in space allowing the use of highly flexible, unstructured grids. The ADCIRC distribution includes and integrates the METIS tool for unstructured grid generation. In addition, ADCIRC includes a distribution of SWAN to which it can be coupled to add a shore wave simulation model. Typical ADCIRC applications have included: (i) modeling tides and wind driven circulation, (ii) analysis of hurricane storm surge and flooding, (iii) dredging feasibility and material disposal studies, (iv) larval transport studies, (v) near shore marine operations. For more detail on using ADCIRC, please visit the ADCIRC website here [http://adcirc.org/index.html] and read the ADCIRC manual [http://adcirc.org/documentv49/ADCIRC_title_page.html]. Details on using SWAN with ADCIRC can be found here [http://www.caseydietrich.com/swanadcirc] and at the SWAN web site [http://swanmodel.sourceforge.net]. More information about use and set-up can be found here [[ADCIRC]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;AMBER (Assisted Model Building with Energy Refinement)&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
Amber is the collective name for a suite of programs for classical bio-molecular simulations. &lt;br /&gt;
The name &amp;quot;Amber&amp;quot; also denotes the family of potentials (force fields) used with Amber &lt;br /&gt;
software. Here we discuss only simulation packages, but not the force fields or free tools&lt;br /&gt;
available via AmberTools package. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/amber&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ANVIO&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
Anvio is an analysis and visualization platform for genomics data. Anvio allows various types of workflows to be&lt;br /&gt;
established. More information about our installation can be found here [[ANVIO]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;AUGUSTUS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
AUGUSTUS is a program that predicts genes in eukaryotic genomic sequences. Augustus is gene-finding software based on Hidden Markov Models (HMMs),&lt;br /&gt;
described in papers by Stanke and Waack (2003), Stanke et al. (2006), Stanke et al. (2006b), and Stanke et al. (2008). The local version of the program is installed on&lt;br /&gt;
Penzias. More information can be found here: [[AUGUSTUS]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;AUTODOCK&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
AutoDock is a suite of automated docking tools.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt; It is designed to predict how small molecules, such as substrates or drug candidates, bind to a receptor of known 3D structure.  AutoDock actually consists of two main programs: &#039;&#039;autodock&#039;&#039; itself performs the docking of the ligand to a set of grids describing the target protein; &#039;&#039;autogrid&#039;&#039; pre-calculates these grids. More information about the software may be found at the autodock web-page [http://autodock.scripps.edu/]. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/autodock&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== B == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BAMOVA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
Bamova is a package for the genetic analysis of a wide range of organisms on the basis of&lt;br /&gt;
next-generation sequence data. The software implements Bayesian Analysis of Molecular Variance and &lt;br /&gt;
different likelihood models for three different types of molecular data &lt;br /&gt;
(including two models for high throughput sequence data). For more detail on BAMOVA please visit the BAMOVA web site [http://www.uwyo.edu/buerkle/software/bamova] and manual &lt;br /&gt;
here [http://www.uwyo.edu/buerkle/software/bamova/bamova_manual_1.0.pdf]. Further information can also be found here [[BAMOVA]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BAYESCAN&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BAYESCAN is a population genomics software package.  It identifies outlier loci and is applicable&lt;br /&gt;
to both dominant and codominant data.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;BayeScan aims to identify candidate loci under natural selection from genetic data, using differences in allele frequencies between populations.  BayeScan is based on the multinomial-Dirichlet model.  One of the scenarios covered consists of an island model in which subpopulation allele frequencies are correlated through a common migrant gene pool, from which they differ in varying degrees.  The difference in allele frequency between this common gene pool and each subpopulation is measured by a subpopulation-specific FST coefficient.  Therefore, this formulation can consider realistic ecological scenarios where the effective size and the immigration rate may differ among subpopulations.&lt;br /&gt;
More detailed information on Bayescan can be found at the web site here [http://cmpg.unibe.ch/software/bayescan/index.html] and in the manual here [http://cmpg.unibe.ch/software/bayescan/files/BayeScan2.1_manual.pdf]. More information about our installation can be found here [[BAYESCAN]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BEAST&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BEAST is a powerful and flexible evolutionary analysis package for molecular sequence variation. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The package implements a family of Markov chain Monte Carlo (MCMC) algorithms for Bayesian phylogenetic inference, divergence time dating, coalescent analysis, phylogeography and related molecular evolutionary analyses. It is a cross-platform Java program for Bayesian MCMC analysis of molecular sequences. It is entirely oriented towards rooted, time-measured phylogenies inferred using strict or relaxed molecular clock models. It can be used as a method of reconstructing phylogenies, but is also a framework for testing evolutionary hypotheses without conditioning on a single tree topology.  BEAST uses MCMC to average over tree space, so that each tree is weighted in proportion to its posterior probability. The distribution includes a simple-to-use user-interface program called &#039;BEAUti&#039; for setting up standard analyses and a suite of programs for analysing the results. For more detail on BEAST (and BEAUti) please visit the BEAST web site [http://beast.bio.ed.ac.uk/Main_Page]. More information about our installation can be found here [http://wiki.csi.cuny.edu/cunyhpc/index.php/Template:BEAST BEAST].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BEST&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BEST is an application aimed to estimate gene trees and the species tree from multilocus sequences.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The program uses information from multiple gene trees and performs a Bayesian analysis to estimate the &lt;br /&gt;
topology of the species tree, divergence times and population sizes.  &lt;br /&gt;
&lt;br /&gt;
It provides a new approach for estimating the mutation-rate-based&lt;br /&gt;
phylogenetic relationships among species.  Its method accounts for deep coalescence,&lt;br /&gt;
but not for other complicating issues such as horizontal transfer or gene duplication. The&lt;br /&gt;
program works in conjunction with the popular Bayesian phylogenetics package MrBayes&lt;br /&gt;
(Ronquist and Huelsenbeck, Bioinformatics, 2003).  BEST&#039;s parameters are defined using&lt;br /&gt;
the &#039;prset&#039; command from MrBayes.  Details on BEST&#039;s capabilities and options are available&lt;br /&gt;
at the BEST web site here [http://www.stat.osu.edu/~dkp/BEST/introduction]. More information&lt;br /&gt;
about our installation is available here [[BEST]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BOWTIE2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BOWTIE2 is an ultrafast and memory-efficient tool for aligning sequencing reads to long reference sequences. It is particularly good at aligning reads of about 50 up to 100s or 1,000s of characters, and particularly good at aligning to relatively long (e.g. mammalian) genomes.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
BOWTIE2 indexes the genome with an FM Index to keep its memory&lt;br /&gt;
footprint small: for the human genome, its memory footprint is typically around 3.2 GB. BOWTIE2 supports gapped,&lt;br /&gt;
local, and paired-end alignment modes. BOWTIE2 is part of a sequence alignment and analysis tool chain developed&lt;br /&gt;
at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics&lt;br /&gt;
and Computational Biology.  The other tools in this collection, CUFFLINKS, SAMTOOLS, and TOPHAT are also installed at&lt;br /&gt;
the CUNY HPC Center. Additional information can be found at the BOWTIE2 home page here [http://bowtie-bio.sourceforge.net/bowtie2/index.shtml].&lt;br /&gt;
Information about our installation can be found here [[BOWTIE2]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BPP2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BPP2 uses a Bayesian modeling approach to generate the posterior probabilities of species assignments taking into account uncertainties due to unknown gene trees and the ancestral coalescent process. For tractability, it relies on a user-specified guide tree to avoid integrating over all possible species delimitations.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Additional information can be found at the download site here [http://abacus.gene.ucl.ac.uk/software.html]. More information about our installation can be found here [[BPP2]].&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BROWNIE&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
BROWNIE is a program for analyzing rates of continuous character evolution and looking for substantial rate differences in different parts of a tree using likelihood&lt;br /&gt;
ratio tests and Akaike Information Criterion (AIC) statistics. It now also implements many other methods for examining trait evolution and methods for doing species&lt;br /&gt;
delimitation. More information about our installation can be found here [[BROWNIE]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== C == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;CGAL&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The Computational Geometry Algorithms Library (CGAL) offers geometric data structures and algorithms.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt; &lt;br /&gt;
Examples of these are triangulations (2D constrained triangulations, and Delaunay triangulations and periodic triangulations in &lt;br /&gt;
2D and 3D), Voronoi diagrams (for 2D and 3D points, 2D additively weighted Voronoi diagrams, and segment Voronoi diagrams), polygons &lt;br /&gt;
(Boolean operations, offsets, straight skeleton), polyhedra (Boolean operations), arrangements of curves and their applications &lt;br /&gt;
(2D and 3D envelopes, Minkowski sums), mesh generation (2D Delaunay mesh generation and 3D surface and volume mesh &lt;br /&gt;
generation, skin surfaces), geometry processing (surface mesh simplification, subdivision and parameterization, as well as &lt;br /&gt;
estimation of local differential properties, and approximation of ridges and umbilics), alpha shapes, convex hull &lt;br /&gt;
algorithms (in 2D, 3D and dD), search structures (kd trees for nearest neighbor search, and range and segment trees), &lt;br /&gt;
interpolation (natural neighbor interpolation and placement of streamlines), shape analysis, fitting, and distances &lt;br /&gt;
(smallest enclosing sphere of points or spheres, smallest enclosing ellipsoid of points, principal component analysis), and &lt;br /&gt;
kinetic data structures.&lt;br /&gt;
&lt;br /&gt;
The library is installed on PENZIAS.&lt;br /&gt;
&lt;br /&gt;
More information can be found here http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/CGAL. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;CONSED&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CONSED is a DNA sequence analysis finishing tool that provides sequence viewing, editing, alignment, and&lt;br /&gt;
assembly capabilities from an X Window System graphical user interface (GUI).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It makes extensive use of other non-graphical&lt;br /&gt;
and underlying sequence analysis tools including PHRED, PHRAP, and CROSSMATCH that may also be used separately&lt;br /&gt;
and are described elsewhere in this document.  It also includes a viewer called BAMVIEW.  The CONSED tool chain is&lt;br /&gt;
developed and maintained at the University of Washington and is described&lt;br /&gt;
more completely here [http://bozeman.mbt.washington.edu/consed/consed.html]&lt;br /&gt;
CONSED is provided at the CUNY HPC Center under an academic license that allows use, but not the copying or&lt;br /&gt;
outbound transfer of any of the executables or files distributed under this academic license.  The license is not&lt;br /&gt;
transferable in any way and users wishing to run the application at their own site must acquire a license directly&lt;br /&gt;
from the authors.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center supports CONSED version 23.0 for interactive use on KARLE.  CONSED 23.0 and the tool&lt;br /&gt;
chain described above are also installed on ANDY to allow for the batch use of the underlying support tools mentioned above&lt;br /&gt;
and described in detail below.  In general, running GUI-based applications on ANDY&#039;s login node is discouraged.  There&lt;br /&gt;
should be little need to do this: KARLE sits on the periphery of the CUNY HPC network, so logging in there is direct, and&lt;br /&gt;
KARLE shares its HOME directory file system with ANDY, so files created on either system are immediately available on&lt;br /&gt;
the other.&lt;br /&gt;
&lt;br /&gt;
Rather than rewrite portions of the CONSED manual here, users are directed to the manual&#039;s &amp;quot;Quick Tour&amp;quot; section&lt;br /&gt;
here [http://bozeman.mbt.washington.edu/consed/distributions/README.23.0.txt] and asked to walk through some&lt;br /&gt;
of the exercises after logging into KARLE.  If problems or questions come up, please post them to &amp;quot;hpchelp@csi.cuny.edu&amp;quot;.&lt;br /&gt;
The CONSED 23.0 distribution is installed on KARLE in the following directory:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/share/apps/consed/default&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
All the files in the distribution can be found there.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;CP2K&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CP2K is a program to perform atomistic and molecular simulations of solid state, liquid, molecular, and biological&lt;br /&gt;
systems.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It provides a general framework for different methods such as e.g., density functional theory (DFT) using&lt;br /&gt;
a mixed Gaussian and plane waves approach (GPW) and classical pair and many-body potentials. CP2K provides&lt;br /&gt;
state-of-the-art methods for efficient and accurate atomistic simulations. More information about our installation &lt;br /&gt;
can be found here [[CP2K]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;CUFFLINKS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CUFFLINKS assembles transcripts, estimates their abundances, and tests for differential expression and regulation in&lt;br /&gt;
RNA-Seq samples.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It accepts aligned RNA-Seq reads and assembles the alignments into a parsimonious set of transcripts.&lt;br /&gt;
CUFFLINKS then estimates the relative abundances of these transcripts based on how many reads support each one, taking&lt;br /&gt;
into account biases in library preparation protocols.  CUFFLINKS is part of a sequence alignment and analysis tool chain developed&lt;br /&gt;
at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics&lt;br /&gt;
and Computational Biology.  The other tools in this collection, BOWTIE, SAMTOOLS, and TOPHAT are also installed at&lt;br /&gt;
the CUNY HPC Center.  Additional information can be found at the CUFFLINKS home page here [http://abacus.gene.ucl.ac.uk/software.html].&lt;br /&gt;
More information about our installation can be found here [[CUFFLINKS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== D == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;DL_POLY&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
DL_POLY is a general purpose molecular dynamics simulation package developed at Daresbury Laboratory by W. Smith, T.R. Forester and I.T. Todorov. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Both serial and parallel versions are available. The original package was developed by the Molecular Simulation Group (now part of the Computational Chemistry Group, MSG) at Daresbury Laboratory under the auspices of the Engineering and Physical Sciences Research Council (EPSRC) for the EPSRC&#039;s Collaborative Computational Project for the Computer Simulation of Condensed Phases (CCP5). Later developments were also supported by the Natural Environment Research Council through the eMinerals project. The package is the property of the Central Laboratory of the Research Councils, UK. More information about our installation and use can be found here [[DL_POLY]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== E == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ExaML&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ExaML stands for Exascale Maximum Likelihood (ExaML) code for phylogenetic inference using MPI. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The code is installed only on Penzias and implements the popular RAxML search algorithm for maximum likelihood based inference of phylogenetic trees. &lt;br /&gt;
&lt;br /&gt;
It uses a radically new MPI parallelization approach that yields improved parallel efficiency, in particular on partitioned multi-gene or whole-genome datasets.&lt;br /&gt;
&lt;br /&gt;
When using ExaML please cite the following paper:&lt;br /&gt;
&lt;br /&gt;
Alexey M. Kozlov, Andre J. Aberer, Alexandros Stamatakis: &amp;quot;ExaML Version 3: A Tool for Phylogenomic Analyses on Supercomputers.&amp;quot; Bioinformatics (2015) 31 (15): 2577-2579.&lt;br /&gt;
&lt;br /&gt;
It is up to 4 times faster than RAxML-Light [1].&lt;br /&gt;
&lt;br /&gt;
Like RAxML-Light, ExaML also implements checkpointing, SSE3 and AVX vectorization, and memory-saving techniques.&lt;br /&gt;
&lt;br /&gt;
[1] A. Stamatakis, A.J. Aberer, C. Goll, S.A. Smith, S.A. Berger, F. Izquierdo-Carrasco: &amp;quot;RAxML-Light: A Tool for computing TeraByte Phylogenies&amp;quot;, Bioinformatics 2012; doi: 10.1093/bioinformatics/bts309.&lt;br /&gt;
&lt;br /&gt;
The run script for a parallel job is analogous to the one for running RAxML on Penzias and Andy.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ExaBayes&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ExaBayes is a software package for Bayesian tree inference. It is particularly suitable for large-scale analyses on computer clusters. It is installed on the PENZIAS server at the HPC Center.&lt;br /&gt;
The installed package is the MPI-parallel version.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Availability:&#039;&#039;&#039; PENZIAS&lt;br /&gt;
&#039;&#039;&#039;Module file:&#039;&#039;&#039; exabayes&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Citation&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
Fredrik Ronquist, Maxim Teslenko, Paul van der Mark, Daniel L Ayres, Aaron Darling, Sebastian Höhna, Bret Larget, Liang Liu, Marc a Suchard, and John P Huelsenbeck. MrBayes 3.2: efficient Bayesian phylogenetic inference and model choice across a large model space. Systematic biology, 61(3):539--42, May 2012.&lt;br /&gt;
&lt;br /&gt;
Alexei J Drummond, Marc a Suchard, Dong Xie, and Andrew Rambaut. Bayesian phylogenetics with BEAUti and the BEAST 1.7. Molecular biology and evolution, 29(8):1969--73, August 2012. &lt;br /&gt;
&lt;br /&gt;
Clemens Lakner, Paul van der Mark, John P Huelsenbeck, Bret Larget, and Fredrik Ronquist. Efficiency of Markov chain Monte Carlo tree proposals in Bayesian phylogenetics. Systematic biology, 57(1):86--103, February 2008. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Use:&#039;&#039;&#039; An example SLURM script to run ExaBayes on PENZIAS is given below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name &amp;lt;name_of_job&amp;gt;&lt;br /&gt;
#SBATCH --nodes 1&lt;br /&gt;
#SBATCH --ntasks 2&lt;br /&gt;
&lt;br /&gt;
# Change to the directory from which the job was submitted&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
mpirun -np 2 exabayes &amp;lt;input_file&amp;gt; &amp;gt; output_file&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
More information about the application, along with sample workflows, is available on the ExaBayes web site:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://sco.h-its.org/exelixis/web/software/exabayes/manual/index.html#sec-11&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== F == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;FDPPDIV&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
FDPPDiv is a program for estimating divergence times on a fixed, rooted tree topology. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
FDPPDiv offers two alternative approaches to divergence time estimation. &lt;br /&gt;
The DPPDiv part refers to the Dirichlet Process Prior (DPP) model for divergence &lt;br /&gt;
time estimation, and the F prefix (for Fossil) refers to the new Fossil Birth-Death approach. &lt;br /&gt;
More information about our installation can be found here [[FDPPDIV]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== G == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GAMESS-US&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GAMESS is a program for ab initio molecular quantum chemistry.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Briefly, GAMESS can compute SCF wavefunctions including RHF, ROHF, UHF, GVB, and MCSCF. Correlation corrections to these SCF wavefunctions include Configuration Interaction, second-order perturbation theory, and Coupled-Cluster approaches, as well as the Density Functional Theory approximation. Excited states can be computed by CI, EOM, or TD-DFT procedures. Nuclear gradients are available, for automatic geometry optimization, transition state searches, or reaction path following. Computation of the energy hessian permits prediction of vibrational frequencies, with IR or Raman intensities. Solvent effects may be modeled by the discrete Effective Fragment potentials, or continuum models such as the Polarizable Continuum Model. Numerous relativistic computations are available, including infinite order two component scalar corrections, with various spin-orbit coupling options. The Fragment Molecular Orbital method permits many of these sophisticated treatments to be used on very large systems, by dividing the computation into small fragments. Nuclear wavefunctions can also be computed, in VSCF, or with explicit treatment of nuclear orbitals by the NEO code. More information, including code, can be found here [[GAMESS-US]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GARLI&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GARLI is a program that performs phylogenetic inference using the maximum-likelihood criterion.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Several sequence types are supported, including nucleotide, amino acid and codon. Version 2.0 adds support for&lt;br /&gt;
partitioned models and morphology-like data types. It is usable on all operating systems, and is written and&lt;br /&gt;
maintained by Derrick Zwickl at the University of Texas at Austin.  Additional information can be found&lt;br /&gt;
on the GARLI Wiki here [https://www.nescent.org/wg_garli/Main_Page]. More information about our installation &lt;br /&gt;
can be found here [[GARLI]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GAUSS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GAUSS is an easy-to-use data analysis, mathematical and statistical environment based on the powerful, fast and efficient GAUSS Matrix Programming Language.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GAUSS is used to solve real world problems and data analysis problems of exceptionally large &lt;br /&gt;
scale. GAUSS is currently available on ANDY. At the CUNY HPC Center&lt;br /&gt;
GAUSS is typically run in serial mode. (Note:  GAUSS should not be confused with the&lt;br /&gt;
computational chemistry application Gaussian.) More information about our installation can &lt;br /&gt;
be found here [[GAUSS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Gaussian09&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Gaussian09 is third-party, commercially licensed software from Gaussian, Inc. It is a set of programs for calculating electronic structure.  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Gaussian09 is available for general use only on ANDY. The Gaussian User Guide can be found at [http://www.gaussian.com]. More information about our installation can be found here [[GAUSSIAN09]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GMP&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
GMP is a library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and &lt;br /&gt;
floating-point numbers. There is no practical limit to the precision except the ones implied by the &lt;br /&gt;
available memory in the machine GMP runs on. GMP has a rich set of functions, and the functions have a &lt;br /&gt;
regular interface. The library is installed on PENZIAS.&lt;br /&gt;
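&lt;br /&gt;
GMP itself is a C library, but the flavor of arbitrary-precision arithmetic it provides can be illustrated with Python&#039;s built-in integers, which are likewise limited only by available memory (a conceptual sketch of the idea, not of GMP&#039;s C API):&lt;br /&gt;

```python
# Conceptual sketch: arbitrary-precision integer arithmetic of the kind GMP
# provides in C, illustrated here with Python's unbounded built-in integers.

def factorial(n):
    """Exact factorial; the result quickly exceeds 64-bit range."""
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

big = factorial(100)       # an exact 158-digit integer
print(len(str(big)))       # -> 158
print(big % (10**9 + 7))   # exact modular reduction of a huge value
```

In GMP proper the same computation would operate on mpz_t values with functions such as mpz_mul; see the GMP manual for the C interface.&lt;br /&gt;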
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Gnuplot&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Gnuplot is a portable command-line driven graphing utility. It is installed on the following systems:&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
:* Karle under /usr/bin/gnuplot&lt;br /&gt;
:* Andy under /share/apps/gnuplot/default/bin/gnuplot&lt;br /&gt;
&lt;br /&gt;
Extensive documentation of gnuplot is available at [http://www.gnuplot.info/ the gnuplot homepage].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GenomePop2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GenomePop2 is a newer and specialized version of the older program GenomePop. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GenomePop2 is designed to manage SNPs under more flexible and useful settings that are controlled by the user.  &lt;br /&gt;
If you need models with more than 2 alleles you should use the older GenomePop version of the program.  &lt;br /&gt;
&lt;br /&gt;
GenomePop2 allows the forward simulation of sequences of biallelic positions. As in the previous version, a number of evolutionary&lt;br /&gt;
and demographic settings are allowed. Several populations under any migration model can be implemented. Each population consists&lt;br /&gt;
of a number N of individuals.  Each individual is represented by one (haploids) or two (diploids) chromosomes with constant or variable&lt;br /&gt;
(hotspots) recombination between binary sites. The fitness model is multiplicative, with each derived allele having a multiplicative effect&lt;br /&gt;
of (1 - s*h - E) on the global fitness value. By default E = 0, and h = 0.5 in diploids but 1 in homozygotes or in haploids. Selective nucleotide&lt;br /&gt;
sites undergoing directional selection (positive or negative) in different populations can be defined. In addition, bottlenecks and/or&lt;br /&gt;
population expansion scenarios can be set up by the user for a desired number of generations. Several runs can be executed and&lt;br /&gt;
a sample of user-defined size is obtained for each run and population.  For more detail on how to use GenomePop2, please visit the&lt;br /&gt;
web site here [http://webs.uvigo.es/acraaj/GenomePop2.htm]. More information about our installation can be found here [[GENOMEPOP2]].&lt;br /&gt;
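The multiplicative fitness model described above can be written out as a short illustration (a hypothetical Python sketch; the function and variable names are ours, not GenomePop2&#039;s, and we assume the default E = 0 case, so each derived allele contributes a factor of (1 - s*h)):&lt;br /&gt;

```python
# Hypothetical sketch of the multiplicative fitness model described above:
# each derived allele carried contributes a factor (1 - s*h) to fitness,
# with h = 0.5 for heterozygotes in diploids and h = 1 for homozygotes
# or haploids (assuming the default E = 0).

def fitness(derived_alleles):
    """derived_alleles: list of (s, h) pairs, one per derived allele carried."""
    w = 1.0
    for s, h in derived_alleles:
        w *= 1.0 - s * h
    return w

# two heterozygous deleterious sites plus one homozygous site
w = fitness([(0.01, 0.5), (0.01, 0.5), (0.02, 1.0)])
print(w)
```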
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GROMACS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GROMACS (Groningen Machine for Chemical Simulations)&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GROMACS is a full-featured suite of free software, licensed under the GNU&lt;br /&gt;
General Public License to perform molecular dynamics simulations -- in other words, to simulate the behavior of molecular&lt;br /&gt;
systems with hundreds to millions of particles using Newton&#039;s equations of motion.  It is primarily used for research on&lt;br /&gt;
proteins, lipids, and polymers, but can be applied to a wide variety of chemical and biological research questions.&lt;br /&gt;
&lt;br /&gt;
Details and submission scripts for production runs can be found at:&lt;br /&gt;
http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/gromacs&lt;br /&gt;
Please note that preparing a molecular system for simulation via the GROMACS tools cannot be done on the login node. Users must instead either use their own workstations or use the interactive or development queues.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GPAW&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It uses real-space uniform grids and multigrid methods, atom-centered basis-functions or&lt;br /&gt;
plane-waves. GPAW calculations are controlled through scripts written in the programming language &lt;br /&gt;
Python. GPAW relies on the Atomic Simulation Environment (ASE), which is a Python package&lt;br /&gt;
that helps to describe atoms. The ASE package also handles molecular dynamics, analysis, &lt;br /&gt;
visualization, geometry optimization and more. More information about our installation can &lt;br /&gt;
be found here [[GPAW]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== H ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Hapsembler&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Hapsembler is a haplotype-specific genome assembly toolkit that is designed for genomes that are rich in SNPs and other types of polymorphism. Hapsembler can be used to assemble reads from a variety of platforms including Illumina and Roche/454.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;  Hapsembler is currently installed on the Appel system. In order to access the Hapsembler binaries, load the hapsembler module with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load hapsembler&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;HOOMD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
HOOMD performs general purpose particle dynamics simulations, taking advantage of NVIDIA GPUs to attain a level of performance&lt;br /&gt;
equivalent to many processor cores on a fast cluster.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Unlike some other applications in the particle and molecular dynamics space, HOOMD developers have worked to implement &lt;br /&gt;
all of the code&#039;s computationally intensive kernels on the GPU, although currently only single node, single-GPU or &lt;br /&gt;
OpenMP-GPU runs are possible. There is no MPI-GPU or distributed parallel GPU version available at this time.&lt;br /&gt;
&lt;br /&gt;
HOOMD&#039;s object-oriented design patterns make it both versatile and expandable. Various types of potentials, integration methods&lt;br /&gt;
and file formats are currently supported, and more are added with each release. The code is available and open source, so anyone&lt;br /&gt;
can write a plugin or change the source to add additional functionality.  Simulations are configured and run using simple python&lt;br /&gt;
scripts, allowing complete control over the force field choice, integrator, all parameters, how many time steps are run, etc.&lt;br /&gt;
The scripting system is designed to be as simple as possible to the non-programmer.&lt;br /&gt;
&lt;br /&gt;
The HOOMD development effort is led by the Glotzer group at the University of Michigan, but many groups from different universities&lt;br /&gt;
have contributed code that is now part of the HOOMD main package, see the credits page for the full list. The HOOMD website and&lt;br /&gt;
documentation are available here [http://codeblue.umich.edu/hoomd-blue/about.html]. More information about our installation can be&lt;br /&gt;
found here [[HOOMD]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;HOPSPACK&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
HOPSPACK stands for Hybrid Optimization Parallel Search Package, a package designed to help users solve a wide range of derivative-free optimization problems.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;The latter can be noisy, non-convex or non-smooth. The basic optimization problem addressed is to minimize an objective function f(x) of n unknowns subject to the constraints: A_I x ≥ b_I, A_E x = b_E, c_I(x) ≥ 0, c_E(x) = 0, and l ≤ x ≤ u.&lt;br /&gt;
The first two constraints specify linear inequalities and equalities with coefficient matrices A_I and A_E. The next two constraints describe nonlinear inequalities and equalities captured in the functions c_I(x) and c_E(x). The final constraints denote lower and upper bounds on the variables. HOPSPACK allows variables to be continuous or integer-valued and has provisions for multi-objective optimization problems. In general, the functions f(x), c_I(x), and c_E(x) can be noisy and nonsmooth, although most algorithms perform best on deterministic functions with continuous derivatives.&lt;br /&gt;
&lt;br /&gt;
Users can design and implement their own solvers, either by writing new code or by building on the existing solvers already in the framework. Because all solvers (called citizens) are members of the same global class, they can share assigned resources.&lt;br /&gt;
The main features of the package are:&lt;br /&gt;
&lt;br /&gt;
-	Only function values are required for the optimization.&lt;br /&gt;
-	The user must provide a separate program that can evaluate the objective and nonlinear constraint functions at a given point. &lt;br /&gt;
-	A robust implementation of the Generating Set Search (GSS) solver is supplied, including the capability to handle linear constraints. &lt;br /&gt;
-	Multiple solvers can run simultaneously and are easily configured to share information.&lt;br /&gt;
-	Solvers may share a cache of computed function and constraint evaluations to eliminate duplicate work.&lt;br /&gt;
-	Solvers can initiate and control sub-problems.&lt;br /&gt;
More information about our installation can be found here [[HOPSPACK]].&lt;br /&gt;
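&lt;br /&gt;
As noted above, the user supplies a separate program that evaluates the objective and nonlinear constraint functions at a given point. A minimal sketch of such an evaluator follows (a hypothetical problem with names of our choosing; the actual file-exchange format between HOPSPACK and the evaluator is set in the HOPSPACK configuration and documented in its user manual):&lt;br /&gt;

```python
# Hypothetical sketch of a HOPSPACK-style evaluator: given a point x,
# return the objective f(x) and the nonlinear inequality constraints
# c_I(x), which are considered feasible when c_I(x) >= 0.

def evaluate(x):
    f = (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2   # objective to minimize
    c_ineq = 4.0 - (x[0] ** 2 + x[1] ** 2)      # inside a disk of radius 2
    return f, [c_ineq]

if __name__ == "__main__":
    f, c = evaluate([0.5, -1.0])
    print(f, c)   # -> 1.25 [2.75]
```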
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;HONDO PLUS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Hondo Plus is a versatile electronic structure code that combines work from&lt;br /&gt;
the original Hondo application developed by Harry King in the lab of Michel Dupuis&lt;br /&gt;
and John Rys, and that of numerous subsequent contributors.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It is currently distributed from the research lab of Dr. Donald Truhlar at the University &lt;br /&gt;
of Minnesota.  Part of the advantage of Hondo Plus is the availability of source&lt;br /&gt;
implementations of a wide variety of model chemistries developed over its lifetime&lt;br /&gt;
that researchers can adapt to their particular needs.  The license to use the code requires&lt;br /&gt;
a literature citation which is documented in the Hondo Plus 5.1 manual found&lt;br /&gt;
at:&lt;br /&gt;
&lt;br /&gt;
http://comp.chem.umn.edu/hondoplus/HONDOPLUS_Manual_v5.1.2007.2.17.pdf &lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[HONDO PLUS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;HUMAnN2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
HUMAnN is a pipeline for efficiently and accurately profiling the presence/absence and abundance of microbial pathways in a community from metagenomic or metatranscriptomic sequencing data (typically millions of short DNA/RNA reads). HUMAnN2 is the next generation of HUMAnN (HMP Unified Metabolic Analysis Network). Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/humann2&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== I ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;IMa2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The IMa2 application performs ‘Isolation with Migration’ calculations using Bayesian inference and Markov &lt;br /&gt;
chain Monte Carlo methods. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The only major conceptual addition to IMa2 that makes it different from the&lt;br /&gt;
original IMa  program is that it can handle data from multiple populations. This requires that the user &lt;br /&gt;
specify a phylogenetic tree. Importantly, the tree must be rooted, and the sequence in time of internal&lt;br /&gt;
nodes must be known and specified. More information on the IMa2 and IMa can be found in the user&lt;br /&gt;
manual here [http://lifesci.rutgers.edu/%7Eheylab/ProgramsandData/Programs/IMa2/Using_IMa2_8_24_2011.pdf].&lt;br /&gt;
Information about our installation can be found here [[IMA2]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;I-TASSER&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
I-TASSER is a platform for protein structure and function predictions. 3D models are built based on multiple-threading alignments by LOMETS and iterative template fragment assembly simulations; function insights are derived by matching the 3D models with the BioLiP protein function database. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/itasser&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== J ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;JULIA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
Julia is a high-level, high-performance dynamic programming language for technical computing, with syntax that is familiar to users of other technical computing environments. Julia is installed on Penzias.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== L ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;LAMARC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
LAMARC is a program which estimates population-genetic parameters such as population size, population growth rate,&lt;br /&gt;
recombination rate, and migration rates.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It approximates a summation over all possible genealogies that could explain&lt;br /&gt;
the observed sample, which may be sequence, SNP, microsatellite, or electrophoretic data.  LAMARC and its sister program&lt;br /&gt;
MIGRATE are successor programs to the older programs Coalesce, Fluctuate, and Recombine, which are no longer being&lt;br /&gt;
supported.  These programs are memory-intensive, but can run effectively on workstations. They are supported on a variety&lt;br /&gt;
of operating systems.  For more detail on LAMARC please visit the website here [http://evolution.genetics.washington.edu/lamarc/index.html],&lt;br /&gt;
read this paper [http://evolution.genetics.washington.edu/lamarc/download/bioinformatics2006-lamarc2.0.pdf], and look&lt;br /&gt;
at the documentation here [http://evolution.genetics.washington.edu/lamarc/documentation/index.html]. More information&lt;br /&gt;
about our installation can be found here [[LAMARC]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;LAMMPS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
LAMMPS is a classical molecular dynamics code that models an ensemble of particles in a liquid, solid, or gaseous state.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It can model atomic, polymeric, biological, metallic, granular, and coarse-grained systems using a variety of force fields and boundary conditions.  &lt;br /&gt;
LAMMPS runs efficiently on single-processor desktop or laptop machines, but is also designed for parallel computers, including clusters with and without GPUs. &lt;br /&gt;
It will run on any parallel machine that compiles C++ and supports the MPI message-passing library. This includes distributed- or shared-memory parallel &lt;br /&gt;
machines and Beowulf-style clusters. LAMMPS can model systems with only a few particles up to millions or billions. LAMMPS is a freely-available open-source &lt;br /&gt;
code, distributed under the terms of the GNU Public License, which means you can use or modify the code however you wish.  LAMMPS is designed to be easy to &lt;br /&gt;
modify or extend with new capabilities, such as new force fields, atom types, boundary conditions, or diagnostics. A complete description of LAMMPS can be found &lt;br /&gt;
in its on-line manual here [http://lammps.sandia.gov/doc/Manual.html] or from the full PDF manual here [http://lammps.sandia.gov/doc/Manual.pdf]. Information&lt;br /&gt;
about our installation can be found here [[LAMMPS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;LS-DYNA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From its early development in the 1970s, LS-DYNA has evolved into a general purpose material&lt;br /&gt;
stress, collision, and crash analysis program with many built-in material and structural element&lt;br /&gt;
models. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In recent years, the code has also been adapted for both OpenMP and MPI parallel execution&lt;br /&gt;
on a variety of platforms.  The most recent version, LS-DYNA 7.1.2, is installed on &lt;br /&gt;
ANDY at the CUNY HPC Center under an academic license held by the City College of New York.&lt;br /&gt;
The use of this license to do work that is commercial in any way is prohibited.&lt;br /&gt;
&lt;br /&gt;
Details on LS-DYNA&#039;s use, input deck construction, and execution options can be found in the LS-DYNA&lt;br /&gt;
manual here [http://ftp.lstc.com/user/manuals/ls-dyna_971_manual_k_rev1.pdf]. All files related&lt;br /&gt;
to the HPC Center installation of version 971 (executables and example inputs) are located in:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
/share/apps/lsdyna/default/[bin,examples]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[LSDYNA]].&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== M ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MAGMA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
MAGMA is a library similar to LAPACK but for hybrid architectures. MAGMA provides implementations for CUDA, Intel Xeon Phi, and OpenCL. &lt;br /&gt;
On CUNY HPCC systems, MAGMA is installed in its CUDA variant only on Penzias.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MATHEMATICA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
“Mathematica” is a fully integrated technical computing system that combines fast, high-precision numerical and symbolic computation with data visualization and programming capabilities. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Mathematica is currently installed on the CUNY HPC Center&#039;s ANDY cluster (andy.csi.cuny.edu) and KARLE standalone server (karle.csi.cuny.edu). The basics of running Mathematica on CUNY HPC systems are presented here. Additional information on how to use Mathematica can be found at http://www.wolfram.com/learningcenter/&lt;br /&gt;
&lt;br /&gt;
More information is available in this wiki, find it here [[MATHEMATICA]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MATLAB&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The MATLAB high-performance language for technical computing&lt;br /&gt;
integrates computation, visualization, and programming in an&lt;br /&gt;
easy-to-use environment where problems and solutions are expressed in&lt;br /&gt;
familiar mathematical notation.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Typical uses include:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Math and computation&lt;br /&gt;
&lt;br /&gt;
Algorithm development&lt;br /&gt;
&lt;br /&gt;
Data acquisition&lt;br /&gt;
&lt;br /&gt;
Modeling, simulation, and prototyping&lt;br /&gt;
&lt;br /&gt;
Data analysis, exploration, and visualization&lt;br /&gt;
&lt;br /&gt;
Scientific and engineering graphics&lt;br /&gt;
&lt;br /&gt;
Application development, including graphical user interface building&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[MATLAB]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MET (Model Evaluation Tools)&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MET was developed by the National Center for Atmospheric Research (NCAR) Developmental Testbed Center (DTC) through the generous support of the U.S. Air Force Weather Agency (AFWA) and the National Oceanic and Atmospheric Administration (NOAA).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;MET is designed to be a highly-configurable, state-of-the-art suite of verification tools. It was developed using output from the Weather Research and Forecasting (WRF) modeling system but may be applied to the output of other modeling systems as well.&lt;br /&gt;
&lt;br /&gt;
MET provides a variety of verification techniques, including:&lt;br /&gt;
&lt;br /&gt;
*Standard verification scores comparing gridded model data to point-based observations&lt;br /&gt;
*Standard verification scores comparing gridded model data to gridded observations&lt;br /&gt;
*Spatial verification methods comparing gridded model data to gridded observations using neighborhood, object-based, and intensity-scale decomposition approaches&lt;br /&gt;
*Probabilistic verification methods comparing gridded model data to point-based or gridded observations&lt;br /&gt;
&lt;br /&gt;
More information about use and set-up can be found here [[MET]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Migrate&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Migrate estimates population parameters, effective population sizes and migration rates&lt;br /&gt;
of n populations, using genetic data.  It uses a coalescent theory approach taking into&lt;br /&gt;
account the history of mutations and the uncertainty of the genealogy.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;The estimates of the parameter values are achieved by either a Maximum likelihood (ML) approach or Bayesian&lt;br /&gt;
inference (BI). Migrate&#039;s output is presented in a text file and in a PDF file. The PDF file&lt;br /&gt;
eventually will contain all possible analyses including histograms of posterior distributions.&lt;br /&gt;
More information about our installation can be found here [[MIGRATE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MPFR&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The MPFR library is a C library for multiple-precision floating-point computations with correct rounding. MPFR has continuously been supported by &lt;br /&gt;
INRIA, and the current main authors come from the Caramel and AriC project-teams at Loria (Nancy, France) and LIP (Lyon, France), respectively; see &lt;br /&gt;
more on the credit page.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
MPFR is based on the GMP multiple-precision library. The main goal of MPFR is to provide a library for multiple-precision &lt;br /&gt;
floating-point computation which is both efficient and has a well-defined semantics. It copies the good ideas from the ANSI/IEEE-754 standard for &lt;br /&gt;
double-precision floating-point arithmetic (53-bit significand). The library is installed on PENZIAS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MRBAYES&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MrBayes is a program for the Bayesian estimation of phylogeny.  Bayesian inference of&lt;br /&gt;
phylogeny is based upon a quantity called the posterior probability distribution of trees,&lt;br /&gt;
which is the probability of a tree conditioned on certain observations.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The conditioning is&lt;br /&gt;
accomplished using Bayes&#039;s theorem. The posterior probability distribution of trees is&lt;br /&gt;
impossible to calculate analytically; instead, MrBayes uses a simulation technique called&lt;br /&gt;
Markov chain Monte Carlo (or MCMC) to approximate the posterior probabilities of trees.&lt;br /&gt;
More information about our installation can be found here [[MRBAYES]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;msABC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
msABC is a program for simulating various neutral evolutionary demographic scenarios&lt;br /&gt;
based on the software ms (Hudson 2002). msABC extends ms, calculating a multitude of&lt;br /&gt;
summary statistics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Therefore, msABC is suitable for performing the sampling step of an&lt;br /&gt;
Approximate Bayesian Computation analysis (ABC), under various neutral demographic&lt;br /&gt;
models. The main advantages of msABC are (i) use of various prior distributions, such as&lt;br /&gt;
uniform, Gaussian, log-normal, and gamma, (ii) implementation of a multitude of summary statistics&lt;br /&gt;
for one or more populations, (iii) efficient implementation, which allows the analysis of&lt;br /&gt;
hundreds of loci and chromosomes even on a single computer, (iv) extended flexibility, such&lt;br /&gt;
as simulation of loci of variable size and simulation of missing data.&lt;br /&gt;
More information about our installation can be found here [[msABC]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MSMS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MSMS is a tool to generate sequence samples under both neutral models and single locus selection models.&lt;br /&gt;
MSMS permits  the full range of demographic models provided by its relative MS (Hudson, 2002).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
In particular, it allows for multiple demes with arbitrary migration patterns, population growth and decay in each deme, and&lt;br /&gt;
for population splits and mergers. Selection (including dominance) can depend on the deme and also change&lt;br /&gt;
with time.&lt;br /&gt;
More information about our installation can be found here [[MSMS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== N ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;NAMD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NAMD is a parallel molecular dynamics code designed for high-performance simulation&lt;br /&gt;
of large biomolecular systems. [http://www.ks.uiuc.edu/Research/namd].&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The main server for molecular dynamics calculations is PENZIAS, which supports both GPU and non-GPU versions of NAMD.&lt;br /&gt;
However, the MPI-only (no GPU support) parallel version of NAMD is also installed on SALK and ANDY. &lt;br /&gt;
More information about our installation can be found here [[NAMD]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Network Simulator-2 (NS2)&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NS2 is a discrete event simulator targeted at networking research. NS2 provides&lt;br /&gt;
substantial support for simulation of TCP, routing, and multicast protocols over&lt;br /&gt;
wired and wireless (local and satellite) networks.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It is installed on BOB at the CUNY HPC Center. For more detailed information, look [http://www.isi.edu/nsnam/ns/ here].&lt;br /&gt;
More information about our installation can be found here [[NS2]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;NWChem&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NWChem is an ab initio computational chemistry software package which also includes molecular dynamics (MM, MD) and coupled, quantum mechanical and molecular dynamics functionality (QM-MD).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
NWChem has been developed by the Molecular Sciences Software group at the Department of Energy&#039;s EMSL. The software is available on PENZIAS and ANDY.&lt;br /&gt;
More information about our installation can be found here [[NWChem]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== O == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Octopus&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Octopus is a pseudopotential real-space package aimed at the simulation of the electron-ion dynamics of one-, two-, and three-dimensional finite systems subject to time-dependent electromagnetic fields.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The program is based on time-dependent density-functional theory (TDDFT) in the Kohn-Sham scheme. All quantities are expanded in a regular mesh in real space, and the simulations are performed in real time. The program has been successfully used to calculate linear and non-linear absorption spectra, harmonic spectra, laser induced fragmentation, etc. of a variety of systems.&lt;br /&gt;
More information about our installation can be found here [[OCTOPUS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;OpenMM&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenMM is both a library and a stand-alone application which provides tools for modern molecular modeling simulation. As a library it can be hooked into any code, allowing that code to do molecular modeling with minimal extra coding.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Moreover, OpenMM places a strong emphasis on hardware acceleration via GPUs, providing not just a consistent API but much greater performance than most other available codes. OpenMM was developed as part of the Physics-Based Simulation project led by Prof. Pande.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;OpenFOAM&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenFOAM is, first and foremost, a library that users may incorporate into their own code(s). OpenFOAM is installed on PENZIAS.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
More information about our installation can be found here [[OpenFOAM]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;OpenSees&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenSees, the Open System for Earthquake Engineering Simulation, is an object-oriented, open source software framework.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It allows users to create both serial and parallel finite element computer applications for simulating the response of structural and geotechnical systems subjected to earthquakes and other hazards. OpenSees is primarily written in C++ and uses several Fortran and C numerical libraries for linear equation solving, and material and element routines. The software is installed on PENZIAS.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ORCA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ORCA is an electronic structure program capable of carrying out geometry optimizations and of predicting a large number of spectroscopic parameters at different levels of theory.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Besides Hartree-Fock theory, density functional theory (DFT), and semiempirical methods, high-level ab initio quantum chemical methods, based on configuration interaction and coupled cluster methods, are included in ORCA to an increasing degree.&lt;br /&gt;
More information about our installation can be found here [[ORCA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== P == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ParGAP&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ParGAP is built on top of the GAP system. The latter is a system for computational discrete algebra, with particular emphasis on computational group theory. GAP provides a programming language, a library of thousands of functions implementing algebraic algorithms written in the GAP language, as well as large data libraries of algebraic objects.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The ParGAP (Parallel GAP) package itself provides a way of writing parallel programs using the GAP language. Former names of the package were ParGAP/MPI and GAP/MPI; the word MPI refers to Message Passing Interface, a well-known standard for parallelism. ParGAP is based on the MPI standard, and this distribution includes a subset implementation of MPI, to provide a portable layer with a high level interface to BSD sockets.&lt;br /&gt;
More information about our installation can be found here [[ParGAP]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;POPABC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PopABC is a computer package to estimate historical demographic parameters of closely related species/populations (e.g. population size, migration rate, mutation rate, recombination rate, splitting events) within an Isolation with Migration model.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The software performs coalescent simulation in the framework of approximate Bayesian computation (ABC, Beaumont et al, 2002). PopABC can also be used to perform Bayesian model choice to discriminate between different demographic scenarios. The program can be used either for research or for education and teaching purposes. Further details and a manual can be found at the POPABC website here [http://code.google.com/p/popabc]&lt;br /&gt;
More information about our installation can be found here [[POPABC]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;PHOENICS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PHOENICS is an integrated Computational Fluid Dynamics (CFD) package for the preparation, simulation, and visualization of&lt;br /&gt;
processes involving fluid flow, heat or mass transfer, chemical reaction, and/or combustion in engineering equipment, building&lt;br /&gt;
design, and the environment.  More detail is available at the CHAM website, here http://www.cham.co.uk. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Although we expect most users to pre- and post-process their jobs on office-local clients, the CUNY HPC Center has installed&lt;br /&gt;
the Unix version of the &#039;&#039;entire&#039;&#039; PHOENICS package on ANDY.   PHOENICS is installed in /share/apps/phoenics/default where all&lt;br /&gt;
the standard PHOENICS directories are located (d_allpro, d_earth, d_enviro, d_photo, d_priv1, d_satell, etc.).  Of particular interest&lt;br /&gt;
on ANDY is the MPI parallel version of the &#039;earth&#039; executable &#039;parexe&#039; which makes full use of the parallel processing power of the &lt;br /&gt;
ANDY cluster for larger individual jobs.  While the parallel scaling properties of PHOENICS jobs will vary depending on the job size,&lt;br /&gt;
processor type, and the cluster interconnect, larger work loads will generally scale and run efficiently on from 8 to 32 processors,&lt;br /&gt;
while smaller problems will scale efficiently only up to about 4 processors.  More detail on parallel PHOENICS is available at&lt;br /&gt;
http://www.cham.co.uk/products/parallel.php.   Aside from the tightly coupled MPI parallelism of &#039;parexe&#039;, users can run multiple&lt;br /&gt;
instances of the non-parallel modules on ANDY (including the serial &#039;earexe&#039; module) when a parametric approach can be used&lt;br /&gt;
to solve their problems.&lt;br /&gt;
More information about our installation can be found here [[PHOENICS]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;PHRAP-PHRED&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PHRAP and PHRED are part of the DNA sequence analysis tool set that also includes the programs&lt;br /&gt;
CROSSMATCH and SWAT. These tools are described in detail here [http://www.phrap.org/phredphrapconsed.html],&lt;br /&gt;
but a brief description of both, extracted from their manuals, follows.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
PHRED and PHRAP (along with CONSED) can be used for both small sequence assemblies and larger shotgun analyses. This makes the&lt;br /&gt;
tools a perhaps under-utilized set for smaller non-genomic groups.  Some variables may need to be adjusted,&lt;br /&gt;
particularly in CONSED, but researchers that have multiple sequences from a small locus can use the &lt;br /&gt;
suite, starting from their chromatogram files.  &lt;br /&gt;
More information about our installation can be found here [[PHRAP-PHRED]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;PyRAD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Reduced-representation genomic sequence data (e.g., RADseq, GBS, ddRAD) are commonly used to study population-level research questions and consequently most software packages for assembling or analyzing such data are designed for sequences with little variation across samples.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Phylogenetic analyses typically include species with deeper divergence times (more variable loci across samples) and thus a different approach to clustering and identifying orthologs will perform better. pyRAD is intended for use with any type of restriction-site associated DNA. It currently supports RAD, ddRAD, PE-ddRAD, GBS, PE-GBS, EzRAD, PE-EzRAD, 2B-RAD, nextRAD, and can be extended to other types.&lt;br /&gt;
More information about our installation can be found here [[PyRAD]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Python&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Python is a programming language that lets you work more quickly and integrate your systems more effectively. You can learn to use Python and see almost immediate gains in productivity and lower maintenance costs. [http://www.python.org/]&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
There are two supported versions installed on Andy system: &lt;br /&gt;
&lt;br /&gt;
* Python 3.1.3 located under /share/apps/python/3.1.3/bin&lt;br /&gt;
* Python 2.7.3 located under /share/apps/epd/7.3-2/bin&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[PYTHON]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Installing Python packages&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Users may install Python packages/modules in their own space.  Many packages available in Python repositories can be installed easily with the pip package manager, which is available in all of the Anaconda and Miniconda builds.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Users must remember that running pip without first loading a Python module will install packages against the system Python, which matches the interpreter on the login node only. The Python interpreters available on all nodes (after loading a module) are installed under /share/usr/compilers/python. Thus, when installing packages in user space it is very important to follow the procedure outlined below. The following example demonstrates how users can install the package &amp;quot;guppy&amp;quot; in their own space:&lt;br /&gt;
&lt;br /&gt;
For Python 2.7.13 in Anaconda build:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load python/2.7.13_anaconda&lt;br /&gt;
pip install guppy --user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For Python 3.6.0 in Anaconda build:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load python/3.6.0_anaconda&lt;br /&gt;
pip install guppy --user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For Python 2.7.13 in Miniconda 2:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load python/miniconda2&lt;br /&gt;
pip install guppy --user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For Python 3.6.0 in Miniconda 3:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load python/miniconda3&lt;br /&gt;
pip install guppy --user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check that the package was installed properly, type:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
pip list | grep guppy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
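Because &amp;quot;--user&amp;quot; installs packages into a per-user site directory tied to the active interpreter, it can also help to confirm where that directory is. A generic sketch, not specific to our builds:&lt;br /&gt;
&lt;br /&gt;
```shell
# Print the per-user site-packages directory that `pip install --user` targets
# for the currently active python3 interpreter
python3 -c "import site; print(site.getusersitepackages())"
```
&lt;br /&gt;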
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== Q == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;QIIME&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
QIIME (pronounced &amp;quot;chime&amp;quot;) stands for Quantitative Insights Into Microbial Ecology. QIIME is a pipeline application that uses numerous third-party applications.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
QIIME takes users from their raw sequencing output through initial analyses such as OTU picking, taxonomic assignment, and construction of phylogenetic trees from representative sequences of OTUs, and through downstream statistical analysis, visualization, and production of publication-quality graphics.&lt;br /&gt;
More information about our installation can be found here [[QIIME]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== R == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;R&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
R is a free software environment for statistical computing and graphics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:15px;&amp;quot; &amp;gt;General Notes&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The R language has become a de facto standard among statisticians and is widely used for statistical software development and data analysis. R is available on the following HPCC servers: Karle, Penzias, Appel, and Andy. Karle is the only machine where R can be used without submitting jobs to the SLURM manager; on all other systems users must submit their R jobs via the SLURM batch scheduler.&lt;br /&gt;
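For the SLURM-managed machines, a minimal batch script for a serial R job might look like the sketch below (the module name, resource values, and file names are illustrative assumptions; see [[R]] for site-specific details):&lt;br /&gt;
&lt;br /&gt;
```shell
#!/bin/bash
#SBATCH --job-name=r_job        # name shown in the queue
#SBATCH --ntasks=1              # a serial R job needs one task
#SBATCH --mem=4G                # memory request (illustrative)
#SBATCH --time=01:00:00        # wall-clock limit (illustrative)

# Load the R module (the exact module name may differ; check `module avail`)
module load r

# Run the analysis script non-interactively
Rscript analysis.R
```
The script would then be submitted with &amp;quot;sbatch&amp;quot; and monitored with &amp;quot;squeue&amp;quot;.&lt;br /&gt;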
More information about our installation can be found here [[R]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;RAXML&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Randomized Axelerated Maximum Likelihood (RAxML) is a program for sequential and parallel&lt;br /&gt;
maximum likelihood based inference of large phylogenetic trees.  It is a descendant of fastDNAml,&lt;br /&gt;
which in turn was derived from Joe Felsenstein’s DNAml, which is part of the PHYLIP package.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
RAxML is installed at the CUNY HPC Center on ANDY.  Multiple versions are available, in both serial and MPI-parallel builds.  The MPI-parallel version should be run on four or more cores and is also installed on PENZIAS. &lt;br /&gt;
More information about our installation can be found here [[RAXML]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== S == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;SAGE&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Sage can be used to study elementary and advanced, pure and applied mathematics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
This includes a huge range of mathematics, including basic algebra, calculus, elementary to very&lt;br /&gt;
advanced number theory, cryptography, numerical computation, commutative algebra, group&lt;br /&gt;
theory, combinatorics, graph theory, exact linear algebra and much more.&lt;br /&gt;
More information about our installation can be found here [[SAGE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;SAMTOOLS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
SAMTOOLS provides various utilities for manipulating alignments in the SAM format, including sorting,&lt;br /&gt;
merging, indexing and generating alignments in a per-position format.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
SAM (Sequence Alignment/Map) format is a generic format for storing large nucleotide sequence alignments. SAM is a compact format that:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Is flexible enough to store all the alignment information generated by various alignment programs;&lt;br /&gt;
&lt;br /&gt;
Is simple enough to be easily generated by alignment programs or converted from existing formats;&lt;br /&gt;
&lt;br /&gt;
Allows most operations on the alignment to work without loading the whole alignment into memory;&lt;br /&gt;
&lt;br /&gt;
Allows the file to be indexed by genomic position to efficiently retrieve all reads aligning to a locus.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
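As an illustration of these utilities, a typical post-alignment workflow with SAMtools might look like this (file and region names are placeholders; option syntax varies slightly between SAMtools versions):&lt;br /&gt;
&lt;br /&gt;
```shell
# Convert SAM to BAM, sort by genomic position, and index for random access
samtools view -bS aln.sam > aln.bam        # SAM -> BAM
samtools sort -o aln.sorted.bam aln.bam    # coordinate sort
samtools index aln.sorted.bam              # build the .bai index
samtools view aln.sorted.bam chr1:10000-20000   # fetch reads at a locus
```
&lt;br /&gt;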
&lt;br /&gt;
More information about our installation can be found here [[SAMTOOLS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;SAS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
SAS (pronounced &amp;quot;sass&amp;quot;, originally Statistical Analysis System) is an integrated system of software products provided by SAS Institute Inc.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It enables the programmer to perform:&lt;br /&gt;
:* data entry, retrieval, management, and mining&lt;br /&gt;
:* report writing and graphics&lt;br /&gt;
:* statistical analysis&lt;br /&gt;
:* business planning, forecasting, and decision support&lt;br /&gt;
:* operations research and project management&lt;br /&gt;
:* quality improvement&lt;br /&gt;
:* applications development&lt;br /&gt;
:* data warehousing (extract, transform, load)&lt;br /&gt;
:* platform-independent and remote computing&lt;br /&gt;
More information about our installation can be found here [[SAS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Stata/MP&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Stata is a complete, integrated statistical package that provides tools for data analysis, data management, and graphics. Stata/MP takes advantage of multiprocessor computers. The CUNY HPC Center is licensed to use Stata on up to 8 cores. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Currently Stata/MP is available for users on Karle (karle.csi.cuny.edu). &lt;br /&gt;
More information about our installation can be found here [[STATA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Structurama&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Structurama is a program for inferring population structure from genetic data. The program assumes that the sampled loci&lt;br /&gt;
are in linkage equilibrium and that the allele frequencies for each population are drawn from a Dirichlet probability distribution. Two different models for population structure are implemented.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
First, Structurama offers the method of Pritchard et al. (2000), in which the number of populations is considered fixed. The program also allows the number of populations to be a random variable following a Dirichlet process prior (Pella and Masuda, 2006; Huelsenbeck and Andolfatto, 2007).&lt;br /&gt;
More information about our installation can be found here [[STRUCTURAMA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Structure&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The program Structure is a free software package for using multi-locus genotype data to investigate&lt;br /&gt;
population structure.  Its uses include inferring the presence of distinct populations, assigning individuals&lt;br /&gt;
to populations, studying hybrid zones, identifying migrants and admixed individuals, and estimating&lt;br /&gt;
population allele frequencies in situations where many individuals are migrants or admixed.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;It can be applied to most of the commonly used genetic markers, including SNPs, microsatellites, RFLPs, and AFLPs. More detailed information about Structure can be found at the web site here [http://pritch.bsd.uchicago.edu/structure.html]. Structure is installed on ANDY at the CUNY HPC Center.  Structure is a serial program. &lt;br /&gt;
More information about our installation can be found here [[STRUCTURE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== T == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Thrust Library (CUDA)&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Thrust is a C++ template library for CUDA based on the Standard Template Library (STL). Thrust allows you&lt;br /&gt;
to implement high performance parallel applications with minimal programming effort through a high-level&lt;br /&gt;
interface that is fully interoperable with CUDA C.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Thrust is now integrated into the default&lt;br /&gt;
CUDA distribution. The HPC Center is currently running CUDA as the default on PENZIAS, which includes the&lt;br /&gt;
Thrust library. More information about our installation can be found here [[THRUST]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;TOPHAT&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
TOPHAT is a fast splice junction mapper for RNA-Seq reads. It aligns RNA-Seq reads to mammalian-sized&lt;br /&gt;
genomes using the ultra high-throughput short read aligner Bowtie, and then analyzes the mapping results&lt;br /&gt;
to identify splice junctions between exons.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
TOPHAT is part of a sequence alignment and analysis tool chain developed at Johns Hopkins University, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics and Computational Biology.&lt;br /&gt;
More information about our installation can be found here [[TOPHAT]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Trinity&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Trinity, developed at the Broad Institute and the Hebrew University of Jerusalem, represents a novel method for the efficient and robust de novo reconstruction of transcriptomes from RNA-seq data.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Trinity combines three independent software modules: Inchworm, Chrysalis, and Butterfly, applied sequentially to process large volumes of RNA-seq reads. Trinity partitions the sequence data into many individual de Bruijn graphs, each representing the transcriptional complexity at a given gene or locus, and then processes each graph independently to extract full-length splicing isoforms and to tease apart transcripts derived from paralogous genes.&lt;br /&gt;
More information about our installation can be found here [[TRINITY]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== U == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;USEARCH&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
USEARCH is a unique sequence analysis tool with thousands of users world-wide.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
USEARCH offers search and clustering algorithms that are often orders of magnitude faster than BLAST. &lt;br /&gt;
More information about our installation can be found here [[USEARCH]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== V == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;VELVET&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
Velvet is a set of algorithms for &#039;&#039;de novo&#039;&#039; short read assembly using de Bruijn graphs. It was developed at the European Bioinformatics Institute, Cambridge, UK. More information about our installation can be found here [[VELVET]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;VSEARCH&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
VSEARCH is an open-source alternative to USEARCH.  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
VSEARCH stands for vectorized search, as the tool takes advantage of parallelism in the form of SIMD vectorization as well as multiple threads to perform accurate alignments at high speed. VSEARCH uses an optimal global aligner (full dynamic programming Needleman-Wunsch), in contrast to USEARCH, which by default uses a heuristic seed-and-extend aligner. This usually results in more accurate alignments and overall improved sensitivity (recall) with VSEARCH, especially for alignments with gaps. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Additional details on VSEARCH can be found at: [https://github.com/torognes/vsearch this link]&lt;br /&gt;
&lt;br /&gt;
VSEARCH is installed on the Penzias HPC cluster. To start using VSEARCH, load the corresponding module first:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load vsearch  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
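A typical invocation after loading the module dereplicates reads and then clusters them at 97% identity (file names and the identity threshold are illustrative):&lt;br /&gt;
&lt;br /&gt;
```shell
# Collapse identical reads, then cluster the unique sequences at 97% identity
vsearch --derep_fulllength reads.fasta --output uniques.fasta
vsearch --cluster_fast uniques.fasta --id 0.97 \
        --centroids centroids.fasta --threads 4
```
&lt;br /&gt;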
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;VMD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
VMD is a molecular visualization program for displaying, animating, and analyzing large biomolecular systems using 3-D graphics and built-in scripting.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It was developed by The Theoretical and Computational Biophysics Group at the University of Illinois. It is documented on the [http://www.ks.uiuc.edu/Research/vmd/ TCB&#039;s homepage].&lt;br /&gt;
&lt;br /&gt;
VMD is installed on Karle. To use it from the command line, log in to Karle as usual and start VMD by typing &amp;quot;vmd&amp;quot; followed by return, or alternatively use the full path: &lt;br /&gt;
&amp;quot;/share/apps/vmd/default/bin/vmd&amp;quot;&lt;br /&gt;
&lt;br /&gt;
In order to use VMD in GUI mode, log in to Karle with the -X option (see [http://wiki.csi.cuny.edu/cunyhpc/index.php/Main_Page#X11_Forwarding_or_Tunneling this article] &lt;br /&gt;
for details) and start VMD as described above.&lt;br /&gt;
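For example, a GUI session could be started like this (replace the user name with your own account):&lt;br /&gt;
&lt;br /&gt;
```shell
# Log in with X11 forwarding enabled, then launch the VMD GUI
ssh -X your_username@karle.csi.cuny.edu
vmd
```
&lt;br /&gt;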
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== W == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;WRF&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The Weather Research and Forecasting (WRF) model is a computer program with dual use for both weather&lt;br /&gt;
forecasting and weather research.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It was created through a partnership that includes the National Oceanic and Atmospheric&lt;br /&gt;
Administration (NOAA), the National Center for Atmospheric Research (NCAR), and more than 150 other organizations&lt;br /&gt;
and universities in the United States and abroad. WRF is the latest numerical model and application to be adopted by NOAA&#039;s&lt;br /&gt;
National Weather Service as well as the U.S. military and private meteorological services. It is also being adopted by&lt;br /&gt;
government and private meteorological services worldwide. More information about our installation can be found here [[WRF]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== X == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Xmgrace&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Grace is a WYSIWYG 2D plotting tool for the X Window System and M*tif. Xmgrace is developed at the Plasma Laboratory, Weizmann Institute of Science. More information about its capabilities can be found at the web page http://plasma-gate.weizmann.ac.il/Grace/&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Grace is installed on Karle. To use it from the command line, log in to Karle as usual and start Grace by typing &amp;quot;xmgrace&amp;quot; followed by return, or alternatively use the full path: &amp;quot;/share/apps/xmgrace/default/grace/bin/xmgrace&amp;quot;&lt;br /&gt;
In order to use Grace in GUI mode, log in to Karle with the -X option (see [http://wiki.csi.cuny.edu/cunyhpc/index.php/Main_Page#X11_Forwarding_or_Tunneling this article] &lt;br /&gt;
for details) and start Xmgrace as described above.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Applications&amp;diff=159</id>
		<title>Applications</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Applications&amp;diff=159"/>
		<updated>2022-11-07T18:11:07Z</updated>

		<summary type="html">&lt;p&gt;James: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div class=&amp;quot;noautonum&amp;quot;&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Application&lt;br /&gt;
!Installed Version&lt;br /&gt;
!Current Version&lt;br /&gt;
!Dependencies&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
__NOTOC__&amp;lt;/div&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;This is an index of available applications sorted by their academic relevance, as well as alphabetically.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
For information about using modules to run your applications go to [[Using Modules To Run Your Applications]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== Computational Physics and Computational Chemistry == &lt;br /&gt;
Applications in this section use classical mechanics, quantum mechanics, and thermodynamics, and are applied in simulation studies of fundamental properties of atoms, molecules, and chemical reactions.  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;AMBER (Assisted Model Building with Energy Refinement)&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
Amber is the collective name for a suite of programs for classical bio-molecular simulations. &lt;br /&gt;
The name &amp;quot;Amber&amp;quot; also denotes the family of potentials (force fields) used with the Amber &lt;br /&gt;
software. Here we discuss only the simulation packages, not the force fields or the free tools&lt;br /&gt;
available via the AmberTools package. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/amber&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;AUTODOCK&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
AutoDock is a suite of automated docking tools.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt; It is designed to predict how small molecules, such as substrates or drug candidates, bind to a receptor of known 3D structure.  AutoDock actually consists of two main programs: &#039;&#039;autodock&#039;&#039; itself performs the docking of the ligand to a set of grids describing the target protein; &#039;&#039;autogrid&#039;&#039; pre-calculates these grids. More information about the software may be found at the autodock web-page [http://autodock.scripps.edu/]. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/autodock&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;CP2K&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CP2K is a program to perform atomistic and molecular simulations of solid state, liquid, molecular, and biological&lt;br /&gt;
systems.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It provides a general framework for different methods such as density functional theory (DFT) using&lt;br /&gt;
a mixed Gaussian and plane waves approach (GPW) and classical pair and many-body potentials. CP2K provides&lt;br /&gt;
state-of-the-art methods for efficient and accurate atomistic simulations. More information about our installation &lt;br /&gt;
can be found here [[CP2K]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;DL_POLY&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
DL_POLY is a general purpose molecular dynamics simulation package developed at Daresbury Laboratory by W. Smith, T.R. Forester and I.T. Todorov. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Both serial and parallel versions are available. The original package was developed by the Molecular Simulation Group (now part of the Computational Chemistry Group, MSG) at Daresbury Laboratory under the auspices of the Engineering and Physical Sciences Research Council (EPSRC) for the EPSRC&#039;s Collaborative Computational Project for the Computer Simulation of Condensed Phases ( CCP5). Later developments were also supported by the Natural Environment Research Council through the eMinerals project. The package is the property of the Central Laboratory of the Research Councils, UK. More information about our installation and use can be found here [[DL_POLY]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GAMESS-US&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GAMESS is a program for ab initio molecular quantum chemistry.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Briefly, GAMESS can compute SCF wavefunctions including RHF, ROHF, UHF, GVB, and MCSCF. Correlation corrections to these SCF wavefunctions include Configuration Interaction, second-order perturbation theory, and Coupled-Cluster approaches, as well as the Density Functional Theory approximation. Excited states can be computed by CI, EOM, or TD-DFT procedures. Nuclear gradients are available for automatic geometry optimization, transition state searches, or reaction path following. Computation of the energy hessian permits prediction of vibrational frequencies, with IR or Raman intensities. Solvent effects may be modeled by discrete Effective Fragment potentials, or continuum models such as the Polarizable Continuum Model. Numerous relativistic computations are available, including infinite-order two-component scalar corrections, with various spin-orbit coupling options. The Fragment Molecular Orbital method permits many of these sophisticated treatments to be used on very large systems by dividing the computation into small fragments. Nuclear wavefunctions can also be computed, in VSCF, or with explicit treatment of nuclear orbitals by the NEO code. More information, including code, can be found here [[GAMESS-US]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Gaussian09&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Gaussian09 is third-party, commercially licensed software from Gaussian, Inc. It is a set of programs for calculating electronic structure.  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Gaussian09 is available for general use only on ANDY. The Gaussian User Guide can be found at [http://www.gaussian.com]. More information about our installation can be found here [[GAUSSIAN09]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GPAW&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It uses real-space uniform grids and multigrid methods, atom-centered basis functions, or&lt;br /&gt;
plane waves. GPAW calculations are controlled through scripts written in the programming language &lt;br /&gt;
Python. GPAW relies on the Atomic Simulation Environment (ASE), which is a Python package&lt;br /&gt;
that helps to describe atoms. The ASE package also handles molecular dynamics, analysis, &lt;br /&gt;
visualization, geometry optimization and more. More information about our installation can &lt;br /&gt;
be found here [[GPAW]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GROMACS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GROMACS (Groningen Machine for Chemical Simulations)&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GROMACS is a full-featured suite of free software, licensed under the GNU&lt;br /&gt;
General Public License to perform molecular dynamics simulations -- in other words, to simulate the behavior of molecular&lt;br /&gt;
systems with hundreds to millions of particles using Newton&#039;s equations of motion.  It is primarily used for research on&lt;br /&gt;
proteins, lipids, and polymers, but can be applied to a wide variety of chemical and biological research questions.&lt;br /&gt;
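As a loose illustration of the Newtonian integration at the heart of any MD engine, here is a toy one-particle velocity-Verlet step in Python (a sketch only; the potential, units, and parameters are invented, and this is not GROMACS code):&lt;br /&gt;

```python
# Toy 1-D molecular dynamics: velocity-Verlet integration of a single
# particle on a harmonic "bond" potential. Purely illustrative; real MD
# engines such as GROMACS use full force fields and constraint algorithms.

def force(x, k=1.0):
    """Hooke's-law restoring force, F = -k * x (invented spring constant)."""
    return -k * x

def velocity_verlet(x, v, dt, mass=1.0, steps=1000):
    """Advance Newton's equations of motion by `steps` timesteps."""
    a = force(x) / mass
    for _ in range(steps):
        x = x + v * dt + 0.5 * a * dt * dt   # position update
        a_new = force(x) / mass              # force at the new position
        v = v + 0.5 * (a + a_new) * dt       # velocity update
        a = a_new
    return x, v

x, v = velocity_verlet(x=1.0, v=0.0, dt=0.01)
# Velocity Verlet is symplectic, so the total energy stays near its
# initial value of 0.5 (kinetic plus potential) over the trajectory.
energy = 0.5 * v * v + 0.5 * x * x
print(energy)
```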
&lt;br /&gt;
Details and submission scripts for production runs can be found at:&lt;br /&gt;
http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/gromacs&lt;br /&gt;
Please note that preparing a molecular system for simulation with GROMACS tools cannot be done on the login node. Instead, users must either use their own workstations or the interactive or development queues.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;HONDO PLUS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Hondo Plus is a versatile electronic structure code that combines work from&lt;br /&gt;
the original Hondo application developed by Harry King in the lab of Michel Dupuis&lt;br /&gt;
and John Rys, and that of numerous subsequent contributors. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It is currently distributed from the research lab of Dr. Donald Truhlar at the University &lt;br /&gt;
of Minnesota.  Part of the advantage of Hondo Plus is the availability of source&lt;br /&gt;
implementations of a wide variety of model chemistries developed over its lifetime&lt;br /&gt;
that researchers can adapt to their particular needs.  The license to use the code requires&lt;br /&gt;
a literature citation which is documented in the Hondo Plus 5.1 manual found&lt;br /&gt;
at:&lt;br /&gt;
&lt;br /&gt;
http://comp.chem.umn.edu/hondoplus/HONDOPLUS_Manual_v5.1.2007.2.17.pdf &lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[HONDO PLUS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;HOOMD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
HOOMD performs general-purpose particle dynamics simulations, taking advantage of NVIDIA GPUs to attain a level of performance&lt;br /&gt;
equivalent to many processor cores on a fast cluster.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Unlike some other applications in the particle and molecular dynamics space, HOOMD developers have worked to implement &lt;br /&gt;
all of the code&#039;s computationally intensive kernels on the GPU, although currently only single-node, single-GPU or &lt;br /&gt;
OpenMP-GPU runs are possible. There is no MPI-GPU or distributed parallel GPU version available at this time.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;LAMMPS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
LAMMPS is a classical molecular dynamics code that models an ensemble of particles in a liquid, solid, or gaseous state.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It can model atomic, polymeric, biological, metallic, granular, and coarse-grained systems using a variety of force fields and boundary conditions.  &lt;br /&gt;
LAMMPS runs efficiently on single-processor desktop or laptop machines, but is also designed for parallel computers, including clusters with and without GPUs. &lt;br /&gt;
It will run on any parallel machine that compiles C++ and supports the MPI message-passing library. This includes distributed- or shared-memory parallel &lt;br /&gt;
machines and Beowulf-style clusters. LAMMPS can model systems with only a few particles up to millions or billions. LAMMPS is a freely-available open-source &lt;br /&gt;
code, distributed under the terms of the GNU General Public License, which means you can use or modify the code however you wish.  LAMMPS is designed to be easy to &lt;br /&gt;
modify or extend with new capabilities, such as new force fields, atom types, boundary conditions, or diagnostics. A complete description of LAMMPS can be found &lt;br /&gt;
in its on-line manual here [http://lammps.sandia.gov/doc/Manual.html] or from the full PDF manual here [http://lammps.sandia.gov/doc/Manual.pdf]. Information&lt;br /&gt;
about our installation can be found here [[LAMMPS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;NAMD&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NAMD is a parallel molecular dynamics code designed for high-performance simulation&lt;br /&gt;
of large biomolecular systems. [http://www.ks.uiuc.edu/Research/namd].&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The main server for molecular dynamics calculations is PENZIAS, which supports both the GPU and non-GPU versions of NAMD.&lt;br /&gt;
However, the MPI-only (no GPU support) parallel versions of NAMD are also installed on SALK and ANDY. &lt;br /&gt;
More information about our installation can be found here [[NAMD]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;NWChem&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NWChem is an ab initio computational chemistry software package which also includes molecular dynamics (MM, MD) and coupled, quantum mechanical and molecular dynamics functionality (QM-MD).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
NWChem has been developed by the Molecular Sciences Software group at the Department of Energy&#039;s EMSL. The software is available on PENZIAS and ANDY.&lt;br /&gt;
More information about our installation can be found here [[NWChem]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Octopus&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Octopus is a pseudopotential real-space package aimed at the simulation of the electron-ion dynamics of one-, two-, and three-dimensional finite systems subject to time-dependent electromagnetic fields.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The program is based on time-dependent density-functional theory (TDDFT) in the Kohn-Sham scheme. All quantities are expanded in a regular mesh in real space, and the simulations are performed in real time. The program has been successfully used to calculate linear and non-linear absorption spectra, harmonic spectra, laser induced fragmentation, etc. of a variety of systems.&lt;br /&gt;
More information about our installation can be found here [[OCTOPUS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;OpenMM&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenMM is both a library and a stand-alone application that provides tools for modern molecular modeling simulation. As a library it can be hooked into any code, allowing that code to do molecular modeling with minimal extra coding.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Moreover, OpenMM has a strong emphasis on hardware acceleration via GPUs, providing not just a consistent API but also much greater performance than most comparable codes. OpenMM was developed as part of the Physics-Based Simulation project led by Prof. Pande.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ORCA&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ORCA is an electronic structure program capable of carrying out geometry optimizations and of predicting a large number of spectroscopic parameters at different levels of theory.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Besides Hartree-Fock theory, density functional theory (DFT) and semiempirical methods, high-level ab initio quantum chemical methods, based on the configuration interaction and coupled-cluster methods, are included in ORCA to an increasing degree.&lt;br /&gt;
More information about our installation can be found here [[ORCA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;VMD&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
VMD is a molecular visualization program for displaying, animating, and analyzing large biomolecular systems using 3-D graphics and built-in scripting.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It was developed by The Theoretical and Computational Biophysics Group at the University of Illinois. It is documented on the [http://www.ks.uiuc.edu/Research/vmd/ TCB&#039;s homepage].&lt;br /&gt;
&lt;br /&gt;
VMD is installed on Karle. To use it from the command line, log in to Karle as usual and start VMD by typing &amp;quot;vmd&amp;quot; followed by return, or alternatively use the full path: &lt;br /&gt;
&amp;quot;/share/apps/vmd/default/bin/vmd&amp;quot;&lt;br /&gt;
&lt;br /&gt;
In order to use VMD in GUI mode, log in to Karle with the -X option (see [http://wiki.csi.cuny.edu/cunyhpc/index.php/Main_Page#X11_Forwarding_or_Tunneling this article] &lt;br /&gt;
for details) and start VMD as described above.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Computational Biology == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ANVIO&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
Anvio is an analysis and visualization platform for ‘omics data. Anvio allows various types of workflows to be &lt;br /&gt;
established. [[ANVIO]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BAMOVA&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
Bamova is a package for the genetic analysis of a wide range of organisms on the basis of &lt;br /&gt;
next-generation sequence data. The software implements Bayesian Analysis of Molecular Variance and &lt;br /&gt;
different likelihood models for three different types of molecular data &lt;br /&gt;
(including two models for high throughput sequence data). For more detail on BAMOVA please visit the BAMOVA web site [http://www.uwyo.edu/buerkle/software/bamova] and manual &lt;br /&gt;
here [http://www.uwyo.edu/buerkle/software/bamova/bamova_manual_1.0.pdf]. Further information can also be found here [[BAMOVA]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BAYESCAN&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BAYESCAN is a population genomics software package.  It identifies outlier loci and is applicable &lt;br /&gt;
to both dominant and codominant data. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;BayeScan aims at identifying candidate loci under natural selection from &lt;br /&gt;
genetic data, using differences in allele frequencies between populations.  BayeScan is &lt;br /&gt;
based on the multinomial-Dirichlet model.  One of the scenarios covered consists of an&lt;br /&gt;
island model in which subpopulation allele frequencies are correlated through a common &lt;br /&gt;
migrant gene pool from which they differ in varying degrees.  The difference in allele frequency &lt;br /&gt;
between this common gene pool and each subpopulation is measured by a subpopulation-specific&lt;br /&gt;
FST coefficient.  Therefore, this formulation can consider realistic ecological scenarios &lt;br /&gt;
where the effective size and the immigration rate may differ among subpopulations.&lt;br /&gt;
&lt;br /&gt;
More detailed information on Bayescan can be found at the web site here [http://cmpg.unibe.ch/software/bayescan/index.html]&lt;br /&gt;
and in the manual here [http://cmpg.unibe.ch/software/bayescan/files/BayeScan2.1_manual.pdf]. More information about our &lt;br /&gt;
installation can be found here [[BAYESCAN]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BEST&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BEST is an application aimed at estimating gene trees and the species tree from multilocus sequences.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The program uses information from multiple gene trees and performs a Bayesian analysis to estimate the &lt;br /&gt;
topology of the species tree, divergence times and population sizes.  &lt;br /&gt;
&lt;br /&gt;
It provides a new approach for estimating the mutation-rate-based&lt;br /&gt;
phylogenetic relationships among species.  Its method accounts for deep coalescence,&lt;br /&gt;
but not for other complicating issues such as horizontal transfer or gene duplication. The&lt;br /&gt;
program works in conjunction with the popular Bayesian phylogenetics package, MrBayes&lt;br /&gt;
(Ronquist and Huelsenbeck, Bioinformatics, 2003).  BEST&#039;s parameters are defined using&lt;br /&gt;
the &#039;prset&#039; command from MrBayes.  Details on BEST&#039;s capabilities and options are available&lt;br /&gt;
at the BEST web site here [http://www.stat.osu.edu/~dkp/BEST/introduction]. More information&lt;br /&gt;
about our installation is available here [[BEST]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BEAST&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BEAST is a powerful and flexible evolutionary analysis package for molecular sequence variation. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The package implements a family of Markov chain Monte Carlo (MCMC) algorithms for Bayesian phylogenetic inference, divergence time dating, coalescent analysis, phylogeography and related molecular evolutionary analyses. It is a cross-platform Java program for Bayesian MCMC analysis of molecular sequences. It is entirely orientated towards rooted, time-measured phylogenies inferred using strict or relaxed molecular clock models. It can be used as a method of reconstructing phylogenies, but is also a framework for testing evolutionary hypotheses without conditioning on a single tree topology.  BEAST uses MCMC to average over tree space, so that each tree is weighted proportional to its posterior probability. The distribution includes a simple to use user-interface program called &#039;BEAUti&#039; for setting up standard analyses and a suite of programs for analysing the results. For more detail on BEAST (and BEAUTi) please visit the BEAST web site [http://beast.bio.ed.ac.uk/Main_Page]. More information about our installation can be found here [http://wiki.csi.cuny.edu/cunyhpc/index.php/Template:BEAST BEAST].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BOWTIE2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BOWTIE2 is an ultrafast and memory-efficient tool for aligning sequencing reads to long reference sequences. It is particularly good at aligning reads of about 50 up to 100s or 1,000s of characters, and particularly good at aligning to relatively long (e.g. mammalian) genomes.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
BOWTIE2 indexes the genome with an FM Index to keep its memory&lt;br /&gt;
footprint small: for the human genome, its memory footprint is typically around 3.2 GB. BOWTIE2 supports gapped,&lt;br /&gt;
local, and paired-end alignment modes. BOWTIE2 is part of a sequence alignment and analysis tool chain developed&lt;br /&gt;
at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics&lt;br /&gt;
and Computational Biology.  The other tools in this collection, CUFFLINKS, SAMTOOLS, and TOPHAT are also installed at&lt;br /&gt;
the CUNY HPC Center. Additional information can be found at the BOWTIE2 home page here [http://bowtie-bio.sourceforge.net/bowtie2/index.shtml].&lt;br /&gt;
Information about our installation can be found here [[BOWTIE2]].&lt;br /&gt;
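The FM index mentioned above is built on the Burrows-Wheeler transform. As a purely didactic sketch (real aligners construct the transform from a suffix array rather than by sorting all rotations), a naive BWT in Python looks like:&lt;br /&gt;

```python
# Naive Burrows-Wheeler transform: sort every rotation of the text and
# read off the last column. This is O(n^2 log n) -- fine for a toy string,
# far too slow for a genome, where suffix-array construction is used.

def bwt(text):
    text = text + "$"  # unique sentinel that sorts before A, C, G, T
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(rotation[-1] for rotation in rotations)

print(bwt("ACAACG"))  # -> GC$AAAC
```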
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BPP2&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BPP2 uses a Bayesian modeling approach to generate the posterior probabilities of species assignments taking into account uncertainties due to unknown gene trees and the ancestral coalescent process. For tractability, it relies on a user-specified guide tree to avoid integrating over all possible species delimitations.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Additional information can be found at the download site here [http://abacus.gene.ucl.ac.uk/software.html]. More information about our installation can be found here [[BPP2]].&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BROWNIE&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
BROWNIE is a program for analyzing rates of continuous character evolution and looking for substantial rate differences in different parts of a tree using likelihood&lt;br /&gt;
ratio tests and Akaike Information Criterion (AIC) statistics. It now also implements many other methods for examining trait evolution and methods for doing species&lt;br /&gt;
delimitation. More information about our installation can be found here [[BROWNIE]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;CUFFLINKS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CUFFLINKS assembles transcripts, estimates their abundances, and tests for differential expression and regulation in&lt;br /&gt;
RNA-Seq samples.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It accepts aligned RNA-Seq reads and assembles the alignments into a parsimonious set of transcripts.&lt;br /&gt;
CUFFLINKS then estimates the relative abundances of these transcripts based on how many reads support each one, taking&lt;br /&gt;
into account biases in library preparation protocols.  CUFFLINKS is part of a sequence alignment and analysis tool chain developed&lt;br /&gt;
at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics&lt;br /&gt;
and Computational Biology.  The other tools in this collection, BOWTIE, SAMTOOLS, and TOPHAT are also installed at&lt;br /&gt;
the CUNY HPC Center.  Additional information can be found at the CUFFLINKS home page here [http://abacus.gene.ucl.ac.uk/software.html].&lt;br /&gt;
More information about our installation can be found here [[CUFFLINKS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GARLI&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GARLI is a program that performs phylogenetic inference using the maximum-likelihood criterion.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Several sequence types are supported, including nucleotide, amino acid and codon. Version 2.0 adds support for&lt;br /&gt;
partitioned models and morphology-like data types. It is usable on all operating systems, and is written and&lt;br /&gt;
maintained by Derrick Zwickl at the University of Texas at Austin.  Additional information can be found&lt;br /&gt;
on the GARLI Wiki here [https://www.nescent.org/wg_garli/Main_Page]. More information about our installation &lt;br /&gt;
can be found here [[GARLI]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MPFR&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The MPFR library is a C library for multiple-precision floating-point computations with correct rounding. MPFR has continuously been supported by &lt;br /&gt;
the INRIA, and the current main authors come from the Caramel and AriC project-teams at Loria (Nancy, France) and LIP (Lyon, France), respectively; see &lt;br /&gt;
more on the MPFR credit page.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
MPFR is based on the GMP multiple-precision library. The main goal of MPFR is to provide a library for multiple-precision &lt;br /&gt;
floating-point computation which is both efficient and has a well-defined semantics. It copies the good ideas from the ANSI/IEEE-754 standard for &lt;br /&gt;
double-precision floating-point arithmetic (53-bit significand). The library is installed on PENZIAS.&lt;br /&gt;
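MPFR itself is a C library, but the flavour of working at a user-chosen precision with an explicit rounding mode can be sketched with Python's standard decimal module (an analogy only, not an MPFR binding):&lt;br /&gt;

```python
# Multiple-precision arithmetic with explicit precision and rounding,
# using Python's stdlib decimal module as a stand-in analogy for MPFR.
from decimal import Decimal, getcontext, ROUND_HALF_EVEN

getcontext().prec = 50                   # 50 significant digits
getcontext().rounding = ROUND_HALF_EVEN  # round-to-nearest-even, as in IEEE 754

third = Decimal(1) / Decimal(3)
print(third)      # 0.33333... carried to exactly 50 digits
print(third * 3)  # 0.99999... -- the rounding error is explicit, not hidden
```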
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MRBAYES&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MrBayes is a program for the Bayesian estimation of phylogeny.  Bayesian inference of&lt;br /&gt;
phylogeny is based upon a quantity called the posterior probability distribution of trees,&lt;br /&gt;
which is the probability of a tree conditioned on certain observations.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The conditioning is&lt;br /&gt;
accomplished using Bayes&#039;s theorem. The posterior probability distribution of trees is&lt;br /&gt;
impossible to calculate analytically; instead, MrBayes uses a simulation technique called&lt;br /&gt;
Markov chain Monte Carlo (or MCMC) to approximate the posterior probabilities of trees.&lt;br /&gt;
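The MCMC idea can be illustrated in a few lines of Python with a random-walk Metropolis sampler targeting a standard normal density (a toy stand-in; MrBayes applies the same accept/reject principle to the vastly larger space of trees):&lt;br /&gt;

```python
# Minimal random-walk Metropolis sampler for an unnormalised N(0, 1)
# density -- the same mechanism MrBayes uses to explore tree space,
# shrunk here to a single real-valued parameter.
import math
import random

random.seed(42)  # fixed seed so the run is reproducible

def target(x):
    return math.exp(-0.5 * x * x)  # unnormalised standard normal density

def metropolis(n_samples, step=1.0):
    x, samples = 0.0, []
    for _ in range(n_samples):
        proposal = x + random.uniform(-step, step)
        # Accept with probability min(1, target(proposal) / target(x)).
        if target(proposal) / target(x) > random.random():
            x = proposal
        samples.append(x)
    return samples

draws = metropolis(20000)
mean = sum(draws) / len(draws)
print(mean)  # the sample mean approaches the true mean, 0.0
```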
More information about our installation can be found here [[MRBAYES]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;msABC&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
msABC is a program for simulating various neutral evolutionary demographic scenarios&lt;br /&gt;
based on the software ms (Hudson 2002). msABC extends ms, calculating a multitude of&lt;br /&gt;
summary statistics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Therefore, msABC is suitable for performing the sampling step of an&lt;br /&gt;
Approximate Bayesian Computation analysis (ABC), under various neutral demographic&lt;br /&gt;
models. The main advantages of msABC are (i) use of various prior distributions, such as&lt;br /&gt;
uniform, Gaussian, log-normal, gamma, (ii) implementation of a multitude of summary statistics&lt;br /&gt;
for one or more populations, (iii) efficient implementation, which allows the analysis of&lt;br /&gt;
hundreds of loci and chromosomes even on a single computer, (iv) extended flexibility, such&lt;br /&gt;
as simulation of loci of variable size and simulation of missing data.&lt;br /&gt;
More information about our installation can be found here [[msABC]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MSMS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MSMS is a tool to generate sequence samples under both neutral models and single locus selection models.&lt;br /&gt;
MSMS permits the full range of demographic models provided by its relative MS (Hudson, 2002).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
In particular, it allows for multiple demes with arbitrary migration patterns, population growth and decay in each deme, and&lt;br /&gt;
for population splits and mergers. Selection (including dominance) can depend on the deme and also change&lt;br /&gt;
with time.&lt;br /&gt;
More information about our installation can be found here [[MSMS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;POPABC&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PopABC is a computer package to estimate historical demographic parameters of closely related species/populations (e.g. population size, migration rate, mutation rate, recombination rate, splitting events) within an isolation-with-migration model.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The software performs coalescent simulation in the framework of approximate Bayesian computation (ABC, Beaumont et al, 2002). PopABC can also be used to perform Bayesian model choice to discriminate between different demographic scenarios. The program can be used either for research or for education and teaching purposes. Further details and a manual can be found at the POPABC website here [http://code.google.com/p/popabc]&lt;br /&gt;
More information about our installation can be found here [[POPABC]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;PHOENICS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PHOENICS is an integrated Computational Fluid Dynamics (CFD) package for the preparation, simulation, and visualization of&lt;br /&gt;
processes involving fluid flow, heat or mass transfer, chemical reaction, and/or combustion in engineering equipment, building&lt;br /&gt;
design, and the environment.  More detail is available at the CHAM website, here http://www.cham.co.uk. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Although we expect most users to pre- and post-process their jobs on office-local clients, the CUNY HPC Center has installed&lt;br /&gt;
the Unix version of the &#039;&#039;entire&#039;&#039; PHOENICS package on ANDY.   PHOENICS is installed in /share/apps/phoenics/default where all&lt;br /&gt;
the standard PHOENICS directories are located (d_allpro, d_earth, d_enviro, d_photo, d_priv1, d_satell, etc.).  Of particular interest&lt;br /&gt;
on ANDY is the MPI parallel version of the &#039;earth&#039; executable &#039;parexe&#039; which makes full use of the parallel processing power of the &lt;br /&gt;
ANDY cluster for larger individual jobs.  While the parallel scaling properties of PHOENICS jobs will vary depending on the job size,&lt;br /&gt;
processor type, and the cluster interconnect, larger work loads will generally scale and run efficiently on from 8 to 32 processors,&lt;br /&gt;
while smaller problems will scale efficiently only up to about 4 processors.  More detail on parallel PHOENICS is available at&lt;br /&gt;
http://www.cham.co.uk/products/parallel.php.   Aside from the tightly coupled MPI parallelism of &#039;parexe&#039;, users can run multiple&lt;br /&gt;
instances of the non-parallel modules on ANDY (including the serial &#039;earexe&#039; module) when a parametric approach can be used&lt;br /&gt;
to solve their problems.&lt;br /&gt;
More information about our installation can be found here [[PHOENICS]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;PHRAP-PHRED&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PHRAP and PHRED are part of the DNA sequence analysis tool set that also includes the programs&lt;br /&gt;
CROSSMATCH and SWAT.  These tools are described in detail here [http://www.phrap.org/phredphrapconsed.html],&lt;br /&gt;
but a brief description of both, extracted from their manuals, follows.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
PHRED and PHRAP (along with CONSED) can be used for both small sequence assemblies and larger shotgun analyses. This makes the&lt;br /&gt;
tools a perhaps under-utilized set for smaller non-genomic groups.  Some variables may need to be adjusted,&lt;br /&gt;
particularly in CONSED, but researchers that have multiple sequences from a small locus can use the &lt;br /&gt;
suite, starting from their chromatogram files.  &lt;br /&gt;
More information about our installation can be found here [[PHRAP-PHRED]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;PyRAD&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Reduced-representation genomic sequence data (e.g., RADseq, GBS, ddRAD) are commonly used to study population-level research questions and consequently most software packages for assembling or analyzing such data are designed for sequences with little variation across samples.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Phylogenetic analyses typically include species with deeper divergence times (more variable loci across samples) and thus a different approach to clustering and identifying orthologs will perform better. pyRAD is intended for use with any type of restriction-site associated DNA. It currently supports RAD, ddRAD, PE-ddRAD, GBS, PE-GBS, EzRAD, PE-EzRAD, 2B-RAD, nextRAD, and can be extended to other types.&lt;br /&gt;
More information about our installation can be found here [[PyRAD]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;RAXML&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Randomized Axelerated Maximum Likelihood (RAxML) is a program for sequential and parallel&lt;br /&gt;
maximum likelihood based inference of large phylogenetic trees.  It is a descendant of fastDNAml,&lt;br /&gt;
which in turn was derived from Joe Felsenstein’s DNAml, which is part of the PHYLIP package.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
RAxML is installed at the CUNY HPC Center on ANDY.  Multiple versions are available, in both serial and MPI-parallel builds.  The MPI-parallel version should be run on four or more cores and is also installed on Penzias. &lt;br /&gt;
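As a sketch, modeled on the SLURM scripts elsewhere on this page, an MPI-parallel RAxML job might be submitted with a script like the following (the executable name raxmlHPC-MPI and the input file names are illustrative assumptions; check the installed module for the actual binary name):&lt;br /&gt;

```shell
#!/bin/bash
#SBATCH --partition production
#SBATCH --job-name=raxml_test
#SBATCH --nodes=1
#SBATCH --ntasks=4

# You must explicitly change to the working directory in SLURM
cd $SLURM_SUBMIT_DIR

# Illustrative invocation: the binary name and the alignment file are
# assumptions; consult the RAxML manual and installed module for details.
mpirun -np 4 raxmlHPC-MPI -s alignment.phy -n test_run -m GTRGAMMA -p 12345
```

The -s, -n, -m, and -p options (alignment, run name, model, and random seed) are standard RAxML arguments; the file names here are placeholders.&lt;br /&gt;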
More information about our installation can be found here [[RAXML]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Structurama&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Structurama is a program for inferring population structure from genetic data. The program assumes that the sampled loci&lt;br /&gt;
are in linkage equilibrium and that the allele frequencies for each population are drawn from a Dirichlet probability distribution. Two different models for population structure are implemented.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
First, Structurama offers the method of Pritchard et al. (2000) in which the number of populations is considered fixed. The program also allows the number of populations to be a random variable following a Dirichlet process prior (Pella and Masuda, 2006; Huelsenbeck and Andolfatto, 2007).&lt;br /&gt;
More information about our installation can be found here [[STRUCTURAMA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Structure&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The program Structure is a free software package for using multi-locus genotype data to investigate&lt;br /&gt;
population structure.  Its uses include inferring the presence of distinct populations, assigning individuals&lt;br /&gt;
to populations, studying hybrid zones, identifying migrants and admixed individuals, and estimating&lt;br /&gt;
population allele frequencies in situations where many individuals are migrants or admixed.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;It can be applied to most of the commonly used genetic markers, including SNPs, microsatellites, RFLPs and AFLPs. More detailed information about Structure can be found at the web site here [http://pritch.bsd.uchicago.edu/structure.html]. Structure is installed on ANDY at the CUNY HPC Center.  Structure is a serial program. &lt;br /&gt;
More information about our installation can be found here [[STRUCTURE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;TOPHAT&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
TOPHAT is a fast splice junction mapper for RNA-Seq reads. It aligns RNA-Seq reads to mammalian-sized&lt;br /&gt;
genomes using the ultra high-throughput short read aligner Bowtie, and then analyzes the mapping results&lt;br /&gt;
to identify splice junctions between exons.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
TOPHAT is part of a sequence alignment and analysis tool chain developed at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics and Computational Biology.&lt;br /&gt;
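As a brief illustration of the workflow described above (the index and read file names are hypothetical), a typical TOPHAT run aligns paired-end reads against a pre-built Bowtie index:&lt;br /&gt;

```shell
# Illustrative TOPHAT invocation; "genome_index" is the basename of a
# pre-built Bowtie index, and the FASTQ file names are hypothetical.
tophat -p 4 -o tophat_out genome_index reads_1.fastq reads_2.fastq
```

The splice junctions identified are written to the output directory (here tophat_out) along with the accepted alignments.&lt;br /&gt;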
More information about our installation can be found here [[TOPHAT]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Trinity&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Trinity, developed at the Broad Institute and the Hebrew University of Jerusalem, represents a novel method for the efficient and robust de novo reconstruction of transcriptomes from RNA-seq data.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Trinity combines three independent software modules: Inchworm, Chrysalis, and Butterfly, applied sequentially to process large volumes of RNA-seq reads. Trinity partitions the sequence data into many individual de Bruijn graphs, each representing the transcriptional complexity at a given gene or locus, and then processes each graph independently to extract full-length splicing isoforms and to tease apart transcripts derived from paralogous genes.&lt;br /&gt;
More information about our installation can be found here [[TRINITY]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;VELVET&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Velvet is a set of algorithms for &#039;&#039;de novo&#039;&#039; short read assembly using de Bruijn graphs. It was developed at the &lt;br /&gt;
European Bioinformatics Institute, Cambridge, UK.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
More information about our installation can be found here [[VELVET]]&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Computational Genomics, Proteomics, Microbiomics, Genetics ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;AUGUSTUS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
AUGUSTUS is a program that predicts genes in eukaryotic genomic sequences. AUGUSTUS is gene-finding software based on Hidden Markov Models (HMMs), &lt;br /&gt;
described in papers by Stanke and Waack (2003), Stanke et al. (2006), Stanke et al. (2006b), and Stanke et al. (2008).  The local version of the program is installed on &lt;br /&gt;
Penzias. More information can be found here: [[AUGUSTUS]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;CONSED&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CONSED is a DNA sequence analysis finishing tool that provides sequence viewing, editing, alignment, and&lt;br /&gt;
assembly capabilities from an X Window System graphical user interface (GUI).  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It makes extensive use of other non-graphical&lt;br /&gt;
and underlying sequence analysis tools including PHRED, PHRAP, and CROSSMATCH that may also be used separately&lt;br /&gt;
and are described elsewhere in this document.  It also includes a viewer called BAMVIEW.  The CONSED tool chain is&lt;br /&gt;
developed and maintained at the University of Washington and is described&lt;br /&gt;
more completely here [http://bozeman.mbt.washington.edu/consed/consed.html]&lt;br /&gt;
CONSED is provided at the CUNY HPC Center under an academic license that allows use, but not the copying or&lt;br /&gt;
outbound transfer of any of the executables or files distributed under this academic license.  The license is not &lt;br /&gt;
transferable in any way and users wishing to run the application at their own site must acquire a license directly&lt;br /&gt;
from the authors.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center supports CONSED version 23.0 for interactive use on KARLE.  CONSED 23.0 and the tool&lt;br /&gt;
chain described above are also installed on ANDY to allow for the batch use of the underlying support tools mentioned above&lt;br /&gt;
and described in detail below.  In general, running GUI-based applications on ANDY&#039;s login node is discouraged.  There&lt;br /&gt;
should be little need to do this: KARLE is on the periphery of the CUNY HPC network, making login there direct, and&lt;br /&gt;
KARLE shares its HOME directory file system with ANDY, so files created on either system are immediately available on&lt;br /&gt;
the other.&lt;br /&gt;
&lt;br /&gt;
Rather than rewrite portions of the CONSED manual here, users are directed to the manual&#039;s &amp;quot;Quick Tour&amp;quot; section&lt;br /&gt;
here [http://bozeman.mbt.washington.edu/consed/distributions/README.23.0.txt] and asked to walk through some&lt;br /&gt;
of the exercises after logging into KARLE.  If problems or questions come up, please post them to &amp;quot;hpchelp@csi.cuny.edu&amp;quot;.&lt;br /&gt;
The CONSED 23.0 distribution is installed on KARLE in the following directory:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/share/apps/consed/default&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
All the files in the distribution can be found there.&lt;br /&gt;
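To use the tools interactively on KARLE, one reasonable approach (assuming the executables live under a bin subdirectory of the path above, which should be verified against the actual installation) is:&lt;br /&gt;

```shell
# Assumption: executables are under a bin/ subdirectory of the
# distribution path given above; verify with "ls /share/apps/consed/default".
export PATH=/share/apps/consed/default/bin:$PATH
consed &   # launches the CONSED GUI (requires an X connection to KARLE)
```
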
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ExaML&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ExaML (Exascale Maximum Likelihood) is a code for phylogenetic inference using MPI. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The code is installed only on Penzias and implements the popular RAxML search algorithm for maximum likelihood based inference of phylogenetic trees. &lt;br /&gt;
&lt;br /&gt;
It uses a radically new MPI parallelization approach that yields improved parallel efficiency, in particular on partitioned multi-gene or whole-genome datasets.&lt;br /&gt;
&lt;br /&gt;
When using ExaML please cite the following paper:&lt;br /&gt;
&lt;br /&gt;
Alexey M. Kozlov, Andre J. Aberer, Alexandros Stamatakis: &amp;quot;ExaML Version 3: A Tool for Phylogenomic Analyses on Supercomputers.&amp;quot; Bioinformatics (2015) 31 (15): 2577-2579.&lt;br /&gt;
&lt;br /&gt;
It is up to 4 times faster than RAxML-Light [1].&lt;br /&gt;
&lt;br /&gt;
Like RAxML-Light, ExaML also implements checkpointing, SSE3 and AVX vectorization, and memory-saving techniques.&lt;br /&gt;
&lt;br /&gt;
[1] A. Stamatakis, A.J. Aberer, C. Goll, S.A. Smith, S.A. Berger, F. Izquierdo-Carrasco: &amp;quot;RAxML-Light: A Tool for computing TeraByte Phylogenies&amp;quot;, Bioinformatics 2012; doi: 10.1093/bioinformatics/bts309.&lt;br /&gt;
&lt;br /&gt;
The run script for a parallel job is analogous to the one for running RAxML on Penzias and Andy.&lt;br /&gt;
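A minimal sketch of such a script, assuming the installed binary is named examl (verify the binary name and options against the installed module and the ExaML documentation):&lt;br /&gt;

```shell
#!/bin/bash
#SBATCH --partition production
#SBATCH --job-name=examl_test
#SBATCH --nodes=1
#SBATCH --ntasks=4

# You must explicitly change to the working directory in SLURM
cd $SLURM_SUBMIT_DIR

# Binary name and file names are assumptions. ExaML reads a binary
# alignment (produced by its parse helper) and a starting tree.
mpirun -np 4 examl -s alignment.binary -t starting.tree -m GAMMA -n run1
```
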
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ExaBayes&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ExaBayes is a software package for Bayesian tree inference. It is particularly suitable for large-scale analyses on computer clusters. It is installed on the Penzias server at the CUNY HPC Center. &lt;br /&gt;
The installed package is the MPI-parallel version. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Availability:&#039;&#039;&#039; PENZIAS&lt;br /&gt;
&#039;&#039;&#039;Module file:&#039;&#039;&#039; exabayes&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Citation&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
Fredrik Ronquist, Maxim Teslenko, Paul van der Mark, Daniel L Ayres, Aaron Darling, Sebastian Höhna, Bret Larget, Liang Liu, Marc a Suchard, and John P Huelsenbeck. MrBayes 3.2: efficient Bayesian phylogenetic inference and model choice across a large model space. Systematic biology, 61(3):539--42, May 2012.&lt;br /&gt;
&lt;br /&gt;
Alexei J Drummond, Marc a Suchard, Dong Xie, and Andrew Rambaut. Bayesian phylogenetics with BEAUti and the BEAST 1.7. Molecular biology and evolution, 29(8):1969--73, August 2012. &lt;br /&gt;
&lt;br /&gt;
Clemens Lakner, Paul van der Mark, John P Huelsenbeck, Bret Larget, and Fredrik Ronquist. Efficiency of Markov chain Monte Carlo tree proposals in Bayesian phylogenetics. Systematic biology, 57(1):86--103, February 2008. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Use:&#039;&#039;&#039; An example SLURM script to run ExaBayes on PENZIAS is given below&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name=&amp;lt;name_of_job&amp;gt;&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=2&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to the working directory in SLURM&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
mpirun -np 2 exabayes &amp;lt;input_file&amp;gt; &amp;gt; output_file&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
More information about application along with sample workflows are available on ExaBayes web site:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://sco.h-its.org/exelixis/web/software/exabayes/manual/index.html#sec-11&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GENOMEPOP2&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GenomePop2 is a newer and specialized version of the older program GenomePop. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GenomePop2 is designed to manage SNPs under more flexible and useful settings that are controlled by the user.  &lt;br /&gt;
If you need models with more than 2 alleles you should use the older GenomePop version of the program.  &lt;br /&gt;
&lt;br /&gt;
GenomePop2 allows the forward simulation of sequences of biallelic positions. As in the previous version, a number of evolutionary&lt;br /&gt;
and demographic settings are allowed. Several populations under any migration model can be implemented. Each population consists&lt;br /&gt;
of a number N of individuals.  Each individual is represented by one (haploids) or two (diploids) chromosomes with constant or variable&lt;br /&gt;
(hotspots) recombination between binary sites. The fitness model is multiplicative with each derived allele having a multiplicate effect&lt;br /&gt;
of (1-s * h-E) onto the global fitness value. By default E=0 and h=0.5 in diploids, but 1 in homozygotes or in haploids. Selective nucleotide&lt;br /&gt;
sites undergoing directional selection (positive or negative) in different populations can be defined. In addition, bottlenecks and/or&lt;br /&gt;
population expansion scenarios can be settled by the user during a desired number of generations. Several runs can be executed and&lt;br /&gt;
a sample of user-defined size is obtained for each run and population.  For more detail on how to use GenomePop2, please visit the&lt;br /&gt;
web site here [http://webs.uvigo.es/acraaj/GenomePop2.htm]. More information about our installation can be found here [[GENOMEPOP2]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;HUMAnN2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
HUMAnN is a pipeline for efficiently and accurately profiling the presence/absence and abundance of microbial pathways in a community from metagenomic or metatranscriptomic sequencing data (typically millions of short DNA/RNA reads). HUMAnN2 is the next generation of HUMAnN (HMP Unified Metabolic Analysis Network). Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/humann2&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;IMa2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The IMa2 application performs ‘Isolation with Migration’ calculations using Bayesian inference and Markov &lt;br /&gt;
chain Monte Carlo methods. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The major conceptual addition that distinguishes IMa2 from the&lt;br /&gt;
original IMa program is that it can handle data from multiple populations. This requires that the user &lt;br /&gt;
specify a phylogenetic tree. Importantly, the tree must be rooted, and the sequence in time of internal&lt;br /&gt;
nodes must be known and specified. More information on the IMa2 and IMa can be found in the user&lt;br /&gt;
manual here [http://lifesci.rutgers.edu/%7Eheylab/ProgramsandData/Programs/IMa2/Using_IMa2_8_24_2011.pdf].&lt;br /&gt;
Information about our installation can be found here [[IMA2]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;I-TASSER&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
I-TASSER is a platform for protein structure and function predictions. 3D models are built based on multiple-threading alignments by LOMETS and iterative template fragment assembly simulations; function insights are derived by matching the 3D models with the BioLiP protein function database. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/itasser&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;LAMARC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
LAMARC is a program which estimates population-genetic parameters such as population size, population growth rate,&lt;br /&gt;
recombination rate, and migration rates.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It approximates a summation over all possible genealogies that could explain&lt;br /&gt;
the observed sample, which may be sequence, SNP, microsatellite, or electrophoretic data.  LAMARC and its sister program&lt;br /&gt;
MIGRATE are successor programs to the older programs Coalesce, Fluctuate, and Recombine, which are no longer being&lt;br /&gt;
supported.  These programs are memory-intensive, but can run effectively on workstations. They are supported on a variety&lt;br /&gt;
of operating systems.  For more detail on LAMARC please visit the website here [http://evolution.genetics.washington.edu/lamarc/index.html],&lt;br /&gt;
read this paper [http://evolution.genetics.washington.edu/lamarc/download/bioinformatics2006-lamarc2.0.pdf], and look&lt;br /&gt;
at the documentation here [http://evolution.genetics.washington.edu/lamarc/documentation/index.html]. More information&lt;br /&gt;
about our installation can be found here [[LAMARC]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;QIIME&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
QIIME (pronounced &amp;quot;chime&amp;quot;) stands for Quantitative Insights Into Microbial Ecology. QIIME is a pipeline application that uses numerous third-party applications.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
QIIME takes users from their raw sequencing output through initial analyses such as OTU picking, taxonomic assignment, and construction of phylogenetic trees from representative sequences of OTUs, and through downstream statistical analysis, visualization, and production of publication-quality graphics.&lt;br /&gt;
More information about our installation can be found here [[QIIME]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;USEARCH&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
USEARCH is a unique sequence analysis tool with thousands of users world-wide.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
USEARCH offers search and clustering algorithms that are often orders of magnitude faster than BLAST. &lt;br /&gt;
More information about our installation can be found here [[USEARCH]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;VELVET&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
Velvet is a set of algorithms for &#039;&#039;de novo&#039;&#039; short read assembly using de Bruijn graphs. It was developed at the European Bioinformatics Institute, Cambridge, UK. More information about our installation can be found here [[VELVET]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;VSEARCH&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
VSEARCH is an open-source alternative to USEARCH.  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
VSEARCH stands for vectorized search, as the tool takes advantage of parallelism in the form of SIMD vectorization as well as multiple threads to perform accurate alignments at high speed. VSEARCH uses an optimal global aligner (full dynamic programming Needleman-Wunsch), in contrast to USEARCH which by default uses a heuristic seed and extend aligner. This usually results in more accurate alignments and overall improved sensitivity (recall) with VSEARCH, especially for alignments with gaps. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Additional details on VSEARCH can be found at: [https://github.com/torognes/vsearch this link]&lt;br /&gt;
&lt;br /&gt;
VSEARCH is installed on the Penzias HPC cluster. To start using VSEARCH, load the corresponding module first:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load vsearch  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
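As a brief illustration (the file names are hypothetical), a global search of query sequences against a reference database at 97% identity might look like:&lt;br /&gt;

```shell
# Illustrative VSEARCH global search; query.fasta and ref.fasta are
# hypothetical file names.
vsearch --usearch_global query.fasta --db ref.fasta --id 0.97 \
        --alnout results.aln --threads 4
```
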
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Math, Engineering, Computer Science == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ADCIRC&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ADCIRC is a system of programs for solving time-dependent, free-surface, circulation and transport problems in two and three dimensions.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;  These programs utilize the finite element method in space allowing the use of highly flexible, unstructured grids. The ADCIRC distribution includes and integrates the METIS tool for unstructured grid generation. In addition, ADCIRC includes a distribution of SWAN to which it can be coupled to add a shore wave simulation model. Typical ADCIRC applications have included: (i) modeling tides and wind driven circulation, (ii) analysis of hurricane storm surge and flooding, (iii) dredging feasibility and material disposal studies, (iv) larval transport studies, (v) near shore marine operations. For more detail on using ADCIRC, please visit the ADCIRC website here [http://adcirc.org/index.html] and read the ADCIRC manual [http://adcirc.org/documentv49/ADCIRC_title_page.html]. Details on using SWAN with ADCIRC can be found here [http://www.caseydietrich.com/swanadcirc] and at the SWAN web site [http://swanmodel.sourceforge.net]. More information about use and set-up can be found here [[ADCIRC]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;FDPPDIV&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
FDPPDiv is a program for estimating divergence times on a fixed, rooted tree topology. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
FDPPDiv offers two alternative approaches to divergence time estimation. &lt;br /&gt;
The DPPDiv part refers to the Dirichlet Process Prior (DPP) model for divergence &lt;br /&gt;
time estimation, and the F prefix (for Fossil) refers to the new Fossil Birth-Death approach. &lt;br /&gt;
More information about our installation can be found here [[FDPPDIV]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GAUSS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GAUSS is an easy-to-use data analysis, mathematical, and statistical environment based on the powerful, fast, and efficient GAUSS Matrix Programming Language.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GAUSS is used to solve real-world data analysis problems of exceptionally large &lt;br /&gt;
scale. GAUSS is currently available on ANDY. At the CUNY HPC Center&lt;br /&gt;
GAUSS is typically run in serial mode. (Note:  GAUSS should not be confused with the&lt;br /&gt;
computational chemistry application Gaussian.) More information about our installation can &lt;br /&gt;
be found here [[GAUSS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Hapsembler&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Hapsembler is a haplotype-specific genome assembly toolkit that is designed for genomes that are rich in SNPs and other types of polymorphism. Hapsembler can be used to assemble reads from a variety of platforms including Illumina and Roche/454.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;  Hapsembler is currently installed on the Appel system. To access the Hapsembler binaries, load the hapsembler module with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load hapsembler&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;HOPSPACK&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
HOPSPACK stands for the Hybrid Optimization Parallel Search Package, designed to help users solve a wide range of derivative-free optimization problems.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;The latter can be noisy, non-convex, or non-smooth.  The basic optimization problem addressed is to minimize an objective function f(x) of n unknowns subject to the constraints: A_I x ≥ b_I, A_E x = b_E, c_I(x) ≥ 0, c_E(x) = 0, and l ≤ x ≤ u.&lt;br /&gt;
The first two constraints specify linear inequalities and equalities with coefficient matrices A_I and A_E. The next two constraints describe nonlinear inequalities and equalities captured in the functions c_I(x) and c_E(x). The final constraints denote lower and upper bounds on the variables. HOPSPACK allows variables to be continuous or integer-valued and has provisions for multi-objective optimization problems. In general, the functions f(x), c_I(x), and c_E(x) can be noisy and nonsmooth, although most algorithms perform best on deterministic functions with continuous derivatives.&lt;br /&gt;
&lt;br /&gt;
Users can design and implement their own solvers, either by writing new code or by building on the existing solvers already in the framework. Because all solvers (called citizens) are members of the same global class, they can share assigned resources.   &lt;br /&gt;
The main features of the package are:&lt;br /&gt;
&lt;br /&gt;
-	Only function values are required for the optimization.&lt;br /&gt;
-	The user must provide a separate program that can evaluate the objective and nonlinear constraint functions at a given point. &lt;br /&gt;
-	A robust implementation of the Generating Set Search (GSS) solver is supplied, including the capability to handle linear constraints. &lt;br /&gt;
-	Multiple solvers can run simultaneously and are easily configured to share information.&lt;br /&gt;
-	Solvers may share a cache of computed function and constraint evaluations to eliminate duplicate work.&lt;br /&gt;
-	Solvers can initiate and control sub-problems.&lt;br /&gt;
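As a purely illustrative sketch (this is not HOPSPACK&#039;s actual file-based evaluator protocol, which is defined in the HOPSPACK manual), the user-supplied evaluation program conceptually maps a point x to an objective value and constraint values:&lt;br /&gt;

```python
# Hypothetical stand-in for a user-supplied evaluation program.
# HOPSPACK's real interface exchanges request/response files; here we
# only illustrate the mapping from a point x to f(x) and c_I(x).

def evaluate(x):
    f = sum(v * v for v in x)            # objective: simple sum of squares
    c = [1.0 - sum(abs(v) for v in x)]   # one inequality constraint, required to be nonnegative
    return f, c

f, c = evaluate([0.5, -0.25])
print(f, c)
```

The function names and the particular objective are invented for illustration only; a real evaluator follows the I/O conventions in the HOPSPACK manual.&lt;br /&gt;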
Continuation -&amp;gt; [[HOPSPACK]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;LS-DYNA&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From its early development in the 1970s, LS-DYNA has evolved into a general purpose material&lt;br /&gt;
stress, collision, and crash analysis program with many built-in material and structural element&lt;br /&gt;
models. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In recent years, the code has also been adapted for both OpenMP and MPI parallel execution&lt;br /&gt;
on a variety of platforms.  The most recent version, LS-DYNA 7.1.2, is installed on &lt;br /&gt;
ANDY at the CUNY HPC Center under an academic license held by the City College of New York.&lt;br /&gt;
Use of this license for work that is commercial in any way is prohibited.&lt;br /&gt;
&lt;br /&gt;
Details on LS-DYNA&#039;s use, input deck construction, and execution options can be found in the LS-DYNA&lt;br /&gt;
manual here [http://ftp.lstc.com/user/manuals/ls-dyna_971_manual_k_rev1.pdf]. All files related&lt;br /&gt;
to the HPC Center installation of version 971 (executables and example inputs) are located in:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
/share/apps/lsdyna/default/[bin,examples]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[LSDYNA]].&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Network Simulator-2 (NS2)&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NS2 is a discrete event simulator targeted at networking research. NS2 provides&lt;br /&gt;
substantial support for simulation of TCP, routing, and multicast protocols over&lt;br /&gt;
wired and wireless (local and satellite) networks.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It is installed on BOB at the CUNY HPC Center. For more detailed information, look [http://www.isi.edu/nsnam/ns/ here].&lt;br /&gt;
More information about our installation can be found here [[NS2]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;OpenFOAM&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenFOAM is first and foremost a library that users may incorporate into their own code(s). OpenFOAM is installed on PENZIAS.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
More information about our installation can be found here [[OpenFOAM]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;OpenSEES&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenSEES, the Open System for Earthquake Engineering Simulation, is an object-oriented, open source software framework.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It allows users to create both serial and parallel finite element computer applications for simulating the response of structural and geotechnical systems subjected to earthquakes and other hazards. OpenSEES is primarily written in C++ and uses several Fortran and C numerical libraries for linear equation solving, and material and element routines. The software is installed on PENZIAS.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ParGAP&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ParGAP is built on top of the GAP system. The latter is a system for computational discrete algebra, with particular emphasis on Computational Group Theory. GAP provides a programming language, a library of thousands of functions implementing algebraic algorithms written in the GAP language, as well as large data libraries of algebraic objects.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The ParGAP (Parallel GAP) package itself provides a way of writing parallel programs using the GAP language. Former names of the package were ParGAP/MPI and GAP/MPI; the word MPI refers to Message Passing Interface, a well-known standard for parallelism. ParGAP is based on the MPI standard, and this distribution includes a subset implementation of MPI, to provide a portable layer with a high level interface to BSD sockets.&lt;br /&gt;
More information about our installation can be found here [[ParGAP]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;SAGE&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Sage can be used to study elementary and advanced, pure and applied mathematics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
This includes a huge range of mathematics, including basic algebra, calculus, elementary to very&lt;br /&gt;
advanced number theory, cryptography, numerical computation, commutative algebra, group&lt;br /&gt;
theory, combinatorics, graph theory, exact linear algebra and much more.&lt;br /&gt;
More information about our installation can be found here [[SAGE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;WRF&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The Weather Research and Forecasting (WRF) model is a specific computer program with dual use for both weather&lt;br /&gt;
forecasting and weather research.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It was created through a partnership that includes the National Oceanic and Atmospheric&lt;br /&gt;
Administration (NOAA), the National Center for Atmospheric Research (NCAR), and more than 150 other organizations&lt;br /&gt;
and universities in the United States and abroad. WRF is the latest numerical model and application to be adopted by NOAA&#039;s&lt;br /&gt;
National Weather Service as well as the U.S. military and private meteorological services. It is also being adopted by&lt;br /&gt;
government and private meteorological services worldwide. More information about our installation can be found here [[WRF]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Economics, Business, Statistics, Analytics ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;R&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
R is a free software environment for statistical computing and graphics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:15px;&amp;quot; &amp;gt;General Notes&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The R language has become a de facto standard among statisticians for the development of statistical software, and is widely used for statistical computing and data analysis. R is available on the following HPCC servers: Karle, Penzias, Appel, and Andy. Karle is the only machine where R can be used without submitting jobs to the SLURM manager; on all other systems, users must submit their R jobs via the SLURM batch scheduler.&lt;br /&gt;
More information about our installation can be found here [[R]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;R-devel&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
R is a language and environment for statistical computing and graphics. R-devel provides both core R userspace and all R development components.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Stata/MP&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Stata is a complete, integrated statistical package that provides tools for data analysis, data management, and graphics. Stata/MP takes advantage of multiprocessor computers. The CUNY HPC Center is licensed to use Stata on up to 8 cores. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Currently Stata/MP is available for users on Karle (karle.csi.cuny.edu). &lt;br /&gt;
More information about our installation can be found here [[STATA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;SAS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
SAS (pronounced &amp;quot;sass&amp;quot;, originally Statistical Analysis System) is an integrated system of software products provided by SAS Institute Inc.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It enables the programmer to perform:&lt;br /&gt;
:* data entry, retrieval, management, and mining&lt;br /&gt;
:* report writing and graphics&lt;br /&gt;
:* statistical analysis&lt;br /&gt;
:* business planning, forecasting, and decision support&lt;br /&gt;
:* operations research and project management&lt;br /&gt;
:* quality improvement&lt;br /&gt;
:* applications development&lt;br /&gt;
:* data warehousing (extract, transform, load)&lt;br /&gt;
:* platform independent and remote computing&lt;br /&gt;
More information about our installation can be found here [[SAS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== General Development Systems ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Coming soon.&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== Tools, Libraries, Compilers ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;CGAL&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The Computational Geometry Algorithms Library (CGAL) offers data structures and algorithms.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt; &lt;br /&gt;
Examples of these are triangulations (2D constrained triangulations, and Delaunay triangulations and periodic triangulations in &lt;br /&gt;
2D and 3D), Voronoi diagrams (for 2D and 3D points, 2D additively weighted Voronoi diagrams, and segment Voronoi diagrams), polygons &lt;br /&gt;
(Boolean operations, offsets, straight skeleton), polyhedra (Boolean operations), arrangements of curves and their applications &lt;br /&gt;
(2D and 3D envelopes, Minkowski sums), mesh generation (2D Delaunay mesh generation and 3D surface and volume mesh &lt;br /&gt;
generation, skin surfaces), geometry processing (surface mesh simplification, subdivision and parameterization, as well as &lt;br /&gt;
estimation of local differential properties, and approximation of ridges and umbilics), alpha shapes, convex hull &lt;br /&gt;
algorithms (in 2D, 3D and dD), search structures (kd trees for nearest neighbor search, and range and segment trees), &lt;br /&gt;
interpolation (natural neighbor interpolation and placement of streamlines), shape analysis, fitting, and distances &lt;br /&gt;
(smallest enclosing sphere of points or spheres, smallest enclosing ellipsoid of points, principal component analysis), and &lt;br /&gt;
kinetic data structures.&lt;br /&gt;
&lt;br /&gt;
The library is installed on PENZIAS.&lt;br /&gt;
&lt;br /&gt;
More information can be found here http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/CGAL. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GMP&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
GMP is a library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and &lt;br /&gt;
floating-point numbers. There is no practical limit to the precision except the ones implied by the &lt;br /&gt;
available memory in the machine GMP runs on. GMP has a rich set of functions, and the functions have a &lt;br /&gt;
regular interface. The library is installed on PENZIAS.&lt;br /&gt;
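GMP itself is a C library with C and C++ interfaces; as a loose conceptual analogy only (not GMP&#039;s API), Python&#039;s built-in integers are likewise arbitrary precision, limited only by available memory:&lt;br /&gt;

```python
# Conceptual analogy: arbitrary-precision integer arithmetic.
# Python ints, like GMP's integer type, grow as needed with no fixed word size.
n = 2 ** 300          # an exact 91-digit integer
factorial_20 = 1
for k in range(1, 21):
    factorial_20 = factorial_20 * k
print(len(str(n)), factorial_20)
```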
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Gnuplot&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Gnuplot is a portable command-line driven graphing utility. It is installed on the following systems:&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
:* Karle under /usr/bin/gnuplot&lt;br /&gt;
:* Andy under /share/apps/gnuplot/default/bin/gnuplot&lt;br /&gt;
&lt;br /&gt;
Extensive documentation of gnuplot is available at the [http://www.gnuplot.info/  gnuplot&#039;s homepage].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;JULIA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
Julia is a high-level, high-performance dynamic programming language for technical computing, with syntax that is familiar to users of other technical computing environments. Julia is installed on Penzias.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MAGMA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
MAGMA is a library similar to LAPACK but for hybrid architectures. MAGMA provides implementations for CUDA, Intel Xeon Phi, and OpenCL. &lt;br /&gt;
On CUNY HPCC systems, MAGMA is installed in its CUDA variant only on Penzias.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MATHEMATICA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
“Mathematica” is a fully integrated technical computing system that combines fast, high-precision numerical and symbolic computation with data visualization and programming capabilities. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Mathematica is currently installed on the CUNY HPC Center&#039;s ANDY cluster (andy.csi.cuny.edu) and the KARLE standalone server (karle.csi.cuny.edu). The basics of running Mathematica on CUNY HPC systems are presented here.  Additional information on how to use Mathematica can be found at http://www.wolfram.com/learningcenter/&lt;br /&gt;
&lt;br /&gt;
More information is available in this wiki, find it here [[MATHEMATICA]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MATLAB&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The MATLAB high-performance language for technical computing&lt;br /&gt;
integrates computation, visualization, and programming in an&lt;br /&gt;
easy-to-use environment where problems and solutions are expressed in&lt;br /&gt;
familiar mathematical notation.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Typical uses include:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Math and computation&lt;br /&gt;
&lt;br /&gt;
Algorithm development&lt;br /&gt;
&lt;br /&gt;
Data acquisition&lt;br /&gt;
&lt;br /&gt;
Modeling, simulation, and prototyping&lt;br /&gt;
&lt;br /&gt;
Data analysis, exploration, and visualization&lt;br /&gt;
&lt;br /&gt;
Scientific and engineering graphics&lt;br /&gt;
&lt;br /&gt;
Application development, including graphical user interface building&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[MATLAB]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MET (Model Evaluation Tools)&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MET was developed by the National Center for Atmospheric Research (NCAR) Developmental Testbed Center (DTC) through the generous support of the U.S. Air Force Weather Agency (AFWA) and the National Oceanic and Atmospheric Administration (NOAA).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;MET is designed to be a highly-configurable, state-of-the-art suite of verification tools. It was developed using output from the Weather Research and Forecasting (WRF) modeling system but may be applied to the output of other modeling systems as well.&lt;br /&gt;
&lt;br /&gt;
MET provides a variety of verification techniques, including:&lt;br /&gt;
&lt;br /&gt;
*Standard verification scores comparing gridded model data to point-based observations&lt;br /&gt;
*Standard verification scores comparing gridded model data to gridded observations&lt;br /&gt;
*Spatial verification methods comparing gridded model data to gridded observations using neighborhood, object-based, and intensity-scale decomposition approaches&lt;br /&gt;
*Probabilistic verification methods comparing gridded model data to point-based or gridded observations&lt;br /&gt;
&lt;br /&gt;
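As a minimal sketch of what the simplest of these scores compute (illustrative Python, not MET code; MET itself handles the matching of gridded model data to observations and much more), bias and root-mean-square error over matched forecast/observation pairs are:&lt;br /&gt;

```python
# Illustrative computation of two standard verification scores over
# matched forecast/observation pairs: mean error (bias) and RMSE.
import math

def bias(forecast, observed):
    return sum(f - o for f, o in zip(forecast, observed)) / len(forecast)

def rmse(forecast, observed):
    mse = sum((f - o) ** 2 for f, o in zip(forecast, observed)) / len(forecast)
    return math.sqrt(mse)

fcst = [2.0, 3.0, 5.0]
obs = [1.0, 3.0, 6.0]
print(bias(fcst, obs), rmse(fcst, obs))
```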
More information about use and set-up can be found here [[MET]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Migrate&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Migrate estimates population parameters, effective population sizes and migration rates&lt;br /&gt;
of n populations, using genetic data.  It uses a coalescent theory approach taking into&lt;br /&gt;
account the history of mutations and the uncertainty of the genealogy.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;The estimates of the parameter values are achieved by either a maximum likelihood (ML) approach or Bayesian&lt;br /&gt;
inference (BI).  Migrate&#039;s output is presented in a text file and in a PDF file. The PDF file&lt;br /&gt;
will eventually contain all possible analyses, including histograms of posterior distributions.&lt;br /&gt;
More information about our installation can be found here [[MIGRATE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Python&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Python is a programming language that lets you work more quickly and integrate your systems more effectively. You can learn to use Python and see almost immediate gains in productivity and lower maintenance costs. [http://www.python.org/]&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
There are two supported versions installed on the Andy system: &lt;br /&gt;
&lt;br /&gt;
* Python 3.1.3 located under /share/apps/python/3.1.3/bin&lt;br /&gt;
* Python 2.7.3 located under /share/apps/epd/7.3-2/bin&lt;br /&gt;
&lt;br /&gt;
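With two versions installed, a quick way to confirm which interpreter a given job actually picked up is:&lt;br /&gt;

```python
# Print the path and the version of the interpreter currently running.
import sys
print(sys.executable)
print(".".join(str(part) for part in sys.version_info[:3]))
```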
More information about our installation can be found here [[PYTHON]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;SAMTOOLS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
SAMTOOLS provides various utilities for manipulating alignments in the SAM format, including sorting,&lt;br /&gt;
merging, indexing and generating alignments in a per-position format.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
SAM (Sequence Alignment/Map) is a generic format for storing large nucleotide sequence alignments.  SAM is a compact format&lt;br /&gt;
that aims to:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Is flexible enough to store all the alignment information generated by various alignment programs;&lt;br /&gt;
&lt;br /&gt;
Is simple enough to be easily generated by alignment programs or converted from existing formats;&lt;br /&gt;
&lt;br /&gt;
Allows most operations on the alignment to work without loading the whole alignment into memory;&lt;br /&gt;
&lt;br /&gt;
Allows the file to be indexed by genomic position to efficiently retrieve all reads aligning to a locus.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
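To make the format concrete, a SAM alignment line is tab-separated with 11 mandatory fields; a minimal illustrative parser (samtools itself does far more, and is written in C) might look like:&lt;br /&gt;

```python
# Minimal sketch: split one SAM alignment line into its 11 mandatory fields.
SAM_FIELDS = ["QNAME", "FLAG", "RNAME", "POS", "MAPQ",
              "CIGAR", "RNEXT", "PNEXT", "TLEN", "SEQ", "QUAL"]

def parse_sam_line(line):
    parts = line.rstrip("\n").split("\t")
    record = dict(zip(SAM_FIELDS, parts[:11]))
    record["FLAG"] = int(record["FLAG"])   # bitwise flags
    record["POS"] = int(record["POS"])     # 1-based leftmost position
    return record

rec = parse_sam_line("read1\t0\tchr1\t100\t60\t8M\t*\t0\t0\tACGTACGT\tFFFFFFFF")
print(rec["RNAME"], rec["POS"])
```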
More information about our installation can be found here [[SAMTOOLS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Thrust Library (CUDA)&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Thrust is a C++ template library for CUDA based on the Standard Template Library (STL). Thrust allows you&lt;br /&gt;
to implement high performance parallel applications with minimal programming effort through a high-level&lt;br /&gt;
interface that is fully interoperable with CUDA C.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Thrust is now integrated into the default&lt;br /&gt;
CUDA distribution. The HPC Center currently runs CUDA as the default on PENZIAS, which includes the &lt;br /&gt;
Thrust library. More information about our installation can be found here [[THRUST]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Xmgrace&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Grace is a WYSIWYG 2D plotting tool for the X Window System and M*tif. Xmgrace is developed at the Plasma Laboratory, Weizmann Institute of Science. More information about its capabilities can be found at the web page http://plasma-gate.weizmann.ac.il/Grace/&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Grace is installed on Karle. To use it from the command line, log in to Karle as usual and start Grace by typing &amp;quot;xmgrace&amp;quot; followed by return, or alternatively use the full path: &amp;quot;/share/apps/xmgrace/default/grace/bin/xmgrace&amp;quot;&lt;br /&gt;
To use Grace in GUI mode, log in to Karle with the -X option (see [http://wiki.csi.cuny.edu/cunyhpc/index.php/Main_Page#X11_Forwarding_or_Tunneling this article] &lt;br /&gt;
for details) and start Xmgrace as described above.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Alphabetical List ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== A == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ADCIRC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ADCIRC is a system of programs for solving time-dependent, free-surface circulation and transport problems in two and three dimensions.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;  These programs utilize the finite element method in space allowing the use of highly flexible, unstructured grids. The ADCIRC distribution includes and integrates the METIS tool for unstructured grid generation. In addition, ADCIRC includes a distribution of SWAN to which it can be coupled to add a shore wave simulation model. Typical ADCIRC applications have included: (i) modeling tides and wind driven circulation, (ii) analysis of hurricane storm surge and flooding, (iii) dredging feasibility and material disposal studies, (iv) larval transport studies, (v) near shore marine operations. For more detail on using ADCIRC, please visit the ADCIRC website here [http://adcirc.org/index.html] and read the ADCIRC manual [http://adcirc.org/documentv49/ADCIRC_title_page.html]. Details on using SWAN with ADCIRC can be found here [http://www.caseydietrich.com/swanadcirc] and at the SWAN web site [http://swanmodel.sourceforge.net]. More information about use and set-up can be found here [[ADCIRC]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;AMBER (Assisted Model Building with Energy Refinement)&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
Amber is the collective name for a suite of programs for classical bio-molecular simulations. &lt;br /&gt;
The name &amp;quot;Amber&amp;quot; also denotes the family of potentials (force fields) used with Amber &lt;br /&gt;
software. Here we discuss only the simulation packages, not the force fields or the free tools&lt;br /&gt;
available via the AmberTools package. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/amber&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ANVIO&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
Anvio is an analysis and visualization platform for genomics data. Anvio allows various types of workflows to be &lt;br /&gt;
established. [[ANVIO]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;AUGUSTUS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
AUGUSTUS is a program that predicts genes in eukaryotic genomic sequences. It is gene-finding software based on Hidden Markov Models (HMMs), &lt;br /&gt;
described in papers by Stanke and Waack (2003) and Stanke et al. (2006, 2006b, 2008). The local version of the program is installed on &lt;br /&gt;
Penzias. More information can be found here: [[AUGUSTUS]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;AUTODOCK&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
AutoDock is a suite of automated docking tools.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt; It is designed to predict how small molecules, such as substrates or drug candidates, bind to a receptor of known 3D structure.  AutoDock actually consists of two main programs: &#039;&#039;autodock&#039;&#039; itself performs the docking of the ligand to a set of grids describing the target protein; &#039;&#039;autogrid&#039;&#039; pre-calculates these grids. More information about the software may be found at the autodock web-page [http://autodock.scripps.edu/]. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/autodock&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== B == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BAMOVA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
Bamova is a package used for genetic analysis of a wide range of organisms on the basis of &lt;br /&gt;
next-generation sequence data. The software implements Bayesian Analysis of Molecular Variance and &lt;br /&gt;
different likelihood models for three different types of molecular data &lt;br /&gt;
(including two models for high-throughput sequence data). For more detail on BAMOVA, please visit the BAMOVA web site [http://www.uwyo.edu/buerkle/software/bamova] and the manual &lt;br /&gt;
here [http://www.uwyo.edu/buerkle/software/bamova/bamova_manual_1.0.pdf]. Further information can also be found here [[BAMOVA]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BAYESCAN&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BAYESCAN is a population genomics software package.  It identifies outlier loci and is applicable &lt;br /&gt;
to both dominant and codominant data. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;BayeScan aims at identifying candidate loci under natural selection from genetic data, using differences in allele frequencies between populations.  BayeScan is based on the multinomial-Dirichlet model.  One of the scenarios covered consists of an island model in which subpopulation allele frequencies are correlated through a common migrant gene pool from which they differ in varying degrees.  The difference in allele frequency between this common gene pool and each subpopulation is measured by a subpopulation-specific FST coefficient.  Therefore, this formulation can consider realistic ecological scenarios where the effective size and the immigration rate may differ among subpopulations.&lt;br /&gt;
More detailed information on Bayescan can be found at the web site here [http://cmpg.unibe.ch/software/bayescan/index.html] and in the manual here [http://cmpg.unibe.ch/software/bayescan/files/BayeScan2.1_manual.pdf]. More information about our installation can be found here [[BAYESCAN]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BEAST&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BEAST is a powerful and flexible evolutionary analysis package for molecular sequence variation. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The package implements a family of Markov chain Monte Carlo (MCMC) algorithms for Bayesian phylogenetic inference, divergence time dating, coalescent analysis, phylogeography and related molecular evolutionary analyses. It is a cross-platform Java program for Bayesian MCMC analysis of molecular sequences. It is entirely oriented towards rooted, time-measured phylogenies inferred using strict or relaxed molecular clock models. It can be used as a method of reconstructing phylogenies, but is also a framework for testing evolutionary hypotheses without conditioning on a single tree topology.  BEAST uses MCMC to average over tree space, so that each tree is weighted in proportion to its posterior probability. The distribution includes a simple-to-use user interface program called &#039;BEAUti&#039; for setting up standard analyses and a suite of programs for analysing the results. For more detail on BEAST (and BEAUti) please visit the BEAST web site [http://beast.bio.ed.ac.uk/Main_Page]. More information about our installation can be found here [http://wiki.csi.cuny.edu/cunyhpc/index.php/Template:BEAST BEAST].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BEST&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BEST is an application aimed to estimate gene trees and the species tree from multilocus sequences.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The program uses information from multiple gene trees and performs a Bayesian analysis to estimate the &lt;br /&gt;
topology of the species tree, divergence times and population sizes.  &lt;br /&gt;
&lt;br /&gt;
It provides a new approach for estimating mutation-rate-based phylogenetic&lt;br /&gt;
relationships among species.  Its method accounts for deep coalescence,&lt;br /&gt;
but not for other complicating issues such as horizontal transfer or gene duplication. The&lt;br /&gt;
program works in conjunction with the popular Bayesian phylogenetics package, MrBayes&lt;br /&gt;
(Ronquist and Huelsenbeck, Bioinformatics, 2003).  BEST&#039;s parameters are defined using&lt;br /&gt;
the &#039;prset&#039; command from MrBayes.  Details on BEST&#039;s capabilities and options are available&lt;br /&gt;
at the BEST web site here [http://www.stat.osu.edu/~dkp/BEST/introduction]. More information&lt;br /&gt;
about our installation is available here [[BEST]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BOWTIE2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BOWTIE2 is an ultrafast and memory-efficient tool for aligning sequencing reads to long reference sequences. It is particularly good at aligning reads of about 50 up to 100s or 1,000s of characters, and particularly good at aligning to relatively long (e.g. mammalian) genomes.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
BOWTIE2 indexes the genome with an FM Index to keep its memory&lt;br /&gt;
footprint small: for the human genome, its memory footprint is typically around 3.2 GB. BOWTIE2 supports gapped,&lt;br /&gt;
local, and paired-end alignment modes. BOWTIE2 is part of a sequence alignment and analysis tool chain developed&lt;br /&gt;
at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics&lt;br /&gt;
and Computational Biology.  The other tools in this collection, CUFFLINKS, SAMTOOLS, and TOPHAT are also installed at&lt;br /&gt;
the CUNY HPC Center. Additional information can be found at the BOWTIE2 home page here [http://bowtie-bio.sourceforge.net/bowtie2/index.shtml].&lt;br /&gt;
Information about our installation can be found here [[BOWTIE2]].&lt;br /&gt;
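As an illustrative sketch only (not taken from this wiki; the reference and read file names are hypothetical), a typical BOWTIE2 run first builds the FM index from a reference FASTA and then aligns reads against it:

```shell
# Build the FM index once per reference genome.
bowtie2-build reference.fa ref_index

# Align paired-end reads against the index, writing SAM output.
bowtie2 -x ref_index -1 reads_1.fq -2 reads_2.fq -S alignments.sam
```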
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BPP2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BPP2 uses a Bayesian modeling approach to generate the posterior probabilities of species assignments taking into account uncertainties due to unknown gene trees and the ancestral coalescent process. For tractability, it relies on a user-specified guide tree to avoid integrating over all possible species delimitations.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Additional information can be found at the download site here [http://abacus.gene.ucl.ac.uk/software.html]. More information about our installation can be found here [[BPP2]].&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BROWNIE&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
BROWNIE is a program for analyzing rates of continuous character evolution and looking for substantial rate differences in different parts of a tree using likelihood&lt;br /&gt;
ratio tests and Akaike Information Criterion (AIC) statistics. It now also implements many other methods for examining trait evolution and methods for doing species&lt;br /&gt;
delimitation. More information about our installation can be found here [[BROWNIE]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== C == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;CGAL&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The Computational Geometry Algorithms Library (CGAL) offers geometric data structures and algorithms.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt; &lt;br /&gt;
Examples of these are triangulations (2D constrained triangulations, and Delaunay triangulations and periodic triangulations in &lt;br /&gt;
2D and 3D), Voronoi diagrams (for 2D and 3D points, 2D additively weighted Voronoi diagrams, and segment Voronoi diagrams), polygons &lt;br /&gt;
(Boolean operations, offsets, straight skeleton), polyhedra (Boolean operations), arrangements of curves and their applications &lt;br /&gt;
(2D and 3D envelopes, Minkowski sums), mesh generation (2D Delaunay mesh generation and 3D surface and volume mesh &lt;br /&gt;
generation, skin surfaces), geometry processing (surface mesh simplification, subdivision and parameterization, as well as &lt;br /&gt;
estimation of local differential properties, and approximation of ridges and umbilics), alpha shapes, convex hull &lt;br /&gt;
algorithms (in 2D, 3D and dD), search structures (kd trees for nearest neighbor search, and range and segment trees), &lt;br /&gt;
interpolation (natural neighbor interpolation and placement of streamlines), shape analysis, fitting, and distances &lt;br /&gt;
(smallest enclosing sphere of points or spheres, smallest enclosing ellipsoid of points, principal component analysis), and &lt;br /&gt;
kinetic data structures.&lt;br /&gt;
&lt;br /&gt;
The library is installed on PENZIAS.&lt;br /&gt;
&lt;br /&gt;
More information can be found here [http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/CGAL]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;CONSED&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CONSED is a DNA sequence analysis finishing tool that provides sequence viewing, editing, alignment, and&lt;br /&gt;
assembly capabilities from an X Windows graphical user interface (GUI).  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It makes extensive use of other non-graphical&lt;br /&gt;
and underlying sequence analysis tools including PHRED, PHRAP, and CROSSMATCH that may also be used separately&lt;br /&gt;
and are described elsewhere in this document.  It also includes a viewer called BAMVIEW.  The CONSED tool chain is&lt;br /&gt;
developed and maintained at the University of Washington and is described&lt;br /&gt;
more completely here [http://bozeman.mbt.washington.edu/consed/consed.html].&lt;br /&gt;
CONSED is provided at the CUNY HPC Center under an academic license that allows use, but not the copying or&lt;br /&gt;
outbound transfer of any of the executables or files distributed under this academic license.  The license is not &lt;br /&gt;
transferable in any way and users wishing to run the application at their own site must acquire a license directly&lt;br /&gt;
from the authors.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center supports CONSED version 23.0 for interactive use on KARLE.  CONSED 23.0 and the tool&lt;br /&gt;
chain described above are also installed on ANDY to allow for the batch use of the underlying support tools mentioned above&lt;br /&gt;
and described in detail below.  In general, running GUI-based applications on ANDY&#039;s login node is discouraged.  There&lt;br /&gt;
should be little need to do this: KARLE is on the periphery of the CUNY HPC network, making login there direct, and&lt;br /&gt;
KARLE shares its HOME directory file system with ANDY, so files created on either system are immediately available on&lt;br /&gt;
the other.&lt;br /&gt;
&lt;br /&gt;
Rather than rewrite portions of the CONSED manual here, users are directed to the manual&#039;s &amp;quot;Quick Tour&amp;quot; section&lt;br /&gt;
here [http://bozeman.mbt.washington.edu/consed/distributions/README.23.0.txt] and asked to walk through some&lt;br /&gt;
of the exercises after logging into KARLE.  If problems or questions come up, please post them to &amp;quot;hpchelp@csi.cuny.edu&amp;quot;.&lt;br /&gt;
The CONSED 23.0 distribution is installed on KARLE in the following directory:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/share/apps/consed/default&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
All the files in the distribution can be found there.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;CP2K&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CP2K is a program to perform atomistic and molecular simulations of solid state, liquid, molecular, and biological&lt;br /&gt;
systems.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It provides a general framework for different methods such as e.g., density functional theory (DFT) using&lt;br /&gt;
a mixed Gaussian and plane waves approach (GPW) and classical pair and many-body potentials. CP2K provides&lt;br /&gt;
state-of-the-art methods for efficient and accurate atomistic simulations. More information about our installation &lt;br /&gt;
can be found here [[CP2K]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;CUFFLINKS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CUFFLINKS assembles transcripts, estimates their abundances, and tests for differential expression and regulation in&lt;br /&gt;
RNA-Seq samples.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It accepts aligned RNA-Seq reads and assembles the alignments into a parsimonious set of transcripts.&lt;br /&gt;
CUFFLINKS then estimates the relative abundances of these transcripts based on how many reads support each one, taking&lt;br /&gt;
into account biases in library preparation protocols.  CUFFLINKS is part of a sequence alignment and analysis tool chain developed&lt;br /&gt;
at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics&lt;br /&gt;
and Computational Biology.  The other tools in this collection, BOWTIE, SAMTOOLS, and TOPHAT are also installed at&lt;br /&gt;
the CUNY HPC Center.  Additional information can be found at the CUFFLINKS home page here [http://cole-trapnell-lab.github.io/cufflinks/].&lt;br /&gt;
More information about our installation can be found here [[CUFFLINKS]].&lt;br /&gt;
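As a hedged sketch of typical use (not taken from this wiki; file names are hypothetical), CUFFLINKS is usually run directly on the BAM file of aligned reads produced by TOPHAT:

```shell
# Assemble transcripts and estimate abundances from TopHat-aligned reads.
# -o names the output directory; -p sets the number of threads.
cufflinks -o cufflinks_out -p 4 accepted_hits.bam
```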
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== D == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;DL_POLY&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
DL_POLY is a general purpose molecular dynamics simulation package developed at Daresbury Laboratory by W. Smith, T.R. Forester and I.T. Todorov. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Both serial and parallel versions are available. The original package was developed by the Molecular Simulation Group (now part of the Computational Chemistry Group, MSG) at Daresbury Laboratory under the auspices of the Engineering and Physical Sciences Research Council (EPSRC) for the EPSRC&#039;s Collaborative Computational Project for the Computer Simulation of Condensed Phases ( CCP5). Later developments were also supported by the Natural Environment Research Council through the eMinerals project. The package is the property of the Central Laboratory of the Research Councils, UK. More information about our installation and use can be found here [[DL_POLY]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== E == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ExaML&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ExaML stands for Exascale Maximum Likelihood (ExaML) code for phylogenetic inference using MPI. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The code is installed only on Penzias and implements the popular RAxML search algorithm for maximum likelihood based inference of phylogenetic trees. &lt;br /&gt;
&lt;br /&gt;
It uses a radically new MPI parallelization approach that yields improved parallel efficiency, in particular on partitioned multi-gene or whole-genome datasets.&lt;br /&gt;
&lt;br /&gt;
When using ExaML please cite the following paper:&lt;br /&gt;
&lt;br /&gt;
Alexey M. Kozlov, Andre J. Aberer, Alexandros Stamatakis: &amp;quot;ExaML Version 3: A Tool for Phylogenomic Analyses on Supercomputers.&amp;quot; Bioinformatics (2015) 31 (15): 2577-2579.&lt;br /&gt;
&lt;br /&gt;
It is up to 4 times faster than RAxML-Light [1].&lt;br /&gt;
&lt;br /&gt;
Like RAxML-Light, ExaML also implements checkpointing, SSE3 and AVX vectorization, and memory-saving techniques.&lt;br /&gt;
&lt;br /&gt;
[1] A. Stamatakis, A.J. Aberer, C. Goll, S.A. Smith, S.A. Berger, F. Izquierdo-Carrasco: &amp;quot;RAxML-Light: A Tool for computing TeraByte Phylogenies&amp;quot;, Bioinformatics 2012; doi: 10.1093/bioinformatics/bts309.&lt;br /&gt;
&lt;br /&gt;
The run script for a parallel job is analogous to the one for running RAxML on PENZIAS and ANDY.&lt;br /&gt;
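As a sketch only (the module name, file names, and core count below are assumptions, not taken from this wiki), an ExaML run typically converts the alignment to ExaML's binary format and then launches the MPI tree search:

```shell
#!/bin/bash
#SBATCH --job-name=examl_run
#SBATCH --ntasks=8

module load examl   # module name is an assumption

# Convert a PHYLIP alignment to ExaML's binary format, then run the
# MPI search from a given starting tree.
parse-examl -s alignment.phy -m DNA -n aln
mpirun -np 8 examl -s aln.binary -t starting.tree -m GAMMA -n run1
```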
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ExaBayes&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ExaBayes is a software package for Bayesian tree inference. It is particularly suitable for large-scale analyses on computer clusters. It is installed on the PENZIAS server at the CUNY HPC Center. &lt;br /&gt;
The installed package is the MPI-parallel version. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Availability:&#039;&#039;&#039; PENZIAS&lt;br /&gt;
&#039;&#039;&#039;Module file:&#039;&#039;&#039; exabayes&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Citation&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
Fredrik Ronquist, Maxim Teslenko, Paul van der Mark, Daniel L Ayres, Aaron Darling, Sebastian Höhna, Bret Larget, Liang Liu, Marc a Suchard, and John P Huelsenbeck. MrBayes 3.2: efficient Bayesian phylogenetic inference and model choice across a large model space. Systematic biology, 61(3):539--42, May 2012.&lt;br /&gt;
&lt;br /&gt;
Alexei J Drummond, Marc a Suchard, Dong Xie, and Andrew Rambaut. Bayesian phylogenetics with BEAUti and the BEAST 1.7. Molecular biology and evolution, 29(8):1969--73, August 2012. &lt;br /&gt;
&lt;br /&gt;
Clemens Lakner, Paul van der Mark, John P Huelsenbeck, Bret Larget, and Fredrik Ronquist. Efficiency of Markov chain Monte Carlo tree proposals in Bayesian phylogenetics. Systematic biology, 57(1):86--103, February 2008. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Use:&#039;&#039;&#039; An example SLURM script to run ExaBayes on PENZIAS is given below&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition=production&lt;br /&gt;
#SBATCH --job-name=&amp;lt;name_of_job&amp;gt;&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=2&lt;br /&gt;
#SBATCH --export=ALL&lt;br /&gt;
&lt;br /&gt;
# Change to the directory the job was submitted from&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
mpirun -np 2 exabayes &amp;lt;input_file&amp;gt; &amp;gt; output_file&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
More information about the application, along with sample workflows, is available on the ExaBayes web site:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://sco.h-its.org/exelixis/web/software/exabayes/manual/index.html#sec-11&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== F == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;FDPPDIV&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
FDPPDiv is a program for estimating divergence times on a fixed, rooted tree topology. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
FDPPDiv offers two alternative approaches to divergence time estimation. &lt;br /&gt;
The DPPDiv part refers to the Dirichlet Process Prior (DPP) model for divergence &lt;br /&gt;
time estimation, and the F prefix (for Fossil) refers to the new Fossil Birth-Death approach. &lt;br /&gt;
More information about our installation can be found here [[FDPPDIV]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== G == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GAMESS-US&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GAMESS is a program for ab initio molecular quantum chemistry.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Briefly, GAMESS can compute SCF wavefunctions including RHF, ROHF, UHF, GVB, and MCSCF. Correlation corrections to these SCF wavefunctions include Configuration Interaction, second-order perturbation theory, and Coupled-Cluster approaches, as well as the Density Functional Theory approximation. Excited states can be computed by CI, EOM, or TD-DFT procedures. Nuclear gradients are available for automatic geometry optimization, transition state searches, or reaction path following. Computation of the energy hessian permits prediction of vibrational frequencies, with IR or Raman intensities. Solvent effects may be modeled by the discrete Effective Fragment potentials, or continuum models such as the Polarizable Continuum Model. Numerous relativistic computations are available, including infinite-order two-component scalar corrections, with various spin-orbit coupling options. The Fragment Molecular Orbital method permits many of these sophisticated treatments to be used on very large systems, by dividing the computation into small fragments. Nuclear wavefunctions can also be computed, in VSCF, or with explicit treatment of nuclear orbitals by the NEO code. More information, including code, can be found here [[GAMESS-US]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GARLI&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GARLI is a program that performs phylogenetic inference using the maximum-likelihood criterion.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Several sequence types are supported, including nucleotide, amino acid and codon. Version 2.0 adds support for&lt;br /&gt;
partitioned models and morphology-like data types. It is usable on all operating systems, and is written and&lt;br /&gt;
maintained by Derrick Zwickl at the University of Texas at Austin.  Additional information can be found&lt;br /&gt;
on the GARLI Wiki here [https://www.nescent.org/wg_garli/Main_Page]. More information about our installation &lt;br /&gt;
can be found here [[GARLI]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GAUSS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GAUSS is an easy-to-use data analysis, mathematical and statistical environment based on the powerful, fast and efficient GAUSS Matrix Programming Language.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GAUSS is used to solve real-world problems and data analysis problems of exceptionally large &lt;br /&gt;
scale. GAUSS is currently available on ANDY. At the CUNY HPC Center&lt;br /&gt;
GAUSS is typically run in serial mode. (Note:  GAUSS should not be confused with the&lt;br /&gt;
computational chemistry application Gaussian.) More information about our installation can &lt;br /&gt;
be found here [[GAUSS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Gaussian09&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Gaussian09 is third-party, commercially licensed software from Gaussian, Inc. It is a set of programs for calculating electronic structure.  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Gaussian09 is available for general use only on ANDY. The Gaussian User Guide can be found at [http://www.gaussian.com]. More information about our installation can be found here [[GAUSSIAN09]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GMP&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
GMP is a library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and &lt;br /&gt;
floating-point numbers. There is no practical limit to the precision except the ones implied by the &lt;br /&gt;
available memory in the machine GMP runs on. GMP has a rich set of functions, and the functions have a &lt;br /&gt;
regular interface. The library is installed on PENZIAS.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Gnuplot&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Gnuplot is a portable command-line driven graphing utility. It is installed on the following systems:&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
:* Karle under /usr/bin/gnuplot&lt;br /&gt;
:* Andy under /share/apps/gnuplot/default/bin/gnuplot&lt;br /&gt;
&lt;br /&gt;
Extensive documentation of gnuplot is available at the [http://www.gnuplot.info/  gnuplot&#039;s homepage].&lt;br /&gt;
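For example (this snippet is illustrative, not from the wiki; the output file name is arbitrary), a plot can be produced non-interactively from the command line:

```shell
# Render sin(x) to a PNG without opening an interactive session.
gnuplot -e "set terminal png; set output 'sine.png'; plot sin(x)"
```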
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GenomePop2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GenomePop2 is a newer and specialized version of the older program GenomePop. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GenomePop2 is designed to manage SNPs under more flexible and useful settings that are controlled by the user.  &lt;br /&gt;
If you need models with more than two alleles, you should use the older GenomePop version of the program.  &lt;br /&gt;
&lt;br /&gt;
GenomePop2 allows the forward simulation of sequences of biallelic positions. As in the previous version, a number of evolutionary&lt;br /&gt;
and demographic settings are allowed. Several populations under any migration model can be implemented. Each population consists&lt;br /&gt;
of a number N of individuals.  Each individual is represented by one (haploids) or two (diploids) chromosomes with constant or variable&lt;br /&gt;
(hotspots) recombination between binary sites. The fitness model is multiplicative, with each derived allele having a multiplicative effect&lt;br /&gt;
of (1-s * h-E) on the global fitness value. By default E=0 and h=0.5 in diploids, but 1 in homozygotes or in haploids. Selective nucleotide&lt;br /&gt;
sites undergoing directional selection (positive or negative) in different populations can be defined. In addition, bottleneck and/or&lt;br /&gt;
population expansion scenarios can be set by the user during a desired number of generations. Several runs can be executed and&lt;br /&gt;
a sample of user-defined size is obtained for each run and population.  For more detail on how to use GenomePop2, please visit the&lt;br /&gt;
web site here [http://webs.uvigo.es/acraaj/GenomePop2.htm]. More information about our installation can be found here [[GENOMEPOP2]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GROMACS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GROMACS (Groningen Machine for Chemical Simulations)&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GROMACS is a full-featured suite of free software, licensed under the GNU&lt;br /&gt;
General Public License to perform molecular dynamics simulations -- in other words, to simulate the behavior of molecular&lt;br /&gt;
systems with hundreds to millions of particles using Newton&#039;s equations of motion.  It is primarily used for research on&lt;br /&gt;
proteins, lipids, and polymers, but can be applied to a wide variety of chemical and biological research questions.&lt;br /&gt;
&lt;br /&gt;
Details and submission scripts for production runs can be found at:&lt;br /&gt;
http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/gromacs&lt;br /&gt;
Please note that preparing a molecular system for simulation via the GROMACS tools cannot be done on the login node. Instead, users must either use their own workstations or use the interactive or development queues.&lt;br /&gt;
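As a hedged illustration of the preparation steps mentioned above (input file names are hypothetical; newer GROMACS releases expose these tools through the single `gmx` wrapper):

```shell
# Typical GROMACS system-preparation pipeline, run off the login node.
gmx pdb2gmx -f protein.pdb -o processed.gro -water spce      # build topology
gmx editconf -f processed.gro -o boxed.gro -c -d 1.0         # define the box
gmx solvate -cp boxed.gro -o solvated.gro -p topol.top       # add solvent
gmx grompp -f md.mdp -c solvated.gro -p topol.top -o md.tpr  # make run input
gmx mdrun -deffnm md                                         # run the simulation
```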
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GPAW&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It uses real-space uniform grids and multigrid methods, atom-centered basis-functions or&lt;br /&gt;
plane-waves. GPAW calculations are controlled through scripts written in the programming language &lt;br /&gt;
Python. GPAW relies on the Atomic Simulation Environment (ASE), which is a Python package&lt;br /&gt;
that helps to describe atoms. The ASE package also handles molecular dynamics, analysis, &lt;br /&gt;
visualization, geometry optimization and more. More information about our installation can &lt;br /&gt;
be found here [[GPAW]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== H ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Hapsembler&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Hapsembler is a haplotype-specific genome assembly toolkit that is designed for genomes that are rich in SNPs and other types of polymorphism. Hapsembler can be used to assemble reads from a variety of platforms including Illumina and Roche/454.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;  Hapsembler is currently installed on the Appel system. In order to access the Hapsembler binaries, load the hapsembler module with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load hapsembler&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;HOOMD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Performs general purpose particle dynamics simulations, taking advantage of NVIDIA GPUs to attain a level of performance&lt;br /&gt;
equivalent to many processor cores on a fast cluster.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Unlike some other applications in the particle and molecular dynamics space, the HOOMD developers have worked to implement &lt;br /&gt;
all of the code&#039;s computationally intensive kernels on the GPU, although currently only single-node, single-GPU or &lt;br /&gt;
OpenMP-GPU runs are possible. There is no MPI-GPU or distributed parallel GPU version available at this time.&lt;br /&gt;
&lt;br /&gt;
HOOMD&#039;s object-oriented design patterns make it both versatile and expandable. Various types of potentials, integration methods&lt;br /&gt;
and file formats are currently supported, and more are added with each release. The code is available and open source, so anyone&lt;br /&gt;
can write a plugin or change the source to add additional functionality.  Simulations are configured and run using simple python&lt;br /&gt;
scripts, allowing complete control over the force field choice, integrator, all parameters, how many time steps are run, etc.&lt;br /&gt;
The scripting system is designed to be as simple as possible for the non-programmer.&lt;br /&gt;
&lt;br /&gt;
The HOOMD development effort is led by the Glotzer group at the University of Michigan, but many groups from different universities&lt;br /&gt;
have contributed code that is now part of the HOOMD main package, see the credits page for the full list. The HOOMD website and&lt;br /&gt;
documentation are available here [http://codeblue.umich.edu/hoomd-blue/about.html]. More information about our installation can be&lt;br /&gt;
found here [[HOOMD]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;HOPSPACK&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
HOPSPACK stands for Hybrid Optimization Parallel Search Package, a framework designed to help users solve a wide range of derivative-free optimization problems.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;These problems can be noisy, non-convex, or non-smooth. The basic optimization problem addressed is to minimize an objective function f(x) of n unknowns subject to the constraints:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
A_I x ≥ b_I&lt;br /&gt;
A_E x = b_E&lt;br /&gt;
c_I(x) ≥ 0&lt;br /&gt;
c_E(x) = 0&lt;br /&gt;
l ≤ x ≤ u&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The first two constraints specify linear inequalities and equalities with coefficient matrices A_I and A_E. The next two describe nonlinear inequalities and equalities captured in the functions c_I(x) and c_E(x). The final constraints denote lower and upper bounds on the variables. HOPSPACK allows variables to be continuous or integer-valued and has provisions for multi-objective optimization problems. In general, the functions f(x), c_I(x), and c_E(x) can be noisy and nonsmooth, although most algorithms perform best on deterministic functions with continuous derivatives.&lt;br /&gt;
&lt;br /&gt;
Users can design and implement their own solver, either by writing their own code or by building on the existing solvers already in the framework. Because all solvers (called citizens) are members of the same global class, they can share the assigned resources.&lt;br /&gt;
The main features of the package are:&lt;br /&gt;
&lt;br /&gt;
-	Only function values are required for the optimization.&lt;br /&gt;
-	The user must provide a separate program that can evaluate the objective and nonlinear constraint functions at a given point. &lt;br /&gt;
-	A robust implementation of the Generating Set Search (GSS) solver is supplied, including the capability to handle linear constraints. &lt;br /&gt;
-	Multiple solvers can run simultaneously and are easily configured to share information.&lt;br /&gt;
-	Solvers may share a cache of computed function and constraint evaluations to eliminate duplicate work.&lt;br /&gt;
-	Solvers can initiate and control sub-problems.&lt;br /&gt;
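The generating-set-search idea above can be made concrete with a toy sketch. This is an illustrative, hypothetical example (not HOPSPACK itself, and far simpler than its GSS solver): a coordinate pattern search that uses only function values and enforces the bound constraints l ≤ x ≤ u by clipping trial points to the box.

```python
# Illustrative, hypothetical sketch (not HOPSPACK itself) of the kind of
# derivative-free pattern search that HOPSPACK's GSS solver generalizes:
# only function values are used, and bound constraints are enforced by
# clipping trial points to the box [lower, upper].

def pattern_search(f, x, lower, upper, step=0.5, tol=1e-6, max_iter=10_000):
    """Minimize f over the box [lower, upper] using coordinate pattern moves."""
    x = list(x)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for sign in (1.0, -1.0):
                trial = list(x)
                trial[i] = min(max(trial[i] + sign * step, lower[i]), upper[i])
                f_trial = f(trial)
                if f_trial < fx:          # accept any improving trial point
                    x, fx, improved = trial, f_trial, True
        if not improved:
            step *= 0.5                   # no improvement: contract the pattern
            if step < tol:
                break
    return x, fx

# Toy problem: minimize f(x) = (x0 - 1)^2 + (x1 + 2)^2 with -5 <= x <= 5.
best_x, best_f = pattern_search(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2,
                                [0.0, 0.0], [-5.0, -5.0], [5.0, 5.0])
```

Because only f(x) values are needed, the same loop works when f is an external simulation that HOPSPACK would invoke as a separate evaluation program.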
More information about our installation can be found here [[HOPSPACK]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;HONDO PLUS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Hondo Plus is a versatile electronic structure code that combines work from&lt;br /&gt;
the original Hondo application developed by Harry King in the lab of Michel Dupuis&lt;br /&gt;
and John Rys, and that of numerous subsequent contributors.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It is currently distributed from the research lab of Dr. Donald Truhlar at the University &lt;br /&gt;
of Minnesota.  Part of the advantage of Hondo Plus is the availability of source&lt;br /&gt;
implementations of a wide variety of model chemistries developed over its lifetime&lt;br /&gt;
that researchers can adapt to their particular needs.  The license to use the code requires&lt;br /&gt;
a literature citation which is documented in the Hondo Plus 5.1 manual found&lt;br /&gt;
at:&lt;br /&gt;
&lt;br /&gt;
http://comp.chem.umn.edu/hondoplus/HONDOPLUS_Manual_v5.1.2007.2.17.pdf &lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[HONDO PLUS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;HUMAnN2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
HUMAnN is a pipeline for efficiently and accurately profiling the presence/absence and abundance of microbial pathways in a community from metagenomic or metatranscriptomic sequencing data (typically millions of short DNA/RNA reads). HUMAnN2 is the next generation of HUMAnN (HMP Unified Metabolic Analysis Network). Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/humann2&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== I ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;IMa2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The IMa2 application performs basic ‘Isolation with Migration’ calculations using Bayesian inference and Markov &lt;br /&gt;
chain Monte Carlo methods.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The only major conceptual addition to IMa2 that makes it different from the&lt;br /&gt;
original IMa  program is that it can handle data from multiple populations. This requires that the user &lt;br /&gt;
specify a phylogenetic tree. Importantly, the tree must be rooted, and the sequence in time of internal&lt;br /&gt;
nodes must be known and specified. More information on the IMa2 and IMa can be found in the user&lt;br /&gt;
manual here [http://lifesci.rutgers.edu/%7Eheylab/ProgramsandData/Programs/IMa2/Using_IMa2_8_24_2011.pdf].&lt;br /&gt;
Information about our installation can be found here [[IMA2]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;I-TASSER&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
I-TASSER is a platform for protein structure and function predictions. 3D models are built based on multiple-threading alignments by LOMETS and iterative template fragment assembly simulations; function insights are derived by matching the 3D models with the BioLiP protein function database. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/itasser&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== J ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;JULIA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
Julia is a high-level, high-performance dynamic programming language for technical computing, with syntax that is familiar to users of other technical computing environments. Julia is installed on Penzias.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== L ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;LAMARC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
LAMARC is a program which estimates population-genetic parameters such as population size, population growth rate,&lt;br /&gt;
recombination rate, and migration rates.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It approximates a summation over all possible genealogies that could explain&lt;br /&gt;
the observed sample, which may be sequence, SNP, microsatellite, or electrophoretic data.  LAMARC and its sister program&lt;br /&gt;
MIGRATE are successor programs to the older programs Coalesce, Fluctuate, and Recombine, which are no longer being&lt;br /&gt;
supported.  These programs are memory-intensive, but can run effectively on workstations. They are supported on a variety&lt;br /&gt;
of operating systems.  For more detail on LAMARC please visit the website here [http://evolution.genetics.washington.edu/lamarc/index.html],&lt;br /&gt;
read this paper [http://evolution.genetics.washington.edu/lamarc/download/bioinformatics2006-lamarc2.0.pdf], and look&lt;br /&gt;
at the documentation here [http://evolution.genetics.washington.edu/lamarc/documentation/index.html]. More information&lt;br /&gt;
about our installation can be found here [[LAMARC]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;LAMMPS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
LAMMPS is a classical molecular dynamics code that models an ensemble of particles in a liquid, solid, or gaseous state.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It can model atomic, polymeric, biological, metallic, granular, and coarse-grained systems using a variety of force fields and boundary conditions.  &lt;br /&gt;
LAMMPS runs efficiently on single-processor desktop or laptop machines, but is also designed for parallel computers, including clusters with and without GPUs. &lt;br /&gt;
It will run on any parallel machine that compiles C++ and supports the MPI message-passing library. This includes distributed- or shared-memory parallel &lt;br /&gt;
machines and Beowulf-style clusters. LAMMPS can model systems with only a few particles up to millions or billions. LAMMPS is freely available, open-source &lt;br /&gt;
code, distributed under the terms of the GNU General Public License, which means you can use or modify the code however you wish.  LAMMPS is designed to be easy to &lt;br /&gt;
modify or extend with new capabilities, such as new force fields, atom types, boundary conditions, or diagnostics. A complete description of LAMMPS can be found &lt;br /&gt;
in its on-line manual here [http://lammps.sandia.gov/doc/Manual.html] or from the full PDF manual here [http://lammps.sandia.gov/doc/Manual.pdf]. Information&lt;br /&gt;
about our installation can be found here [[LAMMPS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;LS-DYNA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From its early development in the 1970s, LS-DYNA has evolved into a general purpose material&lt;br /&gt;
stress, collision, and crash analysis program with many built-in material and structural element&lt;br /&gt;
models. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In recent years, the code has also been adapted for both OpenMP and MPI parallel execution&lt;br /&gt;
on a variety of platforms.  The most recent version, LS-DYNA 7.1.2, is installed on &lt;br /&gt;
ANDY at the CUNY HPC Center under an academic license held by the City College of New York.&lt;br /&gt;
The use of this license to do work that is commercial in any way is prohibited.&lt;br /&gt;
&lt;br /&gt;
Details on LS-DYNA&#039;s use, input deck construction, and execution options can be found in the LS-DYNA&lt;br /&gt;
manual here [http://ftp.lstc.com/user/manuals/ls-dyna_971_manual_k_rev1.pdf]. All files related&lt;br /&gt;
to the HPC Center installation of version 971 (executables and example inputs) are located in:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
/share/apps/lsdyna/default/[bin,examples]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[LSDYNA]].&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== M ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MAGMA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
MAGMA is a library similar to LAPACK but for hybrid architectures. MAGMA provides implementations for CUDA, Intel Xeon Phi, and OpenCL. &lt;br /&gt;
On CUNY HPCC systems, MAGMA is installed in its CUDA variant only on Penzias.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MATHEMATICA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
“Mathematica” is a fully integrated technical computing system that combines fast, high-precision numerical and symbolic computation with data visualization and programming capabilities. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Mathematica is currently installed on the CUNY HPC Center&#039;s ANDY cluster (andy.csi.cuny.edu) and KARLE standalone server (karle.csi.cuny.edu). The basics of running Mathematica on CUNY HPC systems are presented here.  Additional information on how to use Mathematica can be found at http://www.wolfram.com/learningcenter/&lt;br /&gt;
&lt;br /&gt;
More information is available in this wiki, find it here [[MATHEMATICA]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MATLAB&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The MATLAB high-performance language for technical computing&lt;br /&gt;
integrates computation, visualization, and programming in an&lt;br /&gt;
easy-to-use environment where problems and solutions are expressed in&lt;br /&gt;
familiar mathematical notation.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Typical uses include:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Math and computation&lt;br /&gt;
&lt;br /&gt;
Algorithm development&lt;br /&gt;
&lt;br /&gt;
Data acquisition&lt;br /&gt;
&lt;br /&gt;
Modeling, simulation, and prototyping&lt;br /&gt;
&lt;br /&gt;
Data analysis, exploration, and visualization&lt;br /&gt;
&lt;br /&gt;
Scientific and engineering graphics&lt;br /&gt;
&lt;br /&gt;
Application development, including graphical user interface building&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[MATLAB]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MET (Model Evaluation Tools)&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MET was developed by the National Center for Atmospheric Research (NCAR) Developmental Testbed Center (DTC) through the generous support of the U.S. Air Force Weather Agency (AFWA) and the National Oceanic and Atmospheric Administration (NOAA).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;MET is designed to be a highly-configurable, state-of-the-art suite of verification tools. It was developed using output from the Weather Research and Forecasting (WRF) modeling system but may be applied to the output of other modeling systems as well.&lt;br /&gt;
&lt;br /&gt;
MET provides a variety of verification techniques, including:&lt;br /&gt;
&lt;br /&gt;
*Standard verification scores comparing gridded model data to point-based observations&lt;br /&gt;
*Standard verification scores comparing gridded model data to gridded observations&lt;br /&gt;
*Spatial verification methods comparing gridded model data to gridded observations using neighborhood, object-based, and intensity-scale decomposition approaches&lt;br /&gt;
*Probabilistic verification methods comparing gridded model data to point-based or gridded observations&lt;br /&gt;
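The first category of scores above can be illustrated with a tiny, hypothetical sketch (not MET itself): the two simplest standard verification scores, mean error (bias) and RMSE, computed for forecast values matched against observations at the same points.

```python
import math

# Illustrative, hypothetical sketch (not MET itself): the two simplest
# standard verification scores, computed over matched forecast/observation
# pairs at the same points.

def bias(forecast, observed):
    """Mean error: positive means the forecast runs high on average."""
    return sum(f - o for f, o in zip(forecast, observed)) / len(forecast)

def rmse(forecast, observed):
    """Root-mean-square error of the matched pairs."""
    return math.sqrt(sum((f - o) ** 2
                         for f, o in zip(forecast, observed)) / len(forecast))

fcst = [20.1, 21.5, 19.8, 22.0]   # e.g. 2 m temperature forecasts (deg C)
obs  = [20.0, 21.0, 20.5, 21.5]   # matched observations
fcst_bias, fcst_rmse = bias(fcst, obs), rmse(fcst, obs)
```

MET computes many dozens of such statistics (and their confidence intervals) directly from gridded and point data, but the matching-then-scoring pattern is the same.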
&lt;br /&gt;
More information about use and set-up can be found here [[MET]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Migrate&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Migrate estimates population parameters, effective population sizes and migration rates&lt;br /&gt;
of n populations, using genetic data.  It uses a coalescent theory approach taking into&lt;br /&gt;
account the history of mutations and the uncertainty of the genealogy.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;The estimates of the parameter values are obtained by either a maximum likelihood (ML) approach or Bayesian&lt;br /&gt;
inference (BI). Migrate&#039;s output is presented in a text file and in a PDF file. The PDF file&lt;br /&gt;
eventually will contain all possible analyses including histograms of posterior distributions.&lt;br /&gt;
More information about our installation can be found here [[MIGRATE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MPFR&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The MPFR library is a C library for multiple-precision floating-point computations with correct rounding. MPFR has continuously been supported by &lt;br /&gt;
the INRIA and the current main authors come from the Caramel and AriC project-teams at Loria (Nancy, France) and LIP (Lyon, France) respectively; see &lt;br /&gt;
more on the credit page.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
MPFR is based on the GMP multiple-precision library. The main goal of MPFR is to provide a library for multiple-precision &lt;br /&gt;
floating-point computation which is both efficient and has a well-defined semantics. It copies the good ideas from the ANSI/IEEE-754 standard for &lt;br /&gt;
double-precision floating-point arithmetic (53-bit significand). The library is installed on PENZIAS.&lt;br /&gt;
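The notion of correct rounding can be shown by analogy. MPFR itself is a C library, but Python&#039;s standard-library decimal module illustrates the same idea: each result is the exact mathematical value rounded to the working precision, so it is reproducible across platforms.

```python
from decimal import Decimal, getcontext, ROUND_HALF_EVEN

# Analogy only: MPFR is a C library, but Python's stdlib `decimal` module
# shows the same idea of multiple-precision arithmetic with correct
# rounding -- each result is the exact value rounded to the set precision.

getcontext().prec = 50                    # work with 50 significant digits
getcontext().rounding = ROUND_HALF_EVEN   # the IEEE-754 default rounding mode

third = Decimal(1) / Decimal(3)           # 1/3 correctly rounded to 50 digits
```

In MPFR the precision is set per variable in bits rather than per context in digits, but the guarantee is the same: one correctly rounded result, independent of platform.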
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MRBAYES&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MrBayes is a program for the Bayesian estimation of phylogeny.  Bayesian inference of&lt;br /&gt;
phylogeny is based upon a quantity called the posterior probability distribution of trees,&lt;br /&gt;
which is the probability of a tree conditioned on certain observations.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The conditioning is&lt;br /&gt;
accomplished using Bayes&#039;s theorem. The posterior probability distribution of trees is&lt;br /&gt;
impossible to calculate analytically; instead, MrBayes uses a simulation technique called&lt;br /&gt;
Markov chain Monte Carlo (or MCMC) to approximate the posterior probabilities of trees.&lt;br /&gt;
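The Metropolis MCMC idea described above can be sketched on a toy problem small enough to check analytically. This is an illustrative, hypothetical example, not MrBayes: estimating a coin&#039;s bias p from 7 heads in 10 flips under a uniform prior, where the true posterior is Beta(8, 4) with mean 2/3.

```python
import math
import random

# Illustrative, hypothetical sketch (not MrBayes) of Metropolis MCMC on a
# toy problem: estimate a coin's bias p from k = 7 heads in n = 10 flips
# under a uniform prior. The exact posterior is Beta(8, 4), mean 2/3.

def log_posterior(p, k=7, n=10):
    """Log of the (unnormalized) posterior density of the coin bias p."""
    if not 0.0 < p < 1.0:
        return float("-inf")     # zero prior density outside (0, 1)
    return k * math.log(p) + (n - k) * math.log(1.0 - p)

def metropolis(n_steps=200_000, width=0.1, seed=0):
    """Sample the posterior with a symmetric random-walk proposal."""
    rng = random.Random(seed)
    p, lp = 0.5, log_posterior(0.5)
    samples = []
    for _ in range(n_steps):
        proposal = p + rng.uniform(-width, width)
        lp_prop = log_posterior(proposal)
        # Accept with probability min(1, posterior ratio); only the ratio
        # matters, so the intractable normalizing constant cancels.
        if math.log(rng.random()) < lp_prop - lp:
            p, lp = proposal, lp_prop
        samples.append(p)
    return samples

posterior_mean = sum(metropolis()) / 200_000   # should approach 2/3
```

MrBayes applies the same accept/reject machinery in the vastly larger space of trees and substitution-model parameters, where the cancellation of the normalizing constant is what makes the problem tractable at all.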
More information about our installation can be found here [[MRBAYES]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;msABC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
msABC is a program for simulating various neutral evolutionary demographic scenarios&lt;br /&gt;
based on the software ms (Hudson 2002). msABC extends ms, calculating a multitude of&lt;br /&gt;
summary statistics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Therefore, msABC is suitable for performing the sampling step of an&lt;br /&gt;
Approximate Bayesian Computation analysis (ABC), under various neutral demographic&lt;br /&gt;
models. The main advantages of msABC are (i) use of various prior distributions, such as&lt;br /&gt;
uniform, Gaussian, log-normal, and gamma, (ii) implementation of a multitude of summary statistics&lt;br /&gt;
for one or more populations, (iii) an efficient implementation, which allows the analysis of&lt;br /&gt;
hundreds of loci and chromosomes even on a single computer, and (iv) extended flexibility, such&lt;br /&gt;
as simulation of loci of variable size and simulation of missing data.&lt;br /&gt;
More information about our installation can be found here [[msABC]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MSMS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MSMS is a tool to generate sequence samples under both neutral models and single locus selection models.&lt;br /&gt;
MSMS permits  the full range of demographic models provided by its relative MS (Hudson, 2002).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
In particular, it allows for multiple demes with arbitrary migration patterns, population growth and decay in each deme, and&lt;br /&gt;
for population splits and mergers. Selection (including dominance) can depend on the deme and also change&lt;br /&gt;
with time.&lt;br /&gt;
More information about our installation can be found here [[MSMS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== N ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;NAMD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NAMD is a parallel molecular dynamics code designed for high-performance simulation&lt;br /&gt;
of large biomolecular systems. [http://www.ks.uiuc.edu/Research/namd].&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The main server for molecular dynamics calculations is PENZIAS, which supports both GPU and non-GPU versions of NAMD.&lt;br /&gt;
However, MPI-only (no GPU support) parallel versions of NAMD are also installed on SALK and ANDY. &lt;br /&gt;
More information about our installation can be found here [[NAMD]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Network Simulator-2 (NS2)&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NS2 is a discrete event simulator targeted at networking research. NS2 provides&lt;br /&gt;
substantial support for simulation of TCP, routing, and multicast protocols over&lt;br /&gt;
wired and wireless (local and satellite) networks.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It is installed on BOB at the CUNY HPC Center. For more detailed information, look [http://www.isi.edu/nsnam/ns/ here].&lt;br /&gt;
More information about our installation can be found here [[NS2]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;NWChem&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NWChem is an ab initio computational chemistry software package which also includes molecular dynamics (MM, MD) and coupled, quantum mechanical and molecular dynamics functionality (QM-MD).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
NWChem has been developed by the Molecular Sciences Software group at the Department of Energy&#039;s EMSL. The software is available on PENZIAS and ANDY.&lt;br /&gt;
More information about our installation can be found here [[NWChem]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== O == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Octopus&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Octopus is a pseudopotential real-space package aimed at the simulation of the electron-ion dynamics of one-, two-, and three-dimensional finite systems subject to time-dependent electromagnetic fields.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The program is based on time-dependent density-functional theory (TDDFT) in the Kohn-Sham scheme. All quantities are expanded in a regular mesh in real space, and the simulations are performed in real time. The program has been successfully used to calculate linear and non-linear absorption spectra, harmonic spectra, laser induced fragmentation, etc. of a variety of systems.&lt;br /&gt;
More information about our installation can be found here [[OCTOPUS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;OpenMM&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenMM is both a library and a stand-alone application which provides tools for modern molecular modeling simulation. As a library it can be hooked into any code, allowing that code to do molecular modeling with minimal extra coding.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Moreover, OpenMM has a strong emphasis on hardware acceleration via GPU, thus providing not just a consistent API but much greater performance than what one could get from just about any other code available. OpenMM was developed as part of the Physics-Based Simulation project led by Prof. Pande.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;OpenFOAM&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenFOAM is first and foremost a library that users may incorporate into their own code(s). OpenFOAM is installed on PENZIAS.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
More information about our installation can be found here [[OpenFOAM]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;OpenSees&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenSees, the Open System for Earthquake Engineering Simulation, is an object-oriented, open source software framework.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It allows users to create both serial and parallel finite element computer applications for simulating the response of structural and geotechnical systems subjected to earthquakes and other hazards. OpenSees is primarily written in C++ and uses several Fortran and C numerical libraries for linear equation solving, and material and element routines. The software is installed on PENZIAS.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ORCA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ORCA is an electronic structure program capable of carrying out geometry optimizations and of predicting a large number of spectroscopic parameters at different levels of theory.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Besides Hartree-Fock theory, density functional theory (DFT), and semiempirical methods, high-level ab initio quantum chemical methods, based on the configuration interaction and coupled cluster methods, are included in ORCA to an increasing degree.&lt;br /&gt;
More information about our installation can be found here [[ORCA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== P == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ParGAP&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ParGAP is built on top of the GAP system, a system for computational discrete algebra with particular emphasis on computational group theory. GAP provides a programming language, a library of thousands of functions implementing algebraic algorithms written in the GAP language, as well as large data libraries of algebraic objects.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The ParGAP (Parallel GAP) package itself provides a way of writing parallel programs using the GAP language. Former names of the package were ParGAP/MPI and GAP/MPI; the word MPI refers to Message Passing Interface, a well-known standard for parallelism. ParGAP is based on the MPI standard, and this distribution includes a subset implementation of MPI, to provide a portable layer with a high level interface to BSD sockets.&lt;br /&gt;
More information about our installation can be found here [[ParGAP]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;POPABC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PopABC is a computer package for estimating historical demographic parameters of closely related species/populations (e.g., population size, migration rate, mutation rate, recombination rate, splitting events) within an isolation-with-migration model.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The software performs coalescent simulation in the framework of approximate Bayesian computation (ABC, Beaumont et al, 2002). PopABC can also be used to perform Bayesian model choice to discriminate between different demographic scenarios. The program can be used either for research or for education and teaching purposes. Further details and a manual can be found at the POPABC website here [http://code.google.com/p/popabc]&lt;br /&gt;
More information about our installation can be found here [[POPABC]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;PHOENICS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PHOENICS is an integrated Computational Fluid Dynamics (CFD) package for the preparation, simulation, and visualization of&lt;br /&gt;
processes involving fluid flow, heat or mass transfer, chemical reaction, and/or combustion in engineering equipment, building&lt;br /&gt;
design, and the environment.  More detail is available at the CHAM website, here http://www.cham.co.uk. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Although we expect most users to pre- and post-process their jobs on office-local clients, the CUNY HPC Center has installed&lt;br /&gt;
the Unix version of the &#039;&#039;entire&#039;&#039; PHOENICS package on ANDY.   PHOENICS is installed in /share/apps/phoenics/default where all&lt;br /&gt;
the standard PHOENICS directories are located (d_allpro, d_earth, d_enviro, d_photo, d_priv1, d_satell, etc.).  Of particular interest&lt;br /&gt;
on ANDY is the MPI parallel version of the &#039;earth&#039; executable &#039;parexe&#039; which makes full use of the parallel processing power of the &lt;br /&gt;
ANDY cluster for larger individual jobs.  While the parallel scaling properties of PHOENICS jobs will vary depending on the job size,&lt;br /&gt;
processor type, and the cluster interconnect, larger workloads will generally scale and run efficiently on 8 to 32 processors,&lt;br /&gt;
while smaller problems will scale efficiently only up to about 4 processors.  More detail on parallel PHOENICS is available at&lt;br /&gt;
http://www.cham.co.uk/products/parallel.php.   Aside from the tightly coupled MPI parallelism of &#039;parexe&#039;, users can run multiple&lt;br /&gt;
instances of the non-parallel modules on ANDY (including the serial &#039;earexe&#039; module) when a parametric approach can be used&lt;br /&gt;
to solve their problems.&lt;br /&gt;
More information about our installation can be found here [[PHOENICS]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;PHRAP-PHRED&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PHRAP and PHRED are part of the DNA sequence analysis tool set that also includes the programs&lt;br /&gt;
CROSSMATCH and SWAT.  These tools are described in detail here [http://www.phrap.org/phredphrapconsed.html],&lt;br /&gt;
but a brief description of both, extracted from their manuals, follows.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
PHRED and PHRAP (along with CONSED) can be used for both small sequence assemblies and larger shotgun analyses. This makes the&lt;br /&gt;
tools a perhaps under-utilized set for smaller non-genomic groups.  Some variables may need to be adjusted,&lt;br /&gt;
particularly in CONSED, but researchers who have multiple sequences from a small locus can use the &lt;br /&gt;
suite, starting from their chromatogram files.  &lt;br /&gt;
More information about our installation can be found here [[PHRAP-PHRED]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;PyRAD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Reduced-representation genomic sequence data (e.g., RADseq, GBS, ddRAD) are commonly used to study population-level research questions and consequently most software packages for assembling or analyzing such data are designed for sequences with little variation across samples.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Phylogenetic analyses typically include species with deeper divergence times (more variable loci across samples) and thus a different approach to clustering and identifying orthologs will perform better. pyRAD is intended for use with any type of restriction-site associated DNA. It currently supports RAD, ddRAD, PE-ddRAD, GBS, PE-GBS, EzRAD, PE-EzRAD, 2B-RAD, nextRAD, and can be extended to other types.&lt;br /&gt;
More information about our installation can be found here [[PyRAD]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Python&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Python is a programming language that lets you work more quickly and integrate your systems more effectively. You can learn to use Python and see almost immediate gains in productivity and lower maintenance costs. [http://www.python.org/]&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
There are two supported versions installed on the Andy system: &lt;br /&gt;
&lt;br /&gt;
* Python 3.1.3 located under /share/apps/python/3.1.3/bin&lt;br /&gt;
* Python 2.7.3 located under /share/apps/epd/7.3-2/bin&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[PYTHON]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Installing Python packages&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Users may install Python packages/modules in their own space.  Many packages available in Python repositories can be installed easily with the pip package manager, which is available in all of the Anaconda and Miniconda builds.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Users must remember that running pip without first loading a Python module will install packages against the system Python, which is present on the login node only. The Python interpreter available on all nodes (after loading the module) is installed under /share/usr/compilers/python. It is therefore very important to follow the procedure outlined below when installing packages in user space. The examples below demonstrate how users can install the package &amp;quot;guppy&amp;quot; in their own space:&lt;br /&gt;
&lt;br /&gt;
For Python 2.7.13 in Anaconda build:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load python/2.7.13_anaconda&lt;br /&gt;
pip install guppy --user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For Python 3.6.0 in Anaconda build:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load python/3.6.0_anaconda&lt;br /&gt;
pip install guppy --user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For Python 2.7.13 in Miniconda:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load python/miniconda2&lt;br /&gt;
pip install guppy --user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For Python 3.6.0 in Miniconda 3:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load python/miniconda3&lt;br /&gt;
pip install guppy --user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check whether the package is properly installed, type:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
pip list | grep guppy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
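The commands above install into the per-user site directory. The short Python sketch below (an illustration, not part of the HPCC installation) shows how to locate that directory for the currently loaded interpreter and confirm it is on the import path:&lt;br /&gt;

```python
# Sketch: find where "pip install --user" places packages for the
# currently running Python, and check that it is on the import path.
import site
import sys

user_site = site.getusersitepackages()  # e.g. ~/.local/lib/pythonX.Y/site-packages
print("User site-packages directory:", user_site)

# --user installs are importable when user site dirs are enabled
# (the default) and the directory appears on sys.path.
print("User site enabled:", site.ENABLE_USER_SITE)
print("On sys.path:", user_site in sys.path)
```

If an installed package still fails to import, comparing sys.path under the loaded Python module and under the system Python usually reveals the login-node mismatch described above.&lt;br /&gt;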
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== Q == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;QIIME&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
QIIME (pronounced &amp;quot;chime&amp;quot;) stands for Quantitative Insights Into Microbial Ecology. QIIME is a pipeline application that uses numerous third-party applications.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
QIIME takes users from their raw sequencing output through initial analyses such as OTU picking, taxonomic assignment, and construction of phylogenetic trees from representative sequences of OTUs, and through downstream statistical analysis, visualization, and production of publication-quality graphics.&lt;br /&gt;
More information about our installation can be found here [[QIIME]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== R == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;R&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
R is a free software environment for statistical computing and graphics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:15px;&amp;quot; &amp;gt;General Notes&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The R language has become a de facto standard among statisticians for the development of statistical software, and is widely used for statistical software development and data analysis. R is available on the following HPCC servers: Karle, Penzias, Appel, and Andy. Karle is the only machine where R can be used without submitting jobs to the SLURM manager; on all other systems users must submit their R jobs via the SLURM batch scheduler.&lt;br /&gt;
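For the batch systems, a minimal SLURM script for a serial R job might look like the following sketch; the module name, partition, resource limits, and script name are illustrative assumptions, not the exact HPCC configuration:&lt;br /&gt;

```shell
#!/bin/bash
#SBATCH --job-name=r_test       # job name shown by squeue
#SBATCH --partition=production  # hypothetical partition name
#SBATCH --ntasks=1              # serial R job: one task
#SBATCH --mem=4G                # requested memory
#SBATCH --time=01:00:00         # wall-clock limit

# Load the site's R module (module name is an assumption)
module load r

# Run the R script in batch mode
Rscript analysis.R
```

Submit the script with &amp;quot;sbatch&amp;quot; and monitor it with &amp;quot;squeue&amp;quot;.&lt;br /&gt;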
More information about our installation can be found here [[R]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;RAXML&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Randomized Axelerated Maximum Likelihood (RAxML) is a program for sequential and parallel&lt;br /&gt;
maximum likelihood based inference of large phylogenetic trees.  It is a descendant of fastDNAml,&lt;br /&gt;
which in turn was derived from Joe Felsenstein’s DNAml, which is part of the PHYLIP package.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
RAxML is installed at the CUNY HPC Center on ANDY, where multiple versions are available in both serial and MPI-parallel builds.  The MPI-parallel version should be run on four or more cores and is also installed on Penzias. &lt;br /&gt;
More information about our installation can be found here [[RAXML]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== S == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;SAGE&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Sage can be used to study elementary and advanced, pure and applied mathematics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
This includes a huge range of mathematics, including basic algebra, calculus, elementary to very&lt;br /&gt;
advanced number theory, cryptography, numerical computation, commutative algebra, group&lt;br /&gt;
theory, combinatorics, graph theory, exact linear algebra and much more.&lt;br /&gt;
More information about our installation can be found here [[SAGE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;SAMTOOLS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
SAMTOOLS provides various utilities for manipulating alignments in the SAM format, including sorting,&lt;br /&gt;
merging, indexing and generating alignments in a per-position format.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
SAM (Sequence Alignment/Map) format is a generic format for storing large nucleotide sequence alignments.  SAM is a compact format that aims to:&lt;br /&gt;
&lt;br /&gt;
:* be flexible enough to store all the alignment information generated by various alignment programs;&lt;br /&gt;
:* be simple enough to be easily generated by alignment programs or converted from existing formats;&lt;br /&gt;
:* allow most operations on the alignment to work without loading the whole alignment into memory;&lt;br /&gt;
:* allow the file to be indexed by genomic position to efficiently retrieve all reads aligning to a locus.&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[SAMTOOLS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;SAS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
SAS (pronounced &amp;quot;sass&amp;quot;, originally Statistical Analysis System) is an integrated system of software products provided by SAS Institute Inc.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It enables the programmer to perform:&lt;br /&gt;
:* data entry, retrieval, management, and mining&lt;br /&gt;
:* report writing and graphics&lt;br /&gt;
:* statistical analysis&lt;br /&gt;
:* business planning, forecasting, and decision support&lt;br /&gt;
:* operations research and project management&lt;br /&gt;
:* quality improvement&lt;br /&gt;
:* applications development&lt;br /&gt;
:* data warehousing (extract, transform, load)&lt;br /&gt;
:* platform independent and remote computing&lt;br /&gt;
More information about our installation can be found here [[SAS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Stata/MP&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Stata is a complete, integrated statistical package that provides tools for data analysis, data management, and graphics. Stata/MP takes advantage of multiprocessor computers; the CUNY HPC Center is licensed to use Stata on up to 8 cores. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Currently Stata/MP is available for users on Karle (karle.csi.cuny.edu). &lt;br /&gt;
More information about our installation can be found here [[STATA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Structurama&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Structurama is a program for inferring population structure from genetic data. The program assumes that the sampled loci&lt;br /&gt;
are in linkage equilibrium and that the allele frequencies for each population are drawn from a Dirichlet probability distribution. Two different models for population structure are implemented.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
First, Structurama offers the method of Pritchard et al. (2000), in which the number of populations is considered fixed. The program also allows the number of populations to be a random variable following a Dirichlet process prior (Pella and Masuda, 2006; Huelsenbeck and Andolfatto, 2007).&lt;br /&gt;
More information about our installation can be found here [[STRUCTURAMA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Structure&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The program Structure is a free software package for using multi-locus genotype data to investigate&lt;br /&gt;
population structure.  Its uses include inferring the presence of distinct populations, assigning individuals&lt;br /&gt;
to populations, studying hybrid zones, identifying migrants and admixed individuals, and estimating&lt;br /&gt;
population allele frequencies in situations where many individuals are migrants or admixed.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;It can be applied to most of the commonly-used genetic markers, including SNPs, microsatellites, RFLPs, and AFLPs. More detailed information about Structure can be found at the web site here [http://pritch.bsd.uchicago.edu/structure.html]. Structure is installed on ANDY at the CUNY HPC Center.  Structure is a serial program. &lt;br /&gt;
More information about our installation can be found here [[STRUCTURE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== T == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Thrust Library (CUDA)&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Thrust is a C++ template library for CUDA based on the Standard Template Library (STL). Thrust allows you&lt;br /&gt;
to implement high performance parallel applications with minimal programming effort through a high-level&lt;br /&gt;
interface that is fully interoperable with CUDA C.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Thrust is now integrated into the default CUDA distribution. The HPC Center currently runs CUDA, which includes the&lt;br /&gt;
Thrust library, as the default on PENZIAS. More information about our installation can be found here [[THRUST]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;TOPHAT&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
TOPHAT is a fast splice junction mapper for RNA-Seq reads. It aligns RNA-Seq reads to mammalian-sized&lt;br /&gt;
genomes using the ultra high-throughput short read aligner Bowtie, and then analyzes the mapping results&lt;br /&gt;
to identify splice junctions between exons.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
TOPHAT is part of a sequence alignment and analysis tool chain developed at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics and Computational Biology.&lt;br /&gt;
More information about our installation can be found here [[TOPHAT]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Trinity&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Trinity, developed at the Broad Institute and the Hebrew University of Jerusalem, represents a novel method for the efficient and robust de novo reconstruction of transcriptomes from RNA-seq data.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Trinity combines three independent software modules: Inchworm, Chrysalis, and Butterfly, applied sequentially to process large volumes of RNA-seq reads. Trinity partitions the sequence data into many individual de Bruijn graphs, each representing the transcriptional complexity at a given gene or locus, and then processes each graph independently to extract full-length splicing isoforms and to tease apart transcripts derived from paralogous genes.&lt;br /&gt;
More information about our installation can be found here [[TRINITY]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== U == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;USEARCH&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
USEARCH is a unique sequence analysis tool with thousands of users world-wide.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
USEARCH offers search and clustering algorithms that are often orders of magnitude faster than BLAST. &lt;br /&gt;
More information about our installation can be found here [[USEARCH]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== V == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;VELVET&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
Velvet is a set of algorithms for &#039;&#039;de novo&#039;&#039; short read assembly using de Bruijn graphs. It was developed at the European Bioinformatics Institute, Cambridge, UK. More information about our installation can be found here [[VELVET]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;VSEARCH&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
VSEARCH is an open-source alternative to USEARCH.  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
VSEARCH stands for vectorized search, as the tool takes advantage of parallelism in the form of SIMD vectorization as well as multiple threads to perform accurate alignments at high speed. VSEARCH uses an optimal global aligner (full dynamic programming Needleman-Wunsch), in contrast to USEARCH which by default uses a heuristic seed and extend aligner. This usually results in more accurate alignments and overall improved sensitivity (recall) with VSEARCH, especially for alignments with gaps. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Additional details on VSEARCH can be found at: [https://github.com/torognes/vsearch this link]&lt;br /&gt;
&lt;br /&gt;
VSEARCH is installed on the Penzias HPC cluster. To start using VSEARCH, first load the corresponding module:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load vsearch  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;VMD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
VMD is a molecular visualization program for displaying, animating, and analyzing large biomolecular systems using 3-D graphics and built-in scripting.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It was developed by The Theoretical and Computational Biophysics Group at the University of Illinois. It is documented on the [http://www.ks.uiuc.edu/Research/vmd/ TCB&#039;s homepage].&lt;br /&gt;
&lt;br /&gt;
VMD is installed on Karle. To use it from the command-line interface, log in to Karle as usual and start VMD by typing &amp;quot;vmd&amp;quot; followed by return, or alternatively use the full path: &lt;br /&gt;
&amp;quot;/share/apps/vmd/default/bin/vmd&amp;quot;&lt;br /&gt;
&lt;br /&gt;
In order to use VMD in GUI mode, log in to Karle with the -X option (see [http://wiki.csi.cuny.edu/cunyhpc/index.php/Main_Page#X11_Forwarding_or_Tunneling this article] &lt;br /&gt;
for details) and start VMD as described above.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== W == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;WRF&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The Weather Research and Forecasting (WRF) model is a specific computer program with dual use for both weather&lt;br /&gt;
forecasting and weather research.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It was created through a partnership that includes the National Oceanic and Atmospheric&lt;br /&gt;
Administration (NOAA), the National Center for Atmospheric Research (NCAR), and more than 150 other organizations&lt;br /&gt;
and universities in the United States and abroad. WRF is the latest numerical model and application to be adopted by NOAA&#039;s&lt;br /&gt;
National Weather Service as well as the U.S. military and private meteorological services. It is also being adopted by&lt;br /&gt;
government and private meteorological services worldwide. More information about our installation can be found here [[WRF]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== X == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Xmgrace&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Grace is a WYSIWYG 2D plotting tool for the X Window System and Motif. Xmgrace is developed at the Plasma Laboratory, Weizmann Institute of Science. More information about its capabilities can be found at the web page http://plasma-gate.weizmann.ac.il/Grace/&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Grace is installed on Karle. To use it from the command line, log in to Karle as usual and start Grace by typing &amp;quot;xmgrace&amp;quot; followed by return, or alternatively use the full path: &amp;quot;/share/apps/xmgrace/default/grace/bin/xmgrace&amp;quot;&lt;br /&gt;
To use Grace in GUI mode, log in to Karle with the -X option (see [http://wiki.csi.cuny.edu/cunyhpc/index.php/Main_Page#X11_Forwarding_or_Tunneling this article] &lt;br /&gt;
for details) and start Xmgrace as described above.&lt;br /&gt;
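The session described above can be sketched as follows. This is a minimal example, not an exact transcript; the hostname shown for Karle is an assumption, so substitute the login address supplied with your account.

```shell
# Log in with X11 forwarding enabled (-X) so GUI windows display locally.
# "karle.csi.cuny.edu" is a placeholder hostname -- use your actual one.
ssh -X <userid>@karle.csi.cuny.edu

# On Karle, start Grace from the default PATH...
xmgrace

# ...or, equivalently, via the full installation path:
/share/apps/xmgrace/default/grace/bin/xmgrace
```

Without the `-X` option the same commands still work, but only the command-line (batch) features of Grace are usable.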
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=158</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=158"/>
		<updated>2022-11-07T18:08:11Z</updated>

		<summary type="html">&lt;p&gt;James: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:CUNY-HPCC-HEADER-LOGO.jpg]]&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
[[Image:hpcc-panorama3.png]]&lt;br /&gt;
&lt;br /&gt;
The City University of New York (CUNY) High Performance Computing Center (HPCC) is located on the&lt;br /&gt;
campus of the College of Staten Island, 2800 Victory Boulevard, Staten Island, New York 10314.  HPCC&lt;br /&gt;
goals are to: &lt;br /&gt;
&lt;br /&gt;
:*Support the scientific computing needs of CUNY faculty, their collaborators at other universities, and their public and private sector partners, and CUNY students and research staff.&lt;br /&gt;
:*Create opportunities for the CUNY research community to develop new partnerships with the government and private sectors; and&lt;br /&gt;
:*Leverage the HPC Center capabilities to acquire additional research resources for its faculty and graduate students in existing and major new programs.&lt;br /&gt;
&lt;br /&gt;
==Organization of systems and data storage (architecture)==&lt;br /&gt;
&lt;br /&gt;
All user and project data are kept on the Data Storage and Management System (&#039;&#039;&#039;DSMS&#039;&#039;&#039;), which is mounted only on the login node(s) of all servers. Consequently, no jobs can be started directly from &#039;&#039;&#039;DSMS&#039;&#039;&#039; storage. Instead, all jobs must be submitted from a separate (fast but small) &#039;&#039;&#039;/scratch&#039;&#039;&#039; file system mounted on all computational nodes and all login nodes. As the name suggests, the &#039;&#039;&#039;/scratch&#039;&#039;&#039; file system is neither a home directory for accounts nor a place for long-term data preservation. Users must follow the &amp;quot;staging&amp;quot; procedure described below to ensure preservation of their data, codes, and parameter files. The figure below is a schematic of the environment.&lt;br /&gt;
&lt;br /&gt;
Upon registering with the HPCC, every user receives 2 directories:&lt;br /&gt;
&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – this is temporary workspace on the HPC systems&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – space for “home directory”, i.e., storage space on the DSMS for program, scripts, and data&lt;br /&gt;
&lt;br /&gt;
:•	In some instances a user will also have use of disk space on the DSMS in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;projectid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; (IRods). &lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;/global/u/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory has a quota (see below for details), while &#039;&#039;&#039;/scratch/&amp;lt;userid&amp;gt;&#039;&#039;&#039; does not. However, the &#039;&#039;&#039;/scratch&#039;&#039;&#039; space is cleaned up following the rules described below. There are no guarantees of any kind that files in &#039;&#039;&#039;/scratch&#039;&#039;&#039; will survive hardware crashes or cleanup. Access to all HPCC resources is provided by a bastion host called &#039;&#039;&#039;Chizen&#039;&#039;&#039;. The Data Transfer Node, called &#039;&#039;&#039;Cea&#039;&#039;&#039;, allows file transfer from/to remote sites directly to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; or to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;.         [[Image:HPCC_structure.png|frameless|900x900px]]&lt;br /&gt;
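The staging pattern described above can be sketched as a short script. This is a hedged illustration, not an official HPCC script: on the real systems the two roles are played by /global/u/&lt;userid&gt; (DSMS home space, quota applies) and /scratch/&lt;userid&gt; (fast but purgeable); here stand-in temporary directories are used so the pattern can be exercised anywhere, and the directory names are placeholders.

```shell
#!/bin/bash
# Sketch of "staging": copy inputs from home space to scratch before a
# run, and copy results back promptly afterwards. HOME_DIR and WORK_DIR
# are stand-ins for /global/u/<userid> and /scratch/<userid>.
set -e
HOME_DIR=${HOME_DIR:-$(mktemp -d)/global_u_user}   # long-term storage (quota)
WORK_DIR=${WORK_DIR:-$(mktemp -d)/scratch_user}    # fast, temporary, may be purged

# Prepare an example input file in "home" space.
mkdir -p "$HOME_DIR/inputs"
echo "parameters" > "$HOME_DIR/inputs/run.cfg"

# Stage in: copy inputs to the work area before the job starts.
mkdir -p "$WORK_DIR"
cp -r "$HOME_DIR/inputs" "$WORK_DIR/"

# ... the job runs here, writing into $WORK_DIR/results ...
mkdir -p "$WORK_DIR/results"
echo "output" > "$WORK_DIR/results/out.dat"

# Stage out: copy results back, since /scratch offers no preservation
# guarantees.
cp -r "$WORK_DIR/results" "$HOME_DIR/"
echo "staged: $(ls "$HOME_DIR/results")"   # prints "staged: out.dat"
```

The key habit is the last step: results left only in the work area can be lost to a crash or a scheduled cleanup.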
&lt;br /&gt;
==HPC systems==&lt;br /&gt;
&lt;br /&gt;
The HPC Center operates a variety of architectures in order to support complex and demanding workflows. The deployed systems include distributed memory computers (also referred to as “clusters”), symmetric multiprocessor (SMP) servers, and distributed shared memory (NUMA) machines.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Computational Systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SMP&#039;&#039;&#039; servers have several processors (working under a single operating system) which &amp;quot;share everything&amp;quot;: all CPU cores access a common memory block via a shared bus or data path. SMP servers support all combinations of memory vs. CPU (up to the limits of the particular computer). SMP servers are commonly used to run sequential or thread-parallel (e.g. OpenMP) jobs, and they may or may not have GPUs. Currently, HPCC operates 3 SMP servers named &#039;&#039;&#039;Math, Cryo&#039;&#039;&#039; and &#039;&#039;&#039;Karle&#039;&#039;&#039;. Karle has no GPU and is used for visualization, visual analytics, and interactive MATLAB/Mathematica jobs. &#039;&#039;&#039;Math&#039;&#039;&#039; is a condominium server, also without GPUs. Cryo (a CPU+GPU server) is a specialized server designed to support large-scale multi-core, multi-GPU jobs.&lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;cluster&#039;&#039;&#039; is a single system comprising a set of SMP servers interconnected with a high-performance network. Specific software coordinates programs on and across these servers in order to perform computationally intensive tasks. Each SMP member of the cluster is called a node. All nodes run independent copies of the same operating system (OS). Some or all of the nodes may incorporate GPUs. The main cluster at HPCC is a hybrid (CPU+GPU) cluster called &#039;&#039;&#039;Penzias&#039;&#039;&#039;. Sixty-six (66) of the Penzias nodes have 2 K20m GPUs each, while the 3 fat nodes of the cluster (nodes with a large number of CPU cores and memory) have none. In addition, HPCC operates the cluster &#039;&#039;&#039;Herbert&#039;&#039;&#039;, dedicated exclusively to education.&lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;distributed shared memory&#039;&#039;&#039; computer is a tightly coupled server in which the memory is physically distributed but logically unified as a single block. The system resembles an SMP, but the possible number of CPU cores and amount of memory far exceed the limitations of SMP. Because the memory is distributed, access times across the address space are non-uniform; hence this architecture is called Non-Uniform Memory Access (NUMA). Like SMP systems, &#039;&#039;&#039;NUMA&#039;&#039;&#039; systems are typically used for applications such as data mining and decision support, in which processing can be parceled out to a number of processors that collectively work on common data. HPCC operates a &#039;&#039;&#039;NUMA&#039;&#039;&#039; server called &#039;&#039;&#039;Appel&#039;&#039;&#039;. This server does not have GPUs.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039; Infrastructure systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
o	The Master Head Node (&#039;&#039;&#039;MHN&#039;&#039;&#039;) is a redundant login node from which all jobs on all servers are started. This server is not directly accessible from outside the CSI campus.&lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Chizen&#039;&#039;&#039; is a redundant gateway server which provides access to protected HPCC domain.&lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Cea&#039;&#039;&#039; is a file transfer node allowing transfer of files between users’ computers and the /scratch space or /global/u/&amp;lt;userid&amp;gt;. &#039;&#039;&#039;Cea&#039;&#039;&#039; is accessible directly (not only via &#039;&#039;&#039;Chizen&#039;&#039;&#039;), but allows only a limited set of shell commands.&lt;br /&gt;
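Transfers through Cea can be sketched with standard scp commands run from your own machine. This is an illustrative sketch only: the hostname "cea.csi.cuny.edu" and the file names are assumptions, so use the address and paths supplied with your account.

```shell
# Upload an input file from your local machine to your scratch space.
# "cea.csi.cuny.edu" is a placeholder for the actual DTN address.
scp input.dat <userid>@cea.csi.cuny.edu:/scratch/<userid>/

# Download results from your DSMS home space to the current directory.
scp <userid>@cea.csi.cuny.edu:/global/u/<userid>/results.tar.gz .
```

Because Cea permits only a limited command set, use it for transfers only; interactive work belongs on the login nodes reached via Chizen.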
&lt;br /&gt;
&#039;&#039;&#039;Table 1&#039;&#039;&#039; below provides a quick summary of the attributes of each of the systems available at the HPC Center.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Master Head Node&lt;br /&gt;
!System&lt;br /&gt;
!Type&lt;br /&gt;
!Type of Jobs&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores&lt;br /&gt;
!GPUs&lt;br /&gt;
!Mem/node&lt;br /&gt;
!Mem/core&lt;br /&gt;
!Chip Type&lt;br /&gt;
!GPU Type&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;10&amp;quot; |MHN&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Penzias&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Hybrid Cluster&lt;br /&gt;
|Sequential &amp;amp; Parallel jobs w/wo GPU&lt;br /&gt;
|66&lt;br /&gt;
|16 &lt;br /&gt;
|2&lt;br /&gt;
|64 GB&lt;br /&gt;
|4 GB&lt;br /&gt;
|SB, EP 2.20 GHz&lt;br /&gt;
|K20m GPU, PCIe&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Sequential &amp;amp; Parallel jobs&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |1&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|1500 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|36&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|32 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Appel&lt;br /&gt;
|NUMA&lt;br /&gt;
|Massive Parallel, sequential&lt;br /&gt;
|1&lt;br /&gt;
|384&lt;br /&gt;
| -&lt;br /&gt;
|11 TB&lt;br /&gt;
|28 GB&lt;br /&gt;
|IB, 3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Cryo&lt;br /&gt;
|SMP&lt;br /&gt;
|Sequential and Parallel jobs, with GPU&lt;br /&gt;
|1&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|1500 GB&lt;br /&gt;
|37 GB&lt;br /&gt;
|SL, 2.40 GHz&lt;br /&gt;
|V100 (32GB) GPU, SXM&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Blue Moon&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Hybrid Cluster&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Sequential and Parallel jobs w/wo GPU&lt;br /&gt;
|24&lt;br /&gt;
|32&lt;br /&gt;
| -&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |192 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |6 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SL, 2.10 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|32 &lt;br /&gt;
|2&lt;br /&gt;
|V100(16GB) GPU, PCIe&lt;br /&gt;
|-&lt;br /&gt;
|Karle&lt;br /&gt;
|SMP&lt;br /&gt;
|Visualization, MATLAB/Mathematica&lt;br /&gt;
|1&lt;br /&gt;
|36*&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Chizen&lt;br /&gt;
|Gateway&lt;br /&gt;
|2&lt;br /&gt;
| colspan=&amp;quot;6&amp;quot; | -&lt;br /&gt;
| -&lt;br /&gt;
|}&lt;br /&gt;
Note: SB = Sandy Bridge, HL = Haswell, IB = Ivy Bridge, SL = Skylake&lt;br /&gt;
&lt;br /&gt;
==Partitions and jobs==&lt;br /&gt;
The only way to submit job(s) to HPCC servers is through the SLURM batch system. Every job, regardless of its type (interactive, batch, serial, parallel, etc.), must be submitted via SLURM, which allocates the requested resources on the proper server and starts the job(s) according to a predefined strict fair-share policy. Computational resources (CPU cores, memory, GPUs) are organized in partitions. The main partition is called production. This is a routing partition which distributes jobs among several sub-partitions depending on the job’s requirements; thus a serial job submitted to &#039;&#039;&#039;production&#039;&#039;&#039; will land in the &#039;&#039;&#039;partsequential&#039;&#039;&#039; partition. No PBS Pro scripts should ever be used, and all existing PBS scripts must be converted to SLURM before use. The table below shows the limits of each partition.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Partition&lt;br /&gt;
!Max cores/job&lt;br /&gt;
!Max jobs/user&lt;br /&gt;
!Total cores/group&lt;br /&gt;
!Time limits&lt;br /&gt;
|-&lt;br /&gt;
|production&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|-&lt;br /&gt;
|partedu&lt;br /&gt;
|16&lt;br /&gt;
|2&lt;br /&gt;
|216&lt;br /&gt;
|72 Hours&lt;br /&gt;
|-&lt;br /&gt;
|partmath&lt;br /&gt;
|128&lt;br /&gt;
|128&lt;br /&gt;
|128&lt;br /&gt;
|240 Hours&lt;br /&gt;
|-&lt;br /&gt;
|partmatlab&lt;br /&gt;
|1972&lt;br /&gt;
|50&lt;br /&gt;
|1972&lt;br /&gt;
|240 Hours&lt;br /&gt;
|-&lt;br /&gt;
|partdev&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|4 Hours&lt;br /&gt;
|}&lt;br /&gt;
o	&#039;&#039;&#039;production&#039;&#039;&#039; is the main partition, with assigned resources across all servers (except Math and Cryo). It is a routing partition, so the actual job(s) will be placed in the proper sub-partition automatically. Users may submit sequential, thread-parallel, or distributed-parallel jobs with or without GPUs.&lt;br /&gt;
 &lt;br /&gt;
o	The &#039;&#039;&#039;partedu&#039;&#039;&#039; partition is for education only. Its assigned resources are on the educational server Herbert. Partedu is accessible only to students (graduate and/or undergraduate) and their professors who are registered for a class supported by HPCC. Access to this partition is limited to the duration of the class.&lt;br /&gt;
&lt;br /&gt;
o	The &#039;&#039;&#039;partmatlab&#039;&#039;&#039; partition allows users to run MATLAB&#039;s Parallel Server (formerly Distributed Computing Server) across the main cluster. Note, however, that Parallel Computing Toolbox programs can also be submitted via the production partition, but only as thread-parallel jobs.&lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;partdev&#039;&#039;&#039; is dedicated to development. All HPCC users have access to this partition, whose assigned resources are one computational node with 16 cores, 64 GB of memory, and 2 GPUs (K20m). This partition has a time limit of 4 hours.&lt;br /&gt;
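The submission workflow above can be illustrated with a minimal SLURM batch script. This is a sketch, not a site-supplied template: the resource values are examples within the production limits listed in the table, and the module name and program are placeholders for your own application.

```shell
#!/bin/bash
# Minimal SLURM script sketch for the "production" routing partition.
#SBATCH --job-name=example
#SBATCH --partition=production
#SBATCH --nodes=1
#SBATCH --ntasks=16          # within the 128 cores/job production limit
#SBATCH --mem=48G
#SBATCH --time=24:00:00      # well under the 240-hour limit

cd /scratch/$USER/myjob      # jobs must run from /scratch, not DSMS space
# module load mympi          # placeholder: load whatever your code needs
srun ./my_program input.dat  # placeholder executable and input
```

Submit the script with `sbatch job.slurm` and monitor it with `squeue -u $USER`; the routing partition then places the job in the appropriate sub-partition automatically.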
&lt;br /&gt;
== Hours of Operation ==&lt;br /&gt;
The second and fourth Tuesday mornings of each month, from 8:00 AM to 12:00 PM, are normally reserved (but not always used) for scheduled maintenance.  Please plan accordingly.  &amp;lt;br/ &amp;gt;&lt;br /&gt;
Unplanned maintenance to remedy system related problems may be scheduled as needed.  Reasonable attempts will be made to inform users running on those systems when these needs arise.&lt;br /&gt;
&lt;br /&gt;
== User Support ==&lt;br /&gt;
Users are encouraged to read this Wiki carefully.  In particular, the sections on compiling and running&lt;br /&gt;
parallel programs, and the section on the SLURM batch queueing system will give you the essential&lt;br /&gt;
knowledge needed to use the CUNY HPC Center systems.  We have strived to maintain the most uniform&lt;br /&gt;
user applications environment possible across the Center&#039;s systems to ease the transfer of applications&lt;br /&gt;
and run scripts among them.  Still, there are some differences, particularly with the SGI (ANDY) and Cray (SALK)&lt;br /&gt;
systems.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY&lt;br /&gt;
community in parallel programming techniques, HPC computing architecture, and the essentials of using our&lt;br /&gt;
systems. Please follow our mailings on the subject and feel free to inquire about such courses.  We regularly&lt;br /&gt;
schedule training visits and classes at the various CUNY campuses.  Please let us know if such a training visit&lt;br /&gt;
is of interest.  In the past, topics have included an overview of parallel programming, GPU programming and&lt;br /&gt;
architecture, using the evolutionary biology software at the HPC Center,  the SLURM queueing system at the&lt;br /&gt;
CUNY HPC Center, Mixed GPU-MPI and OpenMP programming, etc.  Staff has also presented guest lectures&lt;br /&gt;
at formal classes throughout the CUNY campuses.    &lt;br /&gt;
&lt;br /&gt;
If you have problems accessing your account and cannot login to the ticketing service, please send an email to:&lt;br /&gt;
&lt;br /&gt;
  [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu] &lt;br /&gt;
&lt;br /&gt;
== Warnings and modes of operation ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. hpchelp@csi.cuny.edu is for questions and account-help communication &#039;&#039;&#039;only&#039;&#039;&#039; and does not accept tickets. For tickets, please use the ticketing system mentioned above. This ensures that the staff member with the most appropriate skill set and job-related responsibility will respond to your questions. During the business week you should expect a response within 48 hours, often even the same day. During the weekend you may not get any response.&lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;&#039;E-mails to hpchelp@csi.cuny.edu must have a valid CUNY e-mail as reply address.&#039;&#039;&#039; Messages originated from public mailers (google, hotmail, etc) are filtered out.&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;&#039;Do not send questions to individual CUNY HPC Center staff members directly.&#039;&#039;&#039;  These will be returned to the sender with a polite request to submit a ticket or email the Helpline.  This applies to replies to initial questions as well.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff members are focused on providing high quality support to its user community, but compared&lt;br /&gt;
to other HPC Centers of similar size &#039;&#039;&#039;our staff is extremely lean&#039;&#039;&#039;.  Please make full use of the tools that we have provided (especially&lt;br /&gt;
the Wiki), and feel free to offer suggestions for improved service.  We hope and expect your experience in using&lt;br /&gt;
our systems will be predictably good and productive.&lt;br /&gt;
&lt;br /&gt;
== User Manual ==&lt;br /&gt;
&lt;br /&gt;
An old version of the user manual can be downloaded at: http://cunyhpc.csi.cuny.edu/publications/User_Manual.pdf.  Note that this manual provides PBS batch scripts as examples. CUNY-HPCC now uses SLURM, so users must consult the brief SLURM manual distributed with new accounts or request a copy from CUNY-HPCC.&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=157</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=157"/>
		<updated>2022-11-07T17:40:51Z</updated>

		<summary type="html">&lt;p&gt;James: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:CUNY-HPCC-HEADER-LOGO.jpg]]&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
[[Image:hpcc-panorama3.png]]&lt;br /&gt;
&lt;br /&gt;
The City University of New York (CUNY) High Performance Computing Center (HPCC) is located on the&lt;br /&gt;
campus of the College of Staten Island, 2800 Victory Boulevard, Staten Island, New York 10314.  HPCC&lt;br /&gt;
goals are to: &lt;br /&gt;
&lt;br /&gt;
:*Support the scientific computing needs of CUNY faculty, their collaborators at other universities, and their public and private sector partners, and CUNY students and research staff.&lt;br /&gt;
:*Create opportunities for the CUNY research community to develop new partnerships with the government and private sectors; and&lt;br /&gt;
:*Leverage the HPC Center capabilities to acquire additional research resources for its faculty and graduate students in existing and major new programs.&lt;br /&gt;
&lt;br /&gt;
==Organization of systems and data storage (architecture)==&lt;br /&gt;
&lt;br /&gt;
All user and project data are kept on the Data Storage and Management System (&#039;&#039;&#039;DSMS&#039;&#039;&#039;), which is mounted only on the login node(s) of all servers. Consequently, no jobs can be started directly from &#039;&#039;&#039;DSMS&#039;&#039;&#039; storage. Instead, all jobs must be submitted from a separate (fast but small) &#039;&#039;&#039;/scratch&#039;&#039;&#039; file system mounted on all computational nodes and all login nodes. As the name suggests, the &#039;&#039;&#039;/scratch&#039;&#039;&#039; file system is neither a home directory for accounts nor a place for long-term data preservation. Users must follow the &amp;quot;staging&amp;quot; procedure described below to ensure preservation of their data, codes, and parameter files. The figure below is a schematic of the environment.&lt;br /&gt;
&lt;br /&gt;
Upon registering with the HPCC, every user receives 2 directories:&lt;br /&gt;
&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – this is temporary workspace on the HPC systems&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – space for “home directory”, i.e., storage space on the DSMS for program, scripts, and data&lt;br /&gt;
&lt;br /&gt;
:•	In some instances a user will also have use of disk space on the DSMS in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;projectid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; (IRods). &lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;/global/u/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory has a quota (see below for details), while &#039;&#039;&#039;/scratch/&amp;lt;userid&amp;gt;&#039;&#039;&#039; does not. However, the &#039;&#039;&#039;/scratch&#039;&#039;&#039; space is cleaned up following the rules described below. There are no guarantees of any kind that files in &#039;&#039;&#039;/scratch&#039;&#039;&#039; will survive hardware crashes or cleanup. Access to all HPCC resources is provided by a bastion host called &#039;&#039;&#039;Chizen&#039;&#039;&#039;. The Data Transfer Node, called &#039;&#039;&#039;Cea&#039;&#039;&#039;, allows file transfer from/to remote sites directly to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; or to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;.         [[Image:HPCC_structure.png|frameless|900x900px]]&lt;br /&gt;
&lt;br /&gt;
==HPC systems==&lt;br /&gt;
&lt;br /&gt;
The HPC Center operates a variety of architectures in order to support complex and demanding workflows. The deployed systems include distributed memory computers (also referred to as “clusters”), symmetric multiprocessor (SMP) servers, and distributed shared memory (NUMA) machines.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Computational Systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SMP&#039;&#039;&#039; servers have several processors (working under a single operating system) which &amp;quot;share everything&amp;quot;: all CPU cores access a common memory block via a shared bus or data path. SMP servers support all combinations of memory vs. CPU (up to the limits of the particular computer). SMP servers are commonly used to run sequential or thread-parallel (e.g. OpenMP) jobs, and they may or may not have GPUs. Currently, HPCC operates 3 SMP servers named &#039;&#039;&#039;Math, Cryo&#039;&#039;&#039; and &#039;&#039;&#039;Karle&#039;&#039;&#039;. Karle has no GPU and is used for visualization, visual analytics, and interactive MATLAB/Mathematica jobs. &#039;&#039;&#039;Math&#039;&#039;&#039; is a condominium server, also without GPUs. Cryo (a CPU+GPU server) is a specialized server designed to support large-scale multi-core, multi-GPU jobs.&lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;cluster&#039;&#039;&#039; is a single system comprising a set of SMP servers interconnected with a high-performance network. Specific software coordinates programs on and across these servers in order to perform computationally intensive tasks. Each SMP member of the cluster is called a node. All nodes run independent copies of the same operating system (OS). Some or all of the nodes may incorporate GPUs. The main cluster at HPCC is a hybrid (CPU+GPU) cluster called &#039;&#039;&#039;Penzias&#039;&#039;&#039;. Sixty-six (66) of the Penzias nodes have 2 K20m GPUs each, while the 3 fat nodes of the cluster (nodes with a large number of CPU cores and memory) have none. In addition, HPCC operates the cluster &#039;&#039;&#039;Herbert&#039;&#039;&#039;, dedicated exclusively to education.&lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;distributed shared memory&#039;&#039;&#039; computer is a tightly coupled server in which the memory is physically distributed but logically unified as a single block. The system resembles an SMP, but the possible number of CPU cores and amount of memory far exceed the limitations of SMP. Because the memory is distributed, access times across the address space are non-uniform; hence this architecture is called Non-Uniform Memory Access (NUMA). Like SMP systems, &#039;&#039;&#039;NUMA&#039;&#039;&#039; systems are typically used for applications such as data mining and decision support, in which processing can be parceled out to a number of processors that collectively work on common data. HPCC operates a &#039;&#039;&#039;NUMA&#039;&#039;&#039; server called &#039;&#039;&#039;Appel&#039;&#039;&#039;. This server does not have GPUs.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039; Infrastructure systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
o	The Master Head Node (&#039;&#039;&#039;MHN&#039;&#039;&#039;) is a redundant login node from which all jobs on all servers are started. This server is not directly accessible from outside the CSI campus.&lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Chizen&#039;&#039;&#039; is a redundant gateway server which provides access to protected HPCC domain.&lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Cea&#039;&#039;&#039; is a file transfer node allowing transfer of files between users’ computers and the /scratch space or /global/u/&amp;lt;userid&amp;gt;. &#039;&#039;&#039;Cea&#039;&#039;&#039; is accessible directly (not only via &#039;&#039;&#039;Chizen&#039;&#039;&#039;), but allows only a limited set of shell commands.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Table 1&#039;&#039;&#039; below provides a quick summary of the attributes of each of the systems available at the HPC Center.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!System&lt;br /&gt;
!Type&lt;br /&gt;
!Type of Jobs&lt;br /&gt;
!Nodes&lt;br /&gt;
!Cores/node &amp;amp; GPU&lt;br /&gt;
!Mem/node&lt;br /&gt;
!Mem/core&lt;br /&gt;
!Multi-core Processor&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Penzias&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Hybrid Cluster&lt;br /&gt;
|Sequential &amp;amp; Parallel jobs w/wo GPU&lt;br /&gt;
|66&lt;br /&gt;
|16 &lt;br /&gt;
2xK20m GPU, PCIe&lt;br /&gt;
|64 GB&lt;br /&gt;
|4 GB&lt;br /&gt;
|Sandy Bridge, EP 2.20 GHz&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Sequential &amp;amp; Parallel jobs&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |1&lt;br /&gt;
|24&lt;br /&gt;
|1500 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Haswell, 2.30 GHz&lt;br /&gt;
|-&lt;br /&gt;
|36&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|-&lt;br /&gt;
|24&lt;br /&gt;
|768 GB&lt;br /&gt;
|32 GB&lt;br /&gt;
|-&lt;br /&gt;
|Appel&lt;br /&gt;
|NUMA&lt;br /&gt;
|Massive Parallel, sequential&lt;br /&gt;
|1&lt;br /&gt;
|384&lt;br /&gt;
|11 TB&lt;br /&gt;
|28 GB&lt;br /&gt;
|Ivy Bridge, 3 GHz&lt;br /&gt;
|-&lt;br /&gt;
|Cryo&lt;br /&gt;
|SMP&lt;br /&gt;
|Sequential and Parallel jobs, with GPU&lt;br /&gt;
|1&lt;br /&gt;
|40&lt;br /&gt;
8xV100 (32GB) GPU, SXM&lt;br /&gt;
|1500 GB&lt;br /&gt;
|37 GB&lt;br /&gt;
|Skylake, 2.40 GHz&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Blue Moon&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Hybrid Cluster&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Sequential and Parallel jobs w/wo GPU&lt;br /&gt;
|24&lt;br /&gt;
|32&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |192 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |6 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Skylake, 2.10 GHz&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|32 &lt;br /&gt;
2xV100(16GB) GPU, PCIe&lt;br /&gt;
|-&lt;br /&gt;
|Karle&lt;br /&gt;
|SMP&lt;br /&gt;
|Visualization, MATLAB/Mathematica&lt;br /&gt;
|1&lt;br /&gt;
|36*&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|Haswell, 2.30 GHz&lt;br /&gt;
|-&lt;br /&gt;
|Chizen&lt;br /&gt;
|Gateway&lt;br /&gt;
|2&lt;br /&gt;
| colspan=&amp;quot;5&amp;quot; rowspan=&amp;quot;2&amp;quot; |NA&lt;br /&gt;
|-&lt;br /&gt;
|MHN&lt;br /&gt;
|Login Nodes&lt;br /&gt;
|2&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Partitions and jobs==&lt;br /&gt;
The only way to submit job(s) to HPCC servers is through the SLURM batch system. Every job, regardless of its type (interactive, batch, serial, parallel, etc.), must be submitted via SLURM, which allocates the requested resources on the proper server and starts the job(s) according to a predefined strict fair-share policy. Computational resources (CPU cores, memory, GPUs) are organized in partitions. The main partition is called production. This is a routing partition which distributes jobs among several sub-partitions depending on the job’s requirements; thus a serial job submitted to &#039;&#039;&#039;production&#039;&#039;&#039; will land in the &#039;&#039;&#039;partsequential&#039;&#039;&#039; partition. No PBS Pro scripts should ever be used, and all existing PBS scripts must be converted to SLURM before use. The table below shows the limits of each partition.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Partition&lt;br /&gt;
!Max cores/job&lt;br /&gt;
!Max jobs/user&lt;br /&gt;
!Total cores/group&lt;br /&gt;
!Time limits&lt;br /&gt;
|-&lt;br /&gt;
|production&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|-&lt;br /&gt;
|partedu&lt;br /&gt;
|16&lt;br /&gt;
|2&lt;br /&gt;
|216&lt;br /&gt;
|72 Hours&lt;br /&gt;
|-&lt;br /&gt;
|partcryo&lt;br /&gt;
|40&lt;br /&gt;
|40&lt;br /&gt;
|40&lt;br /&gt;
|240 Hours&lt;br /&gt;
|-&lt;br /&gt;
|partmath&lt;br /&gt;
|128&lt;br /&gt;
|128&lt;br /&gt;
|128&lt;br /&gt;
|240 Hours&lt;br /&gt;
|-&lt;br /&gt;
|partmatlab&lt;br /&gt;
|1972&lt;br /&gt;
|50&lt;br /&gt;
|1972&lt;br /&gt;
|240 Hours&lt;br /&gt;
|-&lt;br /&gt;
|partdev&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|4 Hours&lt;br /&gt;
|}&lt;br /&gt;
o	&#039;&#039;&#039;production&#039;&#039;&#039; is the main partition, with assigned resources across all servers (except Math and Cryo). It is a routing partition, so the actual job(s) will be placed in the proper sub-partition automatically. Users may submit sequential, thread-parallel, or distributed-parallel jobs with or without GPUs.&lt;br /&gt;
 &lt;br /&gt;
o	&#039;&#039;&#039;partedu&#039;&#039;&#039; is a partition reserved for education. Its resources are on the educational server Herbert. Partedu is accessible only to students (graduate and/or undergraduate) and professors who are registered for a class supported by HPCC. Access to this partition is limited to the duration of the class. &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;partcryo&#039;&#039;&#039; is the partition used to start jobs on the CRYO server. Users whose projects require and/or benefit from the availability of 8 GPUs interconnected via the SXM interface (not PCIe) must apply for access to this partition at hpchelp@csi.cuny.edu. &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;partmatlab&#039;&#039;&#039; allows users to run MATLAB&#039;s Distributed Parallel Server across the main cluster. Note, however, that parallel toolbox programs can also be submitted via the production partition, but only as thread-parallel jobs. &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;partdev&#039;&#039;&#039; is dedicated to development. All HPCC users have access to this partition, whose assigned resources are one computational node with 16 cores, 64 GB of memory, and 2 GPUs (K20m). This partition has a time limit of 4 hours.&lt;br /&gt;
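The partition rules above translate into a short batch script. The sketch below writes a minimal SLURM script for the &#039;&#039;&#039;production&#039;&#039;&#039; partition; the job name and all resource values are illustrative assumptions, not site-mandated settings, and should be adjusted to your job within the limits in the table above.&lt;br /&gt;

```shell
# A minimal SLURM batch script for the production partition; all resource
# values below are illustrative assumptions, not site-mandated settings.
printf '%s\n' \
  '#!/bin/bash' \
  '#SBATCH --job-name=myjob' \
  '#SBATCH --partition=production' \
  '#SBATCH --ntasks=1' \
  '#SBATCH --mem=4G' \
  '#SBATCH --time=01:00:00' \
  'srun ./a.out' > myjob.slurm
cat myjob.slurm
```

Submit the script with sbatch myjob.slurm and check its state with squeue -u $USER; SLURM routes it from production to the proper sub-partition automatically.&lt;br /&gt;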
&lt;br /&gt;
== Hours of Operation ==&lt;br /&gt;
The second and fourth Tuesday mornings of the month, from 8:00 AM to 12:00 PM, are normally reserved (but not always used) for scheduled maintenance.  Please plan accordingly.  &amp;lt;br/ &amp;gt;&lt;br /&gt;
Unplanned maintenance to remedy system related problems may be scheduled as needed.  Reasonable attempts will be made to inform users running on those systems when these needs arise.&lt;br /&gt;
&lt;br /&gt;
== User Support ==&lt;br /&gt;
Users are encouraged to read this Wiki carefully.  In particular, the sections on compiling and running&lt;br /&gt;
parallel programs, and the section on the SLURM batch queueing system will give you the essential&lt;br /&gt;
knowledge needed to use the CUNY HPC Center systems.  We have strived to maintain the most uniform&lt;br /&gt;
user applications environment possible across the Center&#039;s systems to ease the transfer of applications&lt;br /&gt;
and run scripts among them.  Still, there are some differences, particularly with the SGI (ANDY) and Cray (SALK)&lt;br /&gt;
systems.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY&lt;br /&gt;
community in parallel programming techniques, HPC computing architecture, and the essentials of using our&lt;br /&gt;
systems. Please follow our mailings on the subject and feel free to inquire about such courses.  We regularly&lt;br /&gt;
schedule training visits and classes at the various CUNY campuses.  Please let us know if such a training visit&lt;br /&gt;
is of interest.  In the past, topics have included an overview of parallel programming, GPU programming and&lt;br /&gt;
architecture, using the evolutionary biology software at the HPC Center,  the SLURM queueing system at the&lt;br /&gt;
CUNY HPC Center, Mixed GPU-MPI and OpenMP programming, etc.  Staff has also presented guest lectures&lt;br /&gt;
at formal classes throughout the CUNY campuses.  &lt;br /&gt;
&lt;br /&gt;
Users with further questions or requiring immediate assistance in use of the systems should create a ticket using their HPC account login at:&lt;br /&gt;
&lt;br /&gt;
   [https://hpchelp.csi.cuny.edu hpchelp.csi.cuny.edu]&lt;br /&gt;
&lt;br /&gt;
If you have problems accessing your account and cannot login to the ticketing service, please send an email to:&lt;br /&gt;
&lt;br /&gt;
  [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu] &lt;br /&gt;
&lt;br /&gt;
== Warnings and modes of operation ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. hpchelp@csi.cuny.edu is for questions and account-help communication &#039;&#039;&#039;only&#039;&#039;&#039; and does not accept tickets. For tickets, please use the ticketing system mentioned above. This ensures that the staff member with the most appropriate skill set and job-related responsibility will respond to your questions. During the business week you should expect a response within 48 hours, often even the same day. During the weekend you may not get any response. &lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;&#039;E-mails to hpchelp@csi.cuny.edu must have a valid CUNY e-mail as the reply address.&#039;&#039;&#039; Messages originating from public mail providers (Gmail, Hotmail, etc.) are filtered out.&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;&#039;Do not send questions to individual CUNY HPC Center staff members directly.&#039;&#039;&#039;  These will be returned to the sender with a polite request to submit a ticket or email the Helpline.  This applies to replies to initial questions as well.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff members are focused on providing high-quality support to the user community, but compared&lt;br /&gt;
to other HPC Centers of similar size &#039;&#039;&#039;our staff is extremely lean&#039;&#039;&#039;.  Please make full use of the tools that we have provided (especially&lt;br /&gt;
the Wiki), and feel free to offer suggestions for improved service.  We hope and expect your experience in using&lt;br /&gt;
our systems will be predictably good and productive.&lt;br /&gt;
&lt;br /&gt;
== User Manual ==&lt;br /&gt;
&lt;br /&gt;
An old version of the user manual can be downloaded at: http://cunyhpc.csi.cuny.edu/publications/User_Manual.pdf.  Note that this manual provides PBS batch scripts as examples. CUNY-HPCC currently uses SLURM, so users should consult the brief SLURM manual distributed with new accounts or ask CUNY-HPCC for a copy.&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=File:HPCC_structure.png&amp;diff=156</id>
		<title>File:HPCC structure.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=File:HPCC_structure.png&amp;diff=156"/>
		<updated>2022-11-07T17:37:46Z</updated>

		<summary type="html">&lt;p&gt;James: James uploaded a new version of File:HPCC structure.png&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=155</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=155"/>
		<updated>2022-11-07T17:35:37Z</updated>

		<summary type="html">&lt;p&gt;James: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:CUNY-HPCC-HEADER-LOGO.jpg]]&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
[[Image:hpcc-panorama3.png]]&lt;br /&gt;
&lt;br /&gt;
The City University of New York (CUNY) High Performance Computing Center (HPCC) is located on the&lt;br /&gt;
campus of the College of Staten Island, 2800 Victory Boulevard, Staten Island, New York 10314.  HPCC&lt;br /&gt;
goals are to: &lt;br /&gt;
&lt;br /&gt;
:*Support the scientific computing needs of CUNY faculty, their collaborators at other universities, and their public and private sector partners, and CUNY students and research staff;&lt;br /&gt;
:*Create opportunities for the CUNY research community to develop new partnerships with the government and private sectors; and&lt;br /&gt;
:*Leverage the HPC Center capabilities to acquire additional research resources for its faculty and graduate students in existing and major new programs.&lt;br /&gt;
&lt;br /&gt;
==Organization of systems and data storage (architecture)==&lt;br /&gt;
&lt;br /&gt;
All user data and project data are kept on the Data Storage and Management System (&#039;&#039;&#039;DSMS&#039;&#039;&#039;), which is mounted only on the login node(s) of all servers. Consequently, no jobs can be started directly from &#039;&#039;&#039;DSMS&#039;&#039;&#039; storage.  Instead, all jobs must be submitted from a separate (fast but small) &#039;&#039;&#039;/scratch&#039;&#039;&#039; file system mounted on all computational nodes and on all login nodes.  As the name suggests, the &#039;&#039;&#039;/scratch&#039;&#039;&#039; file system is not a home directory for accounts, nor can it be used for long-term data preservation.  Users must use the &amp;quot;staging&amp;quot; procedure described below to ensure preservation of their data, codes, and parameter files. The figure below is a schematic of the environment.   &lt;br /&gt;
&lt;br /&gt;
Upon registering with the HPCC, every user receives two directories:&lt;br /&gt;
&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – this is temporary workspace on the HPC systems&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – space for “home directory”, i.e., storage space on the DSMS for program, scripts, and data&lt;br /&gt;
&lt;br /&gt;
:•	In some instances a user will also have use of disk space on the DSMS in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;projectid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; (IRods). &lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;/global/u/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory has a quota (see below for details), while &#039;&#039;&#039;/scratch/&amp;lt;userid&amp;gt;&#039;&#039;&#039; does not. However, the &#039;&#039;&#039;/scratch&#039;&#039;&#039; space is cleaned up following the rules described below. There are no guarantees of any kind that files in &#039;&#039;&#039;/scratch&#039;&#039;&#039; will be preserved through hardware crashes or cleanup.  Access to all HPCC resources is provided through a bastion host called &#039;&#039;&#039;Chizen&#039;&#039;&#039;.  The Data Transfer Node, called &#039;&#039;&#039;Cea&#039;&#039;&#039;, allows file transfer from/to remote sites directly to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; or to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
                                   [[Image:HPCC_structure.png]]&lt;br /&gt;
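The staging procedure described above can be sketched as a few shell commands. The paths follow the conventions on this page; the user id jdoe and the directory names are hypothetical, and to keep the sketch runnable off-cluster it substitutes temporary directories for /scratch and /global/u.&lt;br /&gt;

```shell
# Staging sketch: copy input files from the DSMS home directory to /scratch
# before submitting a job.  SCRATCH and GLOBAL_U stand in for /scratch and
# /global/u so the sketch can run anywhere; on the cluster use the real paths.
SCRATCH=$(mktemp -d)          # stands in for /scratch
GLOBAL_U=$(mktemp -d)         # stands in for /global/u
USERID=jdoe                   # hypothetical user id
mkdir -p "$GLOBAL_U/$USERID/myProject"
echo "sample input" > "$GLOBAL_U/$USERID/myProject/mydatafile"
# The actual staging steps: make a job directory in scratch and copy inputs in.
mkdir -p "$SCRATCH/$USERID/myJob"
cd "$SCRATCH/$USERID/myJob"
cp "$GLOBAL_U/$USERID/myProject/mydatafile" ./
ls
```

After the job finishes, copy the results back from /scratch to /global/u (or to SR1 via iRODS), since /scratch is purged periodically.&lt;br /&gt;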
&lt;br /&gt;
==HPC systems==&lt;br /&gt;
&lt;br /&gt;
The HPC Center operates a variety of architectures in order to support complex and demanding workflows.  The deployed systems include distributed memory computers (also referred to as “clusters”), symmetric multiprocessors (also referred to as SMP), and distributed shared memory systems (also referred to as NUMA machines).  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Computational Systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SMP&#039;&#039;&#039; servers have several processors (working under a single operating system) which &amp;quot;share everything&amp;quot;: all CPU cores allocate from a common memory block via a shared bus or data path. SMP servers support all combinations of memory vs. CPU (up to the limits of the particular computer). SMP servers are commonly used to run sequential or thread-parallel (e.g. OpenMP) jobs, and they may or may not have GPUs. Currently, HPCC operates 3 SMP servers named &#039;&#039;&#039;Math, Cryo&#039;&#039;&#039; and &#039;&#039;&#039;Karle&#039;&#039;&#039;. Karle is a server without GPUs, used for visualization, visual analytics, and interactive MATLAB/Mathematica jobs. &#039;&#039;&#039;Math&#039;&#039;&#039; is a condominium server, also without GPUs. Cryo (a CPU+GPU server) is a specialized server designed to support large-scale multi-core, multi-GPU jobs. &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;cluster&#039;&#039;&#039; is a single system comprising a set of SMP servers interconnected with a high-performance network. Specific software coordinates programs on and/or across these servers in order to perform computationally intensive tasks. Each SMP member of the cluster is called a node. All nodes run independent copies of the same operating system (OS). Some or all of the nodes may incorporate GPUs.  The main cluster at HPCC is a hybrid (CPU+GPU) cluster called &#039;&#039;&#039;Penzias&#039;&#039;&#039;.  Sixty-six (66) of Penzias&#039;s nodes have 2 K20m GPUs each, while the 3 fat nodes (nodes with a large number of CPU cores and memory) of the cluster do not have GPUs.   In addition, HPCC operates the cluster &#039;&#039;&#039;Herbert&#039;&#039;&#039;, dedicated exclusively to education. &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;distributed shared memory&#039;&#039;&#039; computer is a tightly coupled server in which the memory is physically distributed but logically unified as a single block. The system resembles an SMP, but the possible number of CPU cores and amount of memory are far beyond the limitations of SMP.  Because the memory is distributed, access times across the address space are non-uniform; thus, this architecture is called Non-Uniform Memory Access (NUMA) architecture.  Similarly to SMP, &#039;&#039;&#039;NUMA&#039;&#039;&#039; systems are typically used for applications such as data mining and decision support systems, in which processing can be parceled out to a number of processors that collectively work on common data. HPCC operates a &#039;&#039;&#039;NUMA&#039;&#039;&#039; server called &#039;&#039;&#039;Appel&#039;&#039;&#039;.  This server does not have GPUs. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039; Infrastructure systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
o	The Master Head Node (&#039;&#039;&#039;MHN&#039;&#039;&#039;) is a redundant login node from which all jobs on all servers start. This server is not directly accessible from outside the CSI campus. &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Chizen&#039;&#039;&#039; is a redundant gateway server which provides access to the protected HPCC domain.&lt;br /&gt;
&lt;br /&gt;
o	 &#039;&#039;&#039;Cea&#039;&#039;&#039; is a file transfer node allowing transfer of files between users&#039; computers and the /scratch space or /global/u/&amp;lt;userid&amp;gt;. &#039;&#039;&#039;Cea&#039;&#039;&#039; is accessible directly (not only via &#039;&#039;&#039;Chizen&#039;&#039;&#039;), but allows only a limited set of shell commands.  &lt;br /&gt;
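As an illustration of transferring files through &#039;&#039;&#039;Cea&#039;&#039;&#039;, the sketch below builds typical scp command lines for pushing and pulling files. The hostname cea.csi.cuny.edu and the user id jdoe are assumptions for the example only; check with HPCC support for the actual address of the data transfer node.&lt;br /&gt;

```shell
# Sketch only: builds the scp command lines without contacting any server.
# cea.csi.cuny.edu is an assumed hostname and jdoe a hypothetical user id.
USERID=jdoe
DTN=cea.csi.cuny.edu
# Push a local file to your scratch space through the data transfer node:
PUSH="scp results.tar.gz ${USERID}@${DTN}:/scratch/${USERID}/"
# Pull a file from your DSMS home directory:
PULL="scp ${USERID}@${DTN}:/global/u/${USERID}/mydatafile ."
echo "$PUSH"
echo "$PULL"
```

Since Cea permits only a limited set of shell commands, stick to transfer tools such as scp, sftp, or rsync rather than interactive sessions.&lt;br /&gt;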
&lt;br /&gt;
&#039;&#039;&#039;Table 1&#039;&#039;&#039; below provides a quick summary of the attributes of each of the systems available at the HPC Center.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!System&lt;br /&gt;
!Type&lt;br /&gt;
!Type of Jobs&lt;br /&gt;
!Nodes&lt;br /&gt;
!Cores/node &amp;amp; GPU&lt;br /&gt;
!Mem/node&lt;br /&gt;
!Mem/core&lt;br /&gt;
!Multi-core Processor&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Penzias&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Hybrid Cluster&lt;br /&gt;
|Sequential &amp;amp; Parallel jobs w/wo GPU&lt;br /&gt;
|66&lt;br /&gt;
|16 &lt;br /&gt;
2xK20m GPU, PCIe&lt;br /&gt;
|64 GB&lt;br /&gt;
|4 GB&lt;br /&gt;
|Sandy Bridge, EP 2.20 GHz&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Sequential &amp;amp; Parallel jobs&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |1&lt;br /&gt;
|24&lt;br /&gt;
|1500 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Haswell, 2.30 GHz&lt;br /&gt;
|-&lt;br /&gt;
|36&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|-&lt;br /&gt;
|24&lt;br /&gt;
|768 GB&lt;br /&gt;
|32 GB&lt;br /&gt;
|-&lt;br /&gt;
|Appel&lt;br /&gt;
|NUMA&lt;br /&gt;
|Massive Parallel, sequential&lt;br /&gt;
|1&lt;br /&gt;
|384&lt;br /&gt;
|11 TB&lt;br /&gt;
|28 GB&lt;br /&gt;
|Ivy Bridge, 3 GHz&lt;br /&gt;
|-&lt;br /&gt;
|Cryo&lt;br /&gt;
|SMP&lt;br /&gt;
|Sequential and Parallel jobs, with GPU&lt;br /&gt;
|1&lt;br /&gt;
|40 &lt;br /&gt;
8xV100 (32GB) GPU, SXM&lt;br /&gt;
|1500 GB&lt;br /&gt;
|37 GB&lt;br /&gt;
|Skylake, 2.40 GHz&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Blue Moon&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Hybrid Cluster&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Sequential and Parallel jobs w/wo GPU&lt;br /&gt;
|24&lt;br /&gt;
|32&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |192 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |6 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Skylake, 2.10 GHz&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|32 &lt;br /&gt;
2xV100(16GB) GPU, PCIe&lt;br /&gt;
|-&lt;br /&gt;
|Karle&lt;br /&gt;
|SMP&lt;br /&gt;
|Visualization, MATLAB/Mathematica&lt;br /&gt;
|1&lt;br /&gt;
|36*&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|Haswell, 2.30 GHz&lt;br /&gt;
|-&lt;br /&gt;
|Chizen&lt;br /&gt;
|Gateway&lt;br /&gt;
|2&lt;br /&gt;
| colspan=&amp;quot;5&amp;quot; rowspan=&amp;quot;2&amp;quot; |NA&lt;br /&gt;
|-&lt;br /&gt;
|MHN&lt;br /&gt;
|Login Nodes&lt;br /&gt;
|2&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Partitions and jobs==&lt;br /&gt;
The only way to submit jobs to HPCC servers is through the SLURM batch system.  Every job, regardless of its type (interactive, batch, serial, parallel, etc.), must be submitted via SLURM, which allocates the requested resources on the proper server and starts the jobs according to a predefined strict fair-share policy. Computational resources (CPU cores, memory, GPUs) are organized into partitions. The main partition is called production. This is a routing partition that distributes jobs among several sub-partitions depending on the job&#039;s requirements. Thus, a serial job submitted to &#039;&#039;&#039;production&#039;&#039;&#039; will land in the &#039;&#039;&#039;partsequential&#039;&#039;&#039; partition.  No PBS Pro scripts should ever be used; all existing PBS scripts must be converted to SLURM before use. The table below shows the limitations of the partitions.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Partition&lt;br /&gt;
!Max cores/job&lt;br /&gt;
!Max jobs/user&lt;br /&gt;
!Total cores/group&lt;br /&gt;
!Time limits&lt;br /&gt;
|-&lt;br /&gt;
|production&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|-&lt;br /&gt;
|partedu&lt;br /&gt;
|16&lt;br /&gt;
|2&lt;br /&gt;
|216&lt;br /&gt;
|72 Hours&lt;br /&gt;
|-&lt;br /&gt;
|partcryo&lt;br /&gt;
|40&lt;br /&gt;
|40&lt;br /&gt;
|40&lt;br /&gt;
|240 Hours&lt;br /&gt;
|-&lt;br /&gt;
|partmath&lt;br /&gt;
|128&lt;br /&gt;
|128&lt;br /&gt;
|128&lt;br /&gt;
|240 Hours&lt;br /&gt;
|-&lt;br /&gt;
|partmatlab&lt;br /&gt;
|1972&lt;br /&gt;
|50&lt;br /&gt;
|1972&lt;br /&gt;
|240 Hours&lt;br /&gt;
|-&lt;br /&gt;
|partdev&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|4 Hours&lt;br /&gt;
|}&lt;br /&gt;
o	&#039;&#039;&#039;production&#039;&#039;&#039; is the main partition, with resources assigned across all servers (except Math and Cryo). It is a routing partition, so jobs are placed in the proper sub-partition automatically. Users may submit sequential, thread-parallel, or distributed-parallel jobs with or without GPUs.&lt;br /&gt;
 &lt;br /&gt;
o	&#039;&#039;&#039;partedu&#039;&#039;&#039; is a partition reserved for education. Its resources are on the educational server Herbert. Partedu is accessible only to students (graduate and/or undergraduate) and professors who are registered for a class supported by HPCC. Access to this partition is limited to the duration of the class. &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;partcryo&#039;&#039;&#039; is the partition used to start jobs on the CRYO server. Users whose projects require and/or benefit from the availability of 8 GPUs interconnected via the SXM interface (not PCIe) must apply for access to this partition at hpchelp@csi.cuny.edu. &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;partmatlab&#039;&#039;&#039; allows users to run MATLAB&#039;s Distributed Parallel Server across the main cluster. Note, however, that parallel toolbox programs can also be submitted via the production partition, but only as thread-parallel jobs. &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;partdev&#039;&#039;&#039; is dedicated to development. All HPCC users have access to this partition, whose assigned resources are one computational node with 16 cores, 64 GB of memory, and 2 GPUs (K20m). This partition has a time limit of 4 hours.&lt;br /&gt;
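The partition rules above translate into a short batch script. The sketch below writes a minimal SLURM script for the &#039;&#039;&#039;production&#039;&#039;&#039; partition; the job name and all resource values are illustrative assumptions, not site-mandated settings, and should be adjusted to your job within the limits in the table above.&lt;br /&gt;

```shell
# A minimal SLURM batch script for the production partition; all resource
# values below are illustrative assumptions, not site-mandated settings.
printf '%s\n' \
  '#!/bin/bash' \
  '#SBATCH --job-name=myjob' \
  '#SBATCH --partition=production' \
  '#SBATCH --ntasks=1' \
  '#SBATCH --mem=4G' \
  '#SBATCH --time=01:00:00' \
  'srun ./a.out' > myjob.slurm
cat myjob.slurm
```

Submit the script with sbatch myjob.slurm and check its state with squeue -u $USER; SLURM routes it from production to the proper sub-partition automatically.&lt;br /&gt;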
&lt;br /&gt;
== Hours of Operation ==&lt;br /&gt;
The second and fourth Tuesday mornings of the month, from 8:00 AM to 12:00 PM, are normally reserved (but not always used) for scheduled maintenance.  Please plan accordingly.  &amp;lt;br/ &amp;gt;&lt;br /&gt;
Unplanned maintenance to remedy system related problems may be scheduled as needed.  Reasonable attempts will be made to inform users running on those systems when these needs arise.&lt;br /&gt;
&lt;br /&gt;
== User Support ==&lt;br /&gt;
Users are encouraged to read this Wiki carefully.  In particular, the sections on compiling and running&lt;br /&gt;
parallel programs, and the section on the SLURM batch queueing system will give you the essential&lt;br /&gt;
knowledge needed to use the CUNY HPC Center systems.  We have strived to maintain the most uniform&lt;br /&gt;
user applications environment possible across the Center&#039;s systems to ease the transfer of applications&lt;br /&gt;
and run scripts among them.  Still, there are some differences, particularly with the SGI (ANDY) and Cray (SALK)&lt;br /&gt;
systems.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY&lt;br /&gt;
community in parallel programming techniques, HPC computing architecture, and the essentials of using our&lt;br /&gt;
systems. Please follow our mailings on the subject and feel free to inquire about such courses.  We regularly&lt;br /&gt;
schedule training visits and classes at the various CUNY campuses.  Please let us know if such a training visit&lt;br /&gt;
is of interest.  In the past, topics have included an overview of parallel programming, GPU programming and&lt;br /&gt;
architecture, using the evolutionary biology software at the HPC Center,  the SLURM queueing system at the&lt;br /&gt;
CUNY HPC Center, Mixed GPU-MPI and OpenMP programming, etc.  Staff has also presented guest lectures&lt;br /&gt;
at formal classes throughout the CUNY campuses.  &lt;br /&gt;
&lt;br /&gt;
Users with further questions or requiring immediate assistance in use of the systems should create a ticket using their HPC account login at:&lt;br /&gt;
&lt;br /&gt;
   [https://hpchelp.csi.cuny.edu hpchelp.csi.cuny.edu]&lt;br /&gt;
&lt;br /&gt;
If you have problems accessing your account and cannot login to the ticketing service, please send an email to:&lt;br /&gt;
&lt;br /&gt;
  [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu] &lt;br /&gt;
&lt;br /&gt;
== Warnings and modes of operation ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. hpchelp@csi.cuny.edu is for questions and account-help communication &#039;&#039;&#039;only&#039;&#039;&#039; and does not accept tickets. For tickets, please use the ticketing system mentioned above. This ensures that the staff member with the most appropriate skill set and job-related responsibility will respond to your questions. During the business week you should expect a response within 48 hours, often even the same day. During the weekend you may not get any response. &lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;&#039;E-mails to hpchelp@csi.cuny.edu must have a valid CUNY e-mail as the reply address.&#039;&#039;&#039; Messages originating from public mail providers (Gmail, Hotmail, etc.) are filtered out.&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;&#039;Do not send questions to individual CUNY HPC Center staff members directly.&#039;&#039;&#039;  These will be returned to the sender with a polite request to submit a ticket or email the Helpline.  This applies to replies to initial questions as well.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff members are focused on providing high-quality support to the user community, but compared&lt;br /&gt;
to other HPC Centers of similar size &#039;&#039;&#039;our staff is extremely lean&#039;&#039;&#039;.  Please make full use of the tools that we have provided (especially&lt;br /&gt;
the Wiki), and feel free to offer suggestions for improved service.  We hope and expect your experience in using&lt;br /&gt;
our systems will be predictably good and productive.&lt;br /&gt;
&lt;br /&gt;
== User Manual ==&lt;br /&gt;
&lt;br /&gt;
An old version of the user manual can be downloaded at: http://cunyhpc.csi.cuny.edu/publications/User_Manual.pdf.  Note that this manual provides PBS batch scripts as examples. CUNY-HPCC currently uses SLURM, so users should consult the brief SLURM manual distributed with new accounts or ask CUNY-HPCC for a copy.&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=File:HPCC_structure.png&amp;diff=154</id>
		<title>File:HPCC structure.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=File:HPCC_structure.png&amp;diff=154"/>
		<updated>2022-11-07T17:35:03Z</updated>

		<summary type="html">&lt;p&gt;James: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Data_Storage_and_Management_System&amp;diff=153</id>
		<title>Data Storage and Management System</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Data_Storage_and_Management_System&amp;diff=153"/>
		<updated>2022-11-07T17:33:06Z</updated>

		<summary type="html">&lt;p&gt;James: Text replacement - &amp;quot;[pP][bB][sS]&amp;quot; to &amp;quot;SLURM&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Data Storage and Management System (DSMS)=&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
Key features of the &#039;&#039;&#039;DSMS&#039;&#039;&#039; system include:&lt;br /&gt;
&lt;br /&gt;
:•	&#039;&#039;&#039;User&#039;&#039;&#039; home directories in a standard Unix file system called /global/u.&lt;br /&gt;
:•	Enhanced parallel scratch space on the HPC systems.&lt;br /&gt;
:•	&#039;&#039;&#039;Project&#039;&#039;&#039; directories in an Integrated Rule-Oriented Data-management System (iRODS) managed resource.  Project directories exist in a “virtual file space” called &#039;&#039;&#039;cunyZone&#039;&#039;&#039; which contains a resource called &#039;&#039;&#039;Storage Resource 1 (SR1)&#039;&#039;&#039;.    For the purpose of this document, we will use the terminology SR1 to describe &#039;&#039;&#039;Project file space.&#039;&#039;&#039;&lt;br /&gt;
:•	Automated backups.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;DSMS&#039;&#039;&#039; is the HPC Center&#039;s primary file system and is accessible from all existing HPC systems except &#039;&#039;&#039;HERBERT&#039;&#039;&#039;. It will similarly be accessible from all future HPC systems.   &lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;DSMS&#039;&#039;&#039; provides a 3-level data storage infrastructure: the &#039;&#039;&#039;HOME&#039;&#039;&#039; filesystem, the &#039;&#039;&#039;SCRATCH&#039;&#039;&#039; filesystems, and &#039;&#039;&#039;SR1&#039;&#039;&#039; (a long-term storage resource).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;DSMS&#039;&#039;&#039; features are explained below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==&amp;quot;Home&amp;quot; directories are on &amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u&amp;lt;/font&amp;gt;==&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u&amp;lt;/font&amp;gt;&#039;&#039;&#039; is a standard Unix file system that holds the home directories of individual users. When users request and are granted an allocation of HPC resources, they are assigned a &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; and a 50 GB allocation of disk space for home directories on &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;. These &#039;&#039;&#039;home&#039;&#039;&#039; directories are on the &#039;&#039;&#039;DSMS&#039;&#039;&#039;, not on the HPC systems, but can be accessed from any Center system. All home directories are backed up on a weekly basis.&lt;br /&gt;
&lt;br /&gt;
==&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch&amp;lt;/font&amp;gt;==&lt;br /&gt;
Disk storage on the HPC systems is used only for &#039;&#039;&#039;scratch&#039;&#039;&#039; files.  &#039;&#039;&#039;scratch&#039;&#039;&#039; files are temporary and are &#039;&#039;&#039;&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;not backed up&amp;lt;/font color&amp;gt;&#039;&#039;&#039;.  &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch&amp;lt;/font&amp;gt;&#039;&#039;&#039; is used by jobs queued for or in execution.  Output from jobs may temporarily be located in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch&amp;lt;/font&amp;gt;&#039;&#039;&#039;.  &lt;br /&gt;
&lt;br /&gt;
In order to submit a job for execution, a user must &#039;&#039;&#039;stage&#039;&#039;&#039; or &#039;&#039;&#039;mount&#039;&#039;&#039; the files required by the job to &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch&amp;lt;/font&amp;gt;&#039;&#039;&#039; from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u&amp;lt;/font&amp;gt;&#039;&#039;&#039; using UNIX commands and/or from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;SR1&amp;lt;/font&amp;gt;&#039;&#039;&#039; using &#039;&#039;&#039;iRODS&#039;&#039;&#039; commands.&lt;br /&gt;
&lt;br /&gt;
Files in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch&amp;lt;/font&amp;gt;&#039;&#039;&#039; on a system are &#039;&#039;&#039;automatically purged&#039;&#039;&#039; when (1) usage reaches 70% of available space, or (2) file residence on scratch exceeds two weeks, whichever occurs first.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==“Project” directories==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;“Project”&#039;&#039;&#039; directories are managed through &#039;&#039;&#039;iRODS&#039;&#039;&#039; and accessible through iRODS commands, not standard UNIX commands.   In iRODS terminology, a “collection” is the equivalent of “directory”.&lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;“Project”&#039;&#039;&#039; is an activity that usually involves multiple users and/or many individual data files.  A &#039;&#039;&#039;“Project”&#039;&#039;&#039; is normally led by a “Principal Investigator” (PI), who is a faculty member or a research scientist.   The PI is the individual responsible to the University or a granting agency for the “Project”.  The PI has overall responsibility for “Project” data and “Project” data management. To establish a Project, the PI completes and submits the online “Project Application Form”.&lt;br /&gt;
&lt;br /&gt;
Additional information on the &#039;&#039;&#039;DSMS&#039;&#039;&#039; is available in Section 4 of the User Manual &amp;lt;br /&amp;gt;&lt;br /&gt;
http://www.csi.cuny.edu/cunyhpc/pdf/User_Manual.pdf&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Typical Workflow==&lt;br /&gt;
A typical workflow is described below:&lt;br /&gt;
&lt;br /&gt;
1. Copying files from a user’s home directory or from &#039;&#039;&#039;SR1&#039;&#039;&#039; to &#039;&#039;&#039;SCRATCH&#039;&#039;&#039;.&amp;lt;br /&amp;gt;&lt;br /&gt;
If working with &#039;&#039;&#039;HOME&#039;&#039;&#039;:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&lt;br /&gt;
   cd /scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&lt;br /&gt;
   mkdir &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;mySLURM_Job&amp;lt;/font&amp;gt; &amp;amp;&amp;amp; cd &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;mySLURM_Job&amp;lt;/font&amp;gt;&lt;br /&gt;
   cp /global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font&amp;gt;/a.out ./&lt;br /&gt;
   cp /global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font&amp;gt;/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;mydatafile &amp;lt;/font&amp;gt;./&lt;br /&gt;
&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If working with &#039;&#039;&#039;SR1&#039;&#039;&#039;:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&lt;br /&gt;
   cd /scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&lt;br /&gt;
   mkdir &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;mySLURM_Job&amp;lt;/font&amp;gt; &amp;amp;&amp;amp; cd &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;mySLURM_Job&amp;lt;/font&amp;gt;&lt;br /&gt;
   iget &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font&amp;gt;/a.out &lt;br /&gt;
   iget &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font&amp;gt;/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;mydatafile&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. Prepare the SLURM job script. A typical SLURM script is similar to the following:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&lt;br /&gt;
   #!/bin/bash &lt;br /&gt;
   #SBATCH --partition production &lt;br /&gt;
   #SBATCH -J test &lt;br /&gt;
   #SBATCH --nodes 1 &lt;br /&gt;
   #SBATCH --ntasks 8 &lt;br /&gt;
   #SBATCH --mem 4000&lt;br /&gt;
   echo &amp;quot;Starting…&amp;quot; &lt;br /&gt;
&lt;br /&gt;
   cd $SLURM_SUBMIT_DIR&lt;br /&gt;
   mpirun -np 8 ./a.out ./&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;mydatafile&amp;lt;/font&amp;gt; &amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myoutputs&amp;lt;/font&amp;gt;&lt;br /&gt;
   echo &amp;quot;Done…&amp;quot;&lt;br /&gt;
&amp;lt;/font&amp;gt;&lt;br /&gt;
Your SLURM script may differ depending on your needs. See the section Submitting Jobs for reference.&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. Run the job &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&lt;br /&gt;
   sbatch ./&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;mySLURM_script&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
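After sbatch prints the job ID, progress can be followed with the standard SLURM client tools. A hedged sketch (the function names are illustrative; squeue and sacct exist only on the cluster, so the calls are guarded):

```shell
#!/bin/sh
# Show the status of your queued/running jobs and the accounting
# record of a finished job, using standard SLURM client tools.
show_job_status() {
    squeue -u "$USER"          # jobs still pending or running
}

show_job_history() {
    # $1 is the numeric job ID printed by sbatch
    sacct -j "$1" --format=JobID,JobName,State,Elapsed
}

# Run only on a machine where SLURM is installed:
if command -v squeue >/dev/null 2>&1; then
    show_job_status
fi
```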
&lt;br /&gt;
4. Once the job has finished, clean up &#039;&#039;&#039;SCRATCH&#039;&#039;&#039; and store the outputs in your user home directory or in &#039;&#039;&#039;SR1&#039;&#039;&#039;.&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If working with &#039;&#039;&#039;HOME&#039;&#039;&#039;:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&lt;br /&gt;
   mv ./myoutputs /global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font&amp;gt;/.&lt;br /&gt;
   cd ../&lt;br /&gt;
   rm -rf &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;mySLURM_Job&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If working with &#039;&#039;&#039;SR1&#039;&#039;&#039;:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&lt;br /&gt;
   iput ./myoutputs &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font&amp;gt;/. &lt;br /&gt;
   cd ../&lt;br /&gt;
   rm -rf &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;mySLURM_Job&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5. If output files are stored in &#039;&#039;&#039;SR1&#039;&#039;&#039;, tag them with metadata.&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&lt;br /&gt;
   imeta add -d myoutputs zvalue 15 meters&lt;br /&gt;
   imeta add -d myoutputs colorLabel RED&lt;br /&gt;
&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== iRODS ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;iRODS&#039;&#039;&#039; is the Integrated Rule-Oriented Data System, a&lt;br /&gt;
community-driven, open-source data grid software solution. &#039;&#039;&#039;iRODS&#039;&#039;&#039; is&lt;br /&gt;
designed to abstract data services from data storage hardware and&lt;br /&gt;
provide users with a hardware-agnostic way to manipulate data. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;iRODS&#039;&#039;&#039; is the primary tool used by CUNY HPCC users to&lt;br /&gt;
seamlessly access the 1 PB storage resource (referred to here as &#039;&#039;&#039;SR1&#039;&#039;&#039;)&lt;br /&gt;
from any of the HPCC&#039;s computational systems.&lt;br /&gt;
&lt;br /&gt;
Access to &#039;&#039;&#039;SR1&#039;&#039;&#039; is provided via so-called &#039;&#039;&#039;i-commands&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;iinit&lt;br /&gt;
ils&lt;br /&gt;
imv&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A comprehensive list of i-commands with detailed descriptions can be&lt;br /&gt;
found at the [https://wiki.irods.org/index.php/icommands iRODS wiki].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To obtain quick help on any of the commands while logged into&lt;br /&gt;
any of the HPCC&#039;s machines, type &amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&#039;&#039;&#039;i-command -h&#039;&#039;&#039;&amp;lt;/font&amp;gt;. For example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ils -h&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
  &lt;br /&gt;
The following are some of the most relevant &#039;&#039;&#039;i-commands&#039;&#039;&#039;:&lt;br /&gt;
  &lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;iinit&amp;lt;/font&amp;gt;&#039;&#039;&#039; -- Initialize session and store your password in a scrambled form for automatic use by other icommands.&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;iput&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Store a file&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;iget&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Get a file&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;imkdir&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Like mkdir, make an iRODS collection (similar to a directory or Windows folder)&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;ichmod&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Like chmod, allow (or later restrict) access to your data objects by other users.&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;icp&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Like cp or rcp, copy an iRODS data object&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;irm&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Like rm, remove an iRODS data object&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;ils&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Like ls, list iRODS data objects (files) and collections (directories)&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;ipwd&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Like pwd, print the iRODS current working directory&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;icd&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Like cd, change the iRODS current working directory&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;ichksum&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Checksum one or more data-object or collection from iRODS space.&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;imv&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Moves/renames an irods data-object or collection.&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;irmtrash&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Remove one or more data-objects or collections from the iRODS trash bin.&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;imeta&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Add, remove, list, or query user-defined Attribute-Value-Unit triplets metadata&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;iquest&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Query (pose a question to) the ICAT, via a SQL-like interface&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Before using any of the i-commands, users need to identify themselves to the iRODS server by running&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# iinit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and providing their HPCC password. &lt;br /&gt;
&lt;br /&gt;
A typical workflow involving files stored in SR1&lt;br /&gt;
includes storing/getting data to and from SR1, tagging data with &lt;br /&gt;
metadata, searching for data, and sharing (setting permissions). &lt;br /&gt;
&lt;br /&gt;
==== Storing data to SR1 ====&lt;br /&gt;
 &lt;br /&gt;
1. Create an &#039;&#039;&#039;iRODS&#039;&#039;&#039; directory (a &#039;collection&#039;):&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&lt;br /&gt;
   # imkdir &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;/font&amp;gt;&lt;br /&gt;
2. Store all files &amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&#039;&#039;&#039;myfile*&#039;&#039;&#039;&amp;lt;/font&amp;gt; into this directory (collection):&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&lt;br /&gt;
   # iput myfile* &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font&amp;gt;/.&lt;br /&gt;
&amp;lt;/font&amp;gt;&lt;br /&gt;
3. Verify that files are stored:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&lt;br /&gt;
   # ils&lt;br /&gt;
   /cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;:&lt;br /&gt;
   C- /cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font&amp;gt;&lt;br /&gt;
   # ils &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font&amp;gt;&lt;br /&gt;
   /cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font&amp;gt;:&lt;br /&gt;
      myfile1&lt;br /&gt;
      myfile2&lt;br /&gt;
      myfile3&lt;br /&gt;
&amp;lt;/font&amp;gt;&lt;br /&gt;
   &lt;br /&gt;
The symbol &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&#039;C-&#039;&amp;lt;/font&amp;gt;&#039;&#039;&#039; at the beginning of a line in the output of &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&#039;ils&#039;&amp;lt;/font&amp;gt;&#039;&#039;&#039; indicates that the listed item is a collection.&lt;br /&gt;
&lt;br /&gt;
4. By combining &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&#039;ils&#039;, &#039;imkdir&#039;, &#039;iput&#039;, &#039;icp&#039;, &#039;ipwd&#039;, &#039;imv&#039;&#039;&#039;&#039;&amp;lt;/font&amp;gt;, a user can create iRODS directories and store files in them, much as is normally done with the UNIX commands &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&#039;ls&#039;, &#039;mkdir&#039;, &#039;cp&#039;, &#039;pwd&#039;, &#039;mv&#039;&#039;&#039;&#039;&amp;lt;/font&amp;gt;.&lt;br /&gt;
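The parallel with the UNIX workflow can be sketched as a small helper that creates a collection and stores files into it. This is illustrative only (the `store_to_collection` name is hypothetical); the i-commands require an active iinit session, so the call is guarded:

```shell
#!/bin/sh
# Sketch: create a collection and store files in it with i-commands,
# mirroring the UNIX mkdir/cp/ls workflow described above.
store_to_collection() {
    coll=$1; shift           # target iRODS collection
    imkdir "$coll"           # like mkdir
    for f in "$@"; do
        iput "$f" "$coll/"   # like cp, one file at a time
    done
    ils "$coll"              # like ls, verify the upload
}

# Only meaningful on a machine with i-commands and an iinit session:
if command -v iput >/dev/null 2>&1; then
    store_to_collection myProject myfile1 myfile2 myfile3
fi
```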
&lt;br /&gt;
==== Getting data from SR1 ====&lt;br /&gt;
&lt;br /&gt;
1. To copy a file from SR1 to the current working directory, run&lt;br /&gt;
   # iget &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font&amp;gt;/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myfile1&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. Listing the current working directory should now reveal &#039;&#039;&#039;myfile1&#039;&#039;&#039;:&lt;br /&gt;
   # ls&lt;br /&gt;
   &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myfile1&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. Instead of individual files, a whole directory (with&lt;br /&gt;
sub-directories) can be copied with the &#039;&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;-r&amp;lt;/font&amp;gt;&#039;&#039;&#039;&#039; flag (which stands for&lt;br /&gt;
&#039;recursive&#039;):&lt;br /&gt;
   # iget -r &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
NOTE: wildcards are not supported, so the command below &amp;lt;u&amp;gt;will not work&amp;lt;/u&amp;gt;:&lt;br /&gt;
   # iget &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font&amp;gt;/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myfile&amp;lt;/font&amp;gt;*&lt;br /&gt;
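A common workaround for the missing wildcard support is to let the shell loop over explicit file names and fetch them one at a time. A sketch (the `get_files` helper is hypothetical; file names follow the example above):

```shell
#!/bin/sh
# Fetch several files from a collection one by one, since iget
# does not expand wildcards itself. The name list is explicit here;
# it could also be built from parsed `ils` output.
get_files() {
    coll=$1; shift
    for f in "$@"; do
        iget "$coll/$f"
    done
}

# Only meaningful where i-commands are available:
if command -v iget >/dev/null 2>&1; then
    get_files myProject myfile1 myfile2 myfile3
fi
```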
&lt;br /&gt;
=== Tagging data with metadata ===&lt;br /&gt;
   &lt;br /&gt;
iRODS provides users with an extremely powerful mechanism for managing&lt;br /&gt;
data with metadata. When working with large datasets, it is&lt;br /&gt;
easy to forget what is stored in a particular file.&lt;br /&gt;
Metadata tags help organize data in an easy and reliable&lt;br /&gt;
manner.&lt;br /&gt;
&lt;br /&gt;
Let&#039;s tag the files from the previous example with some metadata:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# imeta add -d myProject/myfile1 zvalue 15 meters&lt;br /&gt;
AVU added to 1 data-objects&lt;br /&gt;
# imeta add -d myProject/myfile1 colorLabel RED&lt;br /&gt;
AVU added to 1 data-objects&lt;br /&gt;
# imeta add -d myProject/myfile1 comment &amp;quot;This is file number 1&amp;quot;&lt;br /&gt;
AVU added to 1 data-objects&lt;br /&gt;
# imeta add -d myProject/myfile2 zvalue 10 meters&lt;br /&gt;
AVU added to 1 data-objects&lt;br /&gt;
# imeta add -d myProject/myfile2 colorLabel RED&lt;br /&gt;
AVU added to 1 data-objects&lt;br /&gt;
# imeta add -d myProject/myfile2 comment &amp;quot;This is file number 2&amp;quot;&lt;br /&gt;
AVU added to 1 data-objects&lt;br /&gt;
# imeta add -d myProject/myfile3 zvalue 15 meters&lt;br /&gt;
AVU added to 1 data-objects&lt;br /&gt;
# imeta add -d myProject/myfile3 colorLabel BLUE&lt;br /&gt;
AVU added to 1 data-objects&lt;br /&gt;
# imeta add -d myProject/myfile3 comment &amp;quot;This is file number 3&amp;quot;&lt;br /&gt;
AVU added to 1 data-objects&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Here we&#039;ve tagged myfile1 with 3 metadata labels:&lt;br /&gt;
&lt;br /&gt;
- zvalue 15 meters&lt;br /&gt;
&lt;br /&gt;
- colorLabel RED&lt;br /&gt;
&lt;br /&gt;
- comment &amp;quot;This is file number 1&amp;quot;&lt;br /&gt;
  &lt;br /&gt;
Similar tags were added to &#039;myfile2&#039; and &#039;myfile3&#039;.&lt;br /&gt;
&lt;br /&gt;
Metadata come in the form of AVUs -- Attribute|Value|Unit triplets. As seen in&lt;br /&gt;
the examples above, the Unit is optional. &lt;br /&gt;
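Tagging several data objects with the same AVU is easily scripted. A sketch assuming the myProject/myfileN objects from the example (the `tag_files` helper name is hypothetical):

```shell
#!/bin/sh
# Add the same Attribute/Value/Unit triplet to every listed data object.
tag_files() {
    attr=$1; val=$2; unit=$3; shift 3
    for obj in "$@"; do
        imeta add -d "$obj" "$attr" "$val" "$unit"
    done
}

# Only meaningful in an active iRODS session:
if command -v imeta >/dev/null 2>&1; then
    tag_files zvalue 15 meters myProject/myfile1 myProject/myfile3
fi
```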
&lt;br /&gt;
Let&#039;s list all metadata assigned to file &#039;myfile1&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# imeta ls -d myProject/myfile1&lt;br /&gt;
AVUs defined for dataObj myProject/myfile1:&lt;br /&gt;
attribute: zvalue&lt;br /&gt;
value: 15&lt;br /&gt;
units: meters&lt;br /&gt;
----&lt;br /&gt;
attribute: colorLabel&lt;br /&gt;
value: RED&lt;br /&gt;
units:&lt;br /&gt;
----&lt;br /&gt;
attribute: comment&lt;br /&gt;
value: This is file number 1&lt;br /&gt;
units:&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
To remove an AVU assigned to a file run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# imeta rm -d myProject/myfile1 zvalue 15 meters&lt;br /&gt;
# imeta ls -d myProject/myfile1&lt;br /&gt;
AVUs defined for dataObj myProject/myfile1:&lt;br /&gt;
attribute: colorLabel&lt;br /&gt;
value: RED&lt;br /&gt;
units:&lt;br /&gt;
----&lt;br /&gt;
attribute: comment&lt;br /&gt;
value: This is file number 1&lt;br /&gt;
units:&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
# imeta add -d myProject/myfile1 zvalue 15 meters&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Metadata may be assigned to directories as well:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# imeta add -C myProject simulationsPool 1&lt;br /&gt;
# imeta ls -C myProject&lt;br /&gt;
AVUs defined for collection myProject:&lt;br /&gt;
attribute: simulationsPool&lt;br /&gt;
value: 1&lt;br /&gt;
units:&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note the &#039;-C&#039; flag, which is used instead of &#039;-d&#039; when operating on collections rather than data objects.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Searching for data ===&lt;br /&gt;
&lt;br /&gt;
The power of metadata becomes obvious when data needs to be found in&lt;br /&gt;
large collections. Here is an illustration of how easily this task is&lt;br /&gt;
done with iRODS via imeta queries:&lt;br /&gt;
&lt;br /&gt;
 # imeta qu -d zvalue = 15&lt;br /&gt;
 collection: /cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;/myProject&lt;br /&gt;
 dataObj: myfile1&lt;br /&gt;
 ----&lt;br /&gt;
 collection: /cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;/myProject&lt;br /&gt;
 dataObj: myfile3&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We see both files that were tagged with the label &#039;zvalue 15 meters&#039;.&lt;br /&gt;
Here is a different query:&lt;br /&gt;
 &lt;br /&gt;
 # imeta qu -d colorLabel = RED&lt;br /&gt;
 collection: /cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;/myProject&lt;br /&gt;
 dataObj: myfile1&lt;br /&gt;
 ----&lt;br /&gt;
 collection: /cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;/myProject&lt;br /&gt;
 dataObj: myfile2&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another powerful mechanism for querying data is provided by &#039;iquest&#039;. &lt;br /&gt;
The following examples illustrate the capabilities of &#039;iquest&#039;:&lt;br /&gt;
 &lt;br /&gt;
 iquest &amp;quot;SELECT DATA_NAME, DATA_SIZE WHERE DATA_RESC_NAME like &#039;cuny%&#039;&amp;quot;&lt;br /&gt;
 iquest &amp;quot;For %-12.12s size is %s&amp;quot; &amp;quot;SELECT DATA_NAME, DATA_SIZE WHERE COLL_NAME = &#039;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&#039;&amp;quot;&lt;br /&gt;
 iquest &amp;quot;SELECT COLL_NAME WHERE COLL_NAME like &#039;/cunyZone/home/%&#039; AND USER_NAME like &#039;&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&#039;&amp;quot;&lt;br /&gt;
 iquest &amp;quot;User %-6.6s has %-5.5s access to file %s&amp;quot; &amp;quot;SELECT USER_NAME,  DATA_ACCESS_NAME, DATA_NAME WHERE COLL_NAME = &#039;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&#039;&amp;quot;&lt;br /&gt;
 iquest &amp;quot; %-5.5s access has been given to user %-6.6s for the file %s&amp;quot; &amp;quot;SELECT DATA_ACCESS_NAME, USER_NAME, DATA_NAME WHERE COLL_NAME = &#039;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&#039;&amp;quot;&lt;br /&gt;
 iquest no-distinct &amp;quot;select META_DATA_ATTR_NAME&amp;quot;&lt;br /&gt;
 iquest  &amp;quot;select COLL_NAME, DATA_NAME WHERE DATA_NAME like &#039;myfile%&#039;&amp;quot;&lt;br /&gt;
 iquest &amp;quot;User %-9.9s uses %14.14s bytes in %8.8s files in &#039;%s&#039;&amp;quot; &amp;quot;SELECT USER_NAME, sum(DATA_SIZE),count(DATA_NAME),RESC_NAME&amp;quot;&lt;br /&gt;
 iquest &amp;quot;select sum(DATA_SIZE) where COLL_NAME = &#039;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&#039;&amp;quot;&lt;br /&gt;
 iquest &amp;quot;select sum(DATA_SIZE) where COLL_NAME like &#039;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;%&#039;&amp;quot;&lt;br /&gt;
 iquest &amp;quot;select sum(DATA_SIZE), RESC_NAME where COLL_NAME like &#039;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;%&#039;&amp;quot;&lt;br /&gt;
 iquest &amp;quot;select order_desc(DATA_ID) where COLL_NAME like &#039;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;%&#039;&amp;quot;&lt;br /&gt;
 iquest &amp;quot;select count(DATA_ID) where COLL_NAME like &#039;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;%&#039;&amp;quot;&lt;br /&gt;
 iquest &amp;quot;select RESC_NAME where RESC_CLASS_NAME IN (&#039;bundle&#039;,&#039;archive&#039;)&amp;quot;&lt;br /&gt;
 iquest &amp;quot;select DATA_NAME,DATA_SIZE where DATA_SIZE BETWEEN &#039;100000&#039; &#039;100200&#039;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
=== Sharing data ===&lt;br /&gt;
&lt;br /&gt;
Access to data can be controlled via the &#039;ichmod&#039; command. Its&lt;br /&gt;
behavior is similar to the UNIX &#039;chmod&#039; command. For example, if there is a&lt;br /&gt;
need to give user &amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&#039;&#039;&#039;&amp;lt;userid1&amp;gt;&#039;&#039;&#039;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt; read access to file&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;myProject/myfile1&#039;&#039;&#039;&amp;lt;/font&amp;gt;, execute the following command:&lt;br /&gt;
   ichmod read &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid1&amp;gt;&amp;lt;/font&amp;gt; myProject/myfile1&lt;br /&gt;
&lt;br /&gt;
To see who has access to a file/directory use:&lt;br /&gt;
   # ils -A myProject/myfile1&lt;br /&gt;
   /cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;/myProject/myfile1&lt;br /&gt;
   ACL - &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid1&amp;gt;&amp;lt;/font&amp;gt;#cunyZone:read object   &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;#cunyZone:own&lt;br /&gt;
&lt;br /&gt;
In the above example, user &amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&#039;&#039;&#039;&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid1&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;&amp;lt;/font&amp;gt; has read access to the file and&lt;br /&gt;
user &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; is the owner of the file. &lt;br /&gt;
&lt;br /&gt;
Possible levels of access to a data object are null/read/write/own.&lt;br /&gt;
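To share a whole collection rather than a single file, ichmod also accepts a recursive '-r' flag. A hedged sketch (the `share_collection` helper name is hypothetical; run it only in an active iRODS session):

```shell
#!/bin/sh
# Grant a user read access to a collection and everything inside it,
# then verify the resulting access control list.
share_collection() {
    user=$1; coll=$2
    ichmod -r read "$user" "$coll"   # -r applies the change recursively
    ils -A "$coll"                   # show ACLs for verification
}

# Only meaningful where i-commands are available:
if command -v ichmod >/dev/null 2>&1; then
    share_collection userid1 myProject
fi
```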
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Backups==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Backups.&#039;&#039;&#039;	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u&amp;lt;/font&amp;gt;&#039;&#039;&#039; user directories and &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;SR1&amp;lt;/font&amp;gt;&#039;&#039;&#039; Project files are backed up automatically to a remote tape silo system over a fiber optic network.  Backups are performed daily. &lt;br /&gt;
&lt;br /&gt;
If a user deletes a file from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u&amp;lt;/font&amp;gt;&#039;&#039;&#039; or &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;SR1&amp;lt;/font&amp;gt;&#039;&#039;&#039;, it will remain on the tape silo system for 30 days, after which it will be deleted and cannot be recovered. If a user finds it necessary to recover a file within the 30-day window, the user must expeditiously submit a request to [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu].&lt;br /&gt;
&lt;br /&gt;
Less frequently accessed files are automatically transferred to the HPC Center robotic tape system, freeing up space in the disk storage pool and making it available for more actively used files. The selection criteria for the migration are age and size of a file. If a file is not accessed for 90 days, it may be moved to a tape in the tape library – in fact to two tapes, for backup. This is fully transparent to the user. When a file is needed, the system will copy the file back to the appropriate disk directory. No user action is required.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Data retention and account expiration policy==&lt;br /&gt;
&lt;br /&gt;
Project directories on SR1 are retained as long as the project is active.  The HPC Center will coordinate with the Principal Investigator of the project before deleting a project directory.  If the PI is no longer with CUNY, the HPC Center will coordinate with the PI’s departmental chair or Research Dean, whichever is appropriate.&lt;br /&gt;
&lt;br /&gt;
For user accounts, current user directories under /global/u are retained as long as the account is active.  If a user account is inactive for one year, the HPC Center will attempt to contact the user and request that the data be removed from the system.  If there is no response from the user within three months of the initial notice, or if the user cannot be reached, the user directory will be purged. &lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==DSMS Technical Summary==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!File Space&lt;br /&gt;
!Purpose&lt;br /&gt;
!Accessibility&lt;br /&gt;
!Quota&lt;br /&gt;
!Backups&lt;br /&gt;
!Purges&lt;br /&gt;
|-&lt;br /&gt;
|Scratch:&lt;br /&gt;
/scratch/&amp;lt;userid&amp;gt;&lt;br /&gt;
on PENZIAS, ANDY, SALK, BOB&lt;br /&gt;
|High Performance Parallel scratch filesystems. Work area for jobs, datasets, restart files, files to be pre-/post processed. Temporary space for data that will be removed within a short amount of time.&lt;br /&gt;
|Not globally accessible.&lt;br /&gt;
Separate /scratch/&amp;lt;userid&amp;gt; exists on each system. Visible on login and compute nodes of each system and on the data transfer nodes.&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|Files are automatically deleted when they are older than 2 weeks&lt;br /&gt;
OR&lt;br /&gt;
when the scratch filesystem reaches 70% utilization&lt;br /&gt;
|-&lt;br /&gt;
|Home:&lt;br /&gt;
/global/u/&amp;lt;userid&amp;gt;&lt;br /&gt;
|User home filespace. Essential data should be stored here, such as user&#039;s source code, documents, and data structures.&lt;br /&gt;
|Globally accessible on the login and on the data transfer nodes through native GPFS or NFS mounts&lt;br /&gt;
|Nominally 50 GB&lt;br /&gt;
|Yes, backed up nightly to tape. If the active copy is deleted, the most recent backup is stored for 30 days.&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Not purged&lt;br /&gt;
|-&lt;br /&gt;
|Project:&lt;br /&gt;
/SR1/&amp;lt;PID&amp;gt;&lt;br /&gt;
|Project space allocations&lt;br /&gt;
|Accessible on the login and on the data transfer nodes. Accessible outside CUNY HPC Center through iRODS.&lt;br /&gt;
|Allocated according to project needs&lt;br /&gt;
|Yes, backed up nightly to tape. If the active copy is deleted, the most recent backup is stored for 30 days and retrievable on request, but the iRODS metadata may be lost.&lt;br /&gt;
|}&lt;br /&gt;
•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;SR1&amp;lt;/font&amp;gt;&#039;&#039;&#039; is tuned for high bandwidth, redundancy, and resilience.  It is not optimal for handling large quantities of small files. If you need to archive more than a thousand files on &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;SR1&amp;lt;/font&amp;gt;&#039;&#039;&#039;, please create a single archive using &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;tar&amp;lt;/font&amp;gt;&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
•	A separate &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; exists on each system.  On PENZIAS, SALK, KARLE, and ANDY, this is a Lustre parallel file system; on HERBERT it is NFS. These &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch&amp;lt;/font&amp;gt;&#039;&#039;&#039; directories are visible only on the login and compute nodes of that system and on the data transfer nodes, and are not shared across HPC systems.&lt;br /&gt;
&lt;br /&gt;
•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; is a high-performance parallel scratch filesystem; temporary files (e.g., restart files) should be stored here.&lt;br /&gt;
&lt;br /&gt;
•	There are no quotas on &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;; however, any files older than 2 weeks are automatically deleted.  A cleanup script runs every two weeks or whenever /scratch disk space utilization exceeds 70%.  Dot-files are generally left intact by these cleanup jobs.&lt;br /&gt;
&lt;br /&gt;
•	/scratch space is shared by all users. If the scratch space is exhausted, jobs will not be able to run. Purge any files in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; that are no longer needed, even before the automatic deletion kicks in.&lt;br /&gt;
&lt;br /&gt;
•	Your &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; directory may be empty when you log in; you will need to copy any files required for submitting your jobs (submission scripts, data sets) from &amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&#039;&#039;&#039;/global/u&#039;&#039;&#039;&amp;lt;/font&amp;gt; or from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;SR1&amp;lt;/font&amp;gt;&#039;&#039;&#039;.  Once your jobs complete, copy any files you need to keep back to &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u&amp;lt;/font&amp;gt;&#039;&#039;&#039; or &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;SR1&amp;lt;/font&amp;gt;&#039;&#039;&#039; and remove all files from /scratch.&lt;br /&gt;
&lt;br /&gt;
•	Do not use &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/tmp&amp;lt;/font&amp;gt;&#039;&#039;&#039; for storing temporary files. The file system where &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/tmp&amp;lt;/font&amp;gt;&#039;&#039;&#039; resides is very small and slow, and files stored there are regularly deleted by automatic procedures.&lt;br /&gt;
&lt;br /&gt;
•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; is not backed up and there is no provision for retaining data stored in these directories.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Good data handling practices==&lt;br /&gt;
===DSMS, i.e., /global/u and SR1===&lt;br /&gt;
&lt;br /&gt;
•	The &#039;&#039;&#039;DSMS&#039;&#039;&#039; is not an archive for non-HPC users. It is an archive for users who are processing data at the HPC Center.  “Parking” files on the &#039;&#039;&#039;DSMS&#039;&#039;&#039; as a back-up to local data stores is prohibited.  &lt;br /&gt;
&lt;br /&gt;
•	Do not store more than 1,000 files in a single directory. Bundle collections of small files in an archive (for example, with tar). Note that for every file, a stub of about 4MB is kept on disk even if the rest of the file is migrated to tape, so even migrated files take up some disk space.  It also means that files smaller than the stub size are never migrated to tape.  Storing a large number of small files in a single directory also degrades file system performance. &lt;br /&gt;
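One way to follow this advice is to bundle a directory of small files into a single tar archive before moving it to the DSMS; the directory and file names below are hypothetical.&lt;br /&gt;

```shell
# Pack many small files into one archive so the DSMS stores a single
# large file instead of thousands of tiny ones (names are illustrative).
mkdir -p run42_outputs
touch run42_outputs/sample_001.dat run42_outputs/sample_002.dat
tar -cf run42_outputs.tar run42_outputs
tar -tf run42_outputs.tar        # list the archive contents to verify
```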
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===/scratch===&lt;br /&gt;
&lt;br /&gt;
•	Please regularly remove unwanted files and directories, and avoid keeping duplicate copies in multiple locations. File transfer among the HPC Center systems is very fast. It is forbidden to use &amp;quot;touch jobs&amp;quot; to prevent the cleaning policy from automatically deleting your files from the &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch&amp;lt;/font&amp;gt;&#039;&#039;&#039; directories. Use &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;tar -xmvf&amp;lt;/font&amp;gt;&#039;&#039;&#039;, not &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;tar -xvf&amp;lt;/font&amp;gt;&#039;&#039;&#039;, to unpack files.   &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;tar -xmvf&amp;lt;/font&amp;gt;&#039;&#039;&#039; updates the time stamp on the unpacked files, while &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;tar -xvf&amp;lt;/font&amp;gt;&#039;&#039;&#039; preserves the time stamp from the original file rather than the time when the archive was unpacked. Consequently, the automatic deletion mechanism may remove files unpacked by &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;tar -xvf&amp;lt;/font&amp;gt;&#039;&#039;&#039; even though they are only a few days old.&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Modules,_Managing_Your_CUNY_HPC_Center_Environment&amp;diff=152</id>
		<title>Modules, Managing Your CUNY HPC Center Environment</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Modules,_Managing_Your_CUNY_HPC_Center_Environment&amp;diff=152"/>
		<updated>2022-10-27T20:15:13Z</updated>

		<summary type="html">&lt;p&gt;James: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Modules, Managing Your CUNY HPC Center Environment =&lt;br /&gt;
Modules is a software package that provides for the fast and convenient management of the components of&lt;br /&gt;
a user&#039;s environment via &#039;&#039;&#039;modulefiles&#039;&#039;&#039;.  When executed by the module command, each modulefile fully &lt;br /&gt;
configures the environment for its associated application or application group.  The modules configuration&lt;br /&gt;
language allows for the management of application environment conflicts and dependencies as well.&lt;br /&gt;
The modules software allows users to load (and unload and reload) an application and/or system environment&lt;br /&gt;
that is specific to their needs and avoids the need to set and manage a large, one-size-fits-all, generic environment&lt;br /&gt;
for everyone at login.  &lt;br /&gt;
&lt;br /&gt;
Modules has been the default approach to managing the user applications environment on SALK, the CUNY HPC&lt;br /&gt;
Center Cray, since its installation in 2011.  By the end of 2012, all non-legacy and future systems at the CUNY HPC &lt;br /&gt;
Center will use modules to manage the user environment instead of generic environmental initialization files stored&lt;br /&gt;
in /etc/profile.d.  The only system that will need to transition from this older approach to the all-modules approach &lt;br /&gt;
will be ANDY.  All new systems, such as Penzias and the new SGI UV2, will come up as modules-based when they &lt;br /&gt;
are ready for production use.  The legacy system, BOB, which is currently used almost entirely for Gaussian jobs,&lt;br /&gt;
will NOT be reconfigured with the modules software.  Module version 3.2.6 is installed on SALK, and version 3.2.9&lt;br /&gt;
will be the default on the other HPC Center systems.&lt;br /&gt;
&lt;br /&gt;
Using the module package users can easily set a collection of environmental variables that are specific to their&lt;br /&gt;
compilation, parallel programming, and/or application requirements on the HPC Center&#039;s systems. The modules system&lt;br /&gt;
also makes it convenient to advance or regress compiler, parallel programming, or applications versions when defaults&lt;br /&gt;
are found to have bugs or performance issues.  Whatever the task, the modules package can adjust the environment&lt;br /&gt;
in an orderly way, altering or setting such environmental variables as PATH, MANPATH, LD_LIBRARY_PATH, etc.,&lt;br /&gt;
and providing some basic descriptive information about the application version being loaded and the purpose of the&lt;br /&gt;
module file through the module help facility.   &lt;br /&gt;
&lt;br /&gt;
In addition to the application-specific modulefiles, the module package provides a collection of&lt;br /&gt;
sub-commands given after the initial module command itself, as in &amp;quot;module list&amp;quot; for instance.  All these module&lt;br /&gt;
sub-commands are described in detail in the module man page (&amp;quot;man module&amp;quot;), but a list of some of the more important&lt;br /&gt;
and commonly used sub-commands is provided here:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Module sub-commands:&lt;br /&gt;
&lt;br /&gt;
list&lt;br /&gt;
load&lt;br /&gt;
unload&lt;br /&gt;
switch&lt;br /&gt;
avail&lt;br /&gt;
show&lt;br /&gt;
help&lt;br /&gt;
purge&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Modules, Learning by Example ==&lt;br /&gt;
The best way to explain how to use the modules package and its sub-commands is to consider some simple&lt;br /&gt;
examples of typical workflows that involve modules.  Here are two examples.  Again, for a more complete&lt;br /&gt;
description of the modules package please refer to &amp;quot;man module&amp;quot;.&lt;br /&gt;
=== Example 1,  Basic Non-Cray System ===&lt;br /&gt;
Consider the unmodified PATH variable right after login to one of the CUNY HPC Center systems.&lt;br /&gt;
Without any custom or local environmental path settings, it would look something like this with no&lt;br /&gt;
compiler, parallel programming model, or application-specific information in it:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
username@service0:~&amp;gt; echo $PATH | tr -s &#039;:&#039; &#039;\n&#039;&lt;br /&gt;
/home/username/bin&lt;br /&gt;
/usr/local/bin&lt;br /&gt;
/usr/bin&lt;br /&gt;
/bin&lt;br /&gt;
/usr/bin/X11&lt;br /&gt;
/usr/X11R6/bin&lt;br /&gt;
/usr/games&lt;br /&gt;
/opt/c3/bin&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that there appears to be no path to the application that we are interested in running, which in this&lt;br /&gt;
example is Wolfram&#039;s Mathematica.  Typing &amp;quot;which math&amp;quot; to find Mathematica (&amp;quot;math&amp;quot; is the command-line name for Mathematica)&lt;br /&gt;
at the terminal yields:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
username@service0:~&amp;gt;  which math&lt;br /&gt;
which: no math in (/home/username/bin:/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/X11R6/bin:/usr/games:/opt/c3/bin)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Mathematica executable &amp;quot;math&amp;quot; is not found in the default PATH variable defined by the system at login. Modules can be&lt;br /&gt;
used to remedy this problem by adding the required path.  To check which module files (if any) are already loaded into&lt;br /&gt;
our environment, we can type the &amp;quot;module list&amp;quot; sub-command at the terminal prompt:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
username@service0:~&amp;gt; module list&lt;br /&gt;
No Modulefiles Currently Loaded.&lt;br /&gt;
username@service0:~&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
No modules loaded.  So the module file for Mathematica has not been loaded and it is no surprise&lt;br /&gt;
that the Mathematica command-line &amp;quot;math&amp;quot; was not found.  The next question is whether the HPC Center&lt;br /&gt;
has installed Mathematica on this system and created a module file for it.  To find out, we use &lt;br /&gt;
the &amp;quot;module avail&amp;quot; sub-command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
username@service0:~&amp;gt; module avail&lt;br /&gt;
---------------------------- /share/apps/modules/default/modulefiles_UserApplications --------------------------------------&lt;br /&gt;
&lt;br /&gt;
adf/2012.01(default)         cesm/1.0.3                   hoomd/0.9.2(default)         ncar/5.2.0_NCL(default)      pgi/12.3(default)&lt;br /&gt;
auto3dem/4.02(default)       cesm/1.0.4(default)          intel/12.1.3.293(default)    nwchem/6.1.1(default)        phoenics/2009(default)&lt;br /&gt;
autodock/4.2.3(default)      cuda/5.0(default)            ls-dyna/6.0.0(default)       octopus/4.0.0(default)       r/2.14.1(default)&lt;br /&gt;
beagle/0.2(default)          gromacs/4.5.5_32bit          mathematica/8.0.4(default)   openmpi/1.5.5_intel(default) wrf/3.4.0(default)&lt;br /&gt;
best/2.2L(default)           gromacs/4.5.5_64bit(default) matlab/R2012a(default)       openmpi/1.5.5_pgi&lt;br /&gt;
&lt;br /&gt;
--------------------------------- /share/apps/modules/default/modulefiles_System -------------------------------------------&lt;br /&gt;
&lt;br /&gt;
module-info   modules       version/3.2.9&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The listing shows all available module files on this system, both those that are user-application related&lt;br /&gt;
and those that are more system related.  As shown in the output, these two types of module files are &lt;br /&gt;
stored in different directories. Looking through the application list, there is a module for Mathematica&lt;br /&gt;
version 8.0.4, which also happens to be the default.  On this system, the modules package has only&lt;br /&gt;
just been installed, and therefore only one version of each application has been adapted to the module&lt;br /&gt;
system and that version is the default.&lt;br /&gt;
&lt;br /&gt;
The module file that is responsible for setting up the correct environment needed to run Mathematica can &lt;br /&gt;
now be loaded:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load mathematica&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Because there is only one version and it is the default, there is no need to include the version-specific&lt;br /&gt;
extension to load it.   To explicitly load version 8.0.4 (or any other specific and non-default version)&lt;br /&gt;
one would use:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load mathematica/8.0.4&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After loading, the environmental PATH variable includes the path to Mathematica:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
username@service0:~&amp;gt; echo $PATH | tr -s &#039;:&#039; &#039;\n&#039;&lt;br /&gt;
/home/username/bin&lt;br /&gt;
/usr/local/bin&lt;br /&gt;
/usr/bin&lt;br /&gt;
/bin&lt;br /&gt;
/usr/bin/X11&lt;br /&gt;
/usr/X11R6/bin&lt;br /&gt;
/usr/games&lt;br /&gt;
/opt/c3/bin&lt;br /&gt;
/share/apps/mathematica/8.0.4/Executables&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This can be verified by rerunning the &amp;quot;which math&amp;quot; command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
username@service0:~&amp;gt; which math&lt;br /&gt;
/share/apps/mathematica/8.0.4/Executables/math&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once the head or login node environment variables are properly set, one can create a SLURM script&lt;br /&gt;
to run a Mathematica job on a compute node and ensure that the head or login node environment&lt;br /&gt;
just set is passed on to the compute nodes by using the &amp;quot;#SBATCH --export=ALL&amp;quot; option inside your SLURM script:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name mmat8_serial1&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --mem=1920&lt;br /&gt;
#SBATCH --export=ALL&lt;br /&gt;
&lt;br /&gt;
# Find out name of master execution host (compute node)&lt;br /&gt;
echo -n &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; SLURM Master compute node is: &amp;quot;&lt;br /&gt;
hostname&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to the working directory in SLURM&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
# Just point to the serial executable to run&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; Begin Mathematica Serial Run ...&amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
math -run &amp;lt;test_run.nb &amp;gt; output&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; End   Mathematica Serial Run ...&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Since the PATH variable in the login environment now includes the location of the Mathematica &lt;br /&gt;
executable, and the &amp;quot;#SBATCH --export=ALL&amp;quot; option ensures that this environment is passed to the compute node that the&lt;br /&gt;
job is run on, the last line of the SLURM script will be executed without environment-related problems.&lt;br /&gt;
&lt;br /&gt;
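Assuming the script above is saved under a hypothetical name such as mmat8_serial1.job, it could be submitted and monitored roughly as follows; the sketch is guarded so it is a no-op on systems without SLURM.&lt;br /&gt;

```shell
# Hypothetical submission of the Mathematica script; the job file name is
# an assumption, and the commands only run where SLURM is installed.
if command -v sbatch >/dev/null; then
  sbatch mmat8_serial1.job    # submit the batch script to SLURM
  squeue -u "$USER"           # check the status of your queued jobs
fi
SUBMIT_SKETCH_OK=yes
```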
=== Example 2,  Less Basic From SALK (Cray System) ===&lt;br /&gt;
Like all of the systems at the CUNY HPC Center, the Cray SALK has multiple compilers, parallel programming&lt;br /&gt;
models, libraries, and applications.  In addition, SALK uses a custom high-performance interconnect with its&lt;br /&gt;
own libraries, has its own compiler suite and compiling system, and many other custom libraries.  Setting up&lt;br /&gt;
and/or tearing down a given environment that makes all this work correctly is more complicated than it is on &lt;br /&gt;
the other systems at the HPC Center.  Modules simplifies this process tremendously for the user.&lt;br /&gt;
&lt;br /&gt;
Here is an example of how to swap out the default Cray compiler environment on SALK and swap in the &lt;br /&gt;
compiler suite from the Portland Group including all the right MPI libraries from Cray.  In this case, we swap in&lt;br /&gt;
a new release of the Portland Group compilers, not the current default on the Cray, and the version of the &lt;br /&gt;
NETCDF library that has been compiled with the Portland Group compilers.&lt;br /&gt;
&lt;br /&gt;
Having logged into SALK, we determine which modules have been loaded by default with &amp;quot;module list&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
user@salk:~&amp;gt; module list&lt;br /&gt;
Currently Loaded Modulefiles:&lt;br /&gt;
  1) modules/3.2.6.6&lt;br /&gt;
  2) nodestat/2.2-1.0400.31264.2.5.gem&lt;br /&gt;
  3) sdb/1.0-1.0400.32124.7.19.gem&lt;br /&gt;
  4) MySQL/5.0.64-1.0000.5053.22.1&lt;br /&gt;
  5) lustre-cray_gem_s/1.8.6_2.6.32.45_0.3.2_1.0400.6453.5.1-1.0400.32127.1.90&lt;br /&gt;
  6) udreg/2.3.1-1.0400.4264.3.1.gem&lt;br /&gt;
  7) ugni/2.3-1.0400.4374.4.88.gem&lt;br /&gt;
  8) gni-headers/2.1-1.0400.4351.3.1.gem&lt;br /&gt;
  9) dmapp/3.2.1-1.0400.4255.2.159.gem&lt;br /&gt;
 10) xpmem/0.1-2.0400.31280.3.1.gem&lt;br /&gt;
 11) hss-llm/6.0.0&lt;br /&gt;
 12) Base-opts/1.0.2-1.0400.31284.2.2.gem&lt;br /&gt;
 13) xtpe-network-gemini&lt;br /&gt;
 14) cce/8.0.7&lt;br /&gt;
 15) acml/5.1.0&lt;br /&gt;
 16) xt-libsci/11.1.00&lt;br /&gt;
 17) pmi/3.0.0-1.0000.8661.28.2807.gem&lt;br /&gt;
 18) rca/1.0.0-2.0400.31553.3.58.gem&lt;br /&gt;
 19) xt-asyncpe/5.13&lt;br /&gt;
 20) atp/1.5.1&lt;br /&gt;
 21) PrgEnv-cray/4.0.46&lt;br /&gt;
 22) xtpe-mc8&lt;br /&gt;
 23) cray-mpich2/5.5.3&lt;br /&gt;
 24) SLURM/11.3.0.121723&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From the list, we see that the Cray Programming Environment (&amp;quot;PrgEnv-cray/4.0.46&amp;quot;) and the Cray Compiler&lt;br /&gt;
environment are loaded (&amp;quot;cce/8.0.7&amp;quot;) by default among other things (SLURM, MPICH, etc.).  To unload these&lt;br /&gt;
Cray modules and load in the Portland Group (PGI) equivalents we need to know the names of the PGI &lt;br /&gt;
modules.   The &amp;quot;module avail&amp;quot; command will tell us this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
user@salk:~&amp;gt; module avail&lt;br /&gt;
.&lt;br /&gt;
.&lt;br /&gt;
(several sections of output removed)&lt;br /&gt;
.&lt;br /&gt;
.&lt;br /&gt;
------------------------------------------------ /opt/modulefiles -----------------------------------------------------&lt;br /&gt;
Base-opts/1.0.2-1.0400.31284.2.2.gem(default)     gcc/4.1.2                                         SLURM/11.2.0.113417&lt;br /&gt;
PrgEnv-cray/3.1.61                                gcc/4.2.4                                         SLURM/11.3.0.121723(default)&lt;br /&gt;
PrgEnv-cray/4.0.46(default)                       gcc/4.4.2                                         petsc/3.1.08&lt;br /&gt;
PrgEnv-gnu/3.1.61                                 gcc/4.4.4                                         petsc/3.1.09&lt;br /&gt;
PrgEnv-gnu/4.0.46(default)                        gcc/4.5.1                                         petsc-complex/3.1.08&lt;br /&gt;
PrgEnv-intel/3.1.61                               gcc/4.5.2                                         petsc-complex/3.1.09&lt;br /&gt;
PrgEnv-intel/4.0.46(default)                      gcc/4.5.3                                         pgi/12.10&lt;br /&gt;
PrgEnv-pathscale/3.1.61                           gcc/4.6.1                                         pgi/12.3&lt;br /&gt;
PrgEnv-pathscale/4.0.46(default)                  gcc/4.7.1(default)                                pgi/12.6(default)&lt;br /&gt;
PrgEnv-pgi/3.1.61                                 hss-llm/6.0.0(default)                            pgi/12.8&lt;br /&gt;
PrgEnv-pgi/4.0.46(default)                        intel/12.1.1.256                                  wrf/3.3.0&lt;br /&gt;
acml/4.4.0                                        intel/12.1.4.319(default)                         wrf/3.4.0(default)&lt;br /&gt;
acml/5.1.0(default)                               intel/12.1.5.339                                  xt-asyncpe/5.01&lt;br /&gt;
admin-modules/1.0.2-1.0400.31284.2.2.gem(default) java/jdk1.6.0_24                                  xt-asyncpe/5.05&lt;br /&gt;
amber/12(default)                                 java/jdk1.7.0_03(default)                         xt-asyncpe/5.13(default)&lt;br /&gt;
cce/8.0.7(default)                                mazama/6.0.0(default)                             xt-libsci/11.0.00&lt;br /&gt;
chapel/1.4.0                                      modules/3.2.6.6(default)                          xt-libsci/11.0.04&lt;br /&gt;
chapel/1.5.0(default)                             mrnet/3.0.0(default)                              xt-libsci/11.1.00(default)&lt;br /&gt;
fftw/2.1.5.3                                      pathscale/4.0.12.1(default)                       xt-papi/4.2.0&lt;br /&gt;
fftw/3.2.2.1(default)                             pathscale/4.0.9                                   xt-papi/4.3.0(default)&lt;br /&gt;
fftw/3.3.0.1                                      SLURM/11.1.0.111761&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There are several versions of the PGI compilers and two versions of the PGI Programming Environment for&lt;br /&gt;
the Cray (SALK).   We are interested in loading PGI&#039;s 12.10 release (not the default which is &amp;quot;pgi/12.6&amp;quot;) and the&lt;br /&gt;
most current release of the PGI programming environment (&amp;quot;PrgEnv-pgi/4.0.46&amp;quot;), which is the default.  The &lt;br /&gt;
PGI programming environment for the Cray provides all the environmental settings required to use the &lt;br /&gt;
PGI compilers on the Cray which includes a number of custom libraries.  &lt;br /&gt;
&lt;br /&gt;
Here is a series of module commands to unload the Cray defaults, load the PGI modules mentioned,&lt;br /&gt;
and load version 4.2.0 of NETCDF compiled with the PGI compilers.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
user@salk:~&amp;gt; module unload PrgEnv-cray&lt;br /&gt;
user@salk:~&amp;gt; module load PrgEnv-pgi&lt;br /&gt;
user@salk:~&amp;gt; module unload pgi&lt;br /&gt;
user@salk:~&amp;gt; module load pgi/12.10&lt;br /&gt;
user@salk:~&amp;gt; &lt;br /&gt;
user@salk:~&amp;gt; module load netcdf/4.2.0&lt;br /&gt;
user@salk:~&amp;gt;&lt;br /&gt;
user@salk;~&amp;gt; cc -V&lt;br /&gt;
/opt/cray/xt-asyncpe/5.13/bin/cc: INFO: Compiling with CRAYPE_COMPILE_TARGET=native.&lt;br /&gt;
&lt;br /&gt;
pgcc 12.10-0 64-bit target on x86-64 Linux &lt;br /&gt;
Copyright 1989-2000, The Portland Group, Inc.  All Rights Reserved.&lt;br /&gt;
Copyright 2000-2012, STMicroelectronics, Inc.  All Rights Reserved.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Several comments about this series of commands are perhaps useful.  First, the first three commands&lt;br /&gt;
do not include version numbers and will therefore load or unload the current default versions.  In &lt;br /&gt;
the third line, we unload the default version of the PGI compiler (version 12.6) which is loaded with&lt;br /&gt;
the rest of the PGI Programming Environment in the second line.  We then load the non-default&lt;br /&gt;
and more recent release from PGI, version 12.10 in the fourth line.   Later, we load NETCDF version&lt;br /&gt;
4.2.0 which, because we have already loaded the PGI Programming Environment, will load the version&lt;br /&gt;
of NETCDF 4.2.0 compiled with the PGI compilers.  Finally, we check to see which compiler the Cray &amp;quot;cc&amp;quot;&lt;br /&gt;
compiler wrapper actually invokes after this sequence of module commands.  We see that indeed &amp;quot;pgcc&amp;quot;&lt;br /&gt;
version 12.10 is being used.&lt;br /&gt;
&lt;br /&gt;
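As an aside, such unload/load pairs can usually be collapsed with the &amp;quot;module swap&amp;quot; sub-command, which replaces one loaded module with another in a single step. A sketch, assuming the same module names as above, guarded so it is a no-op on machines without the modules package:&lt;br /&gt;

```shell
# Shortened version of the sequence above using "module swap"; only
# runs where the modules package is available.
if command -v module >/dev/null; then
  module swap PrgEnv-cray PrgEnv-pgi   # Cray env out, PGI env in
  module swap pgi pgi/12.10            # default 12.6 out, 12.10 in
  module load netcdf/4.2.0             # PGI-built NETCDF, as before
fi
SWAP_SKETCH_OK=yes
```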
We can confirm all this by again entering &amp;quot;module list&amp;quot;.   Notice that the Cray-related compiler modules&lt;br /&gt;
have been replaced by those from PGI and that NETCDF version 4.2.0 has been loaded.  We are now ready&lt;br /&gt;
to use the new PGI compiler environment.  It is left as an exercise to the reader to figure out&lt;br /&gt;
how the series of commands listed above could have been shortened by using the &amp;quot;module swap&amp;quot; sub-&lt;br /&gt;
command.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
user@salk:~&amp;gt; module list&lt;br /&gt;
Currently Loaded Modulefiles:&lt;br /&gt;
  1) modules/3.2.6.6&lt;br /&gt;
  2) nodestat/2.2-1.0400.31264.2.5.gem&lt;br /&gt;
  3) sdb/1.0-1.0400.32124.7.19.gem&lt;br /&gt;
  4) MySQL/5.0.64-1.0000.5053.22.1&lt;br /&gt;
  5) lustre-cray_gem_s/1.8.6_2.6.32.45_0.3.2_1.0400.6453.5.1-1.0400.32127.1.90&lt;br /&gt;
  6) udreg/2.3.1-1.0400.4264.3.1.gem&lt;br /&gt;
  7) ugni/2.3-1.0400.4374.4.88.gem&lt;br /&gt;
  8) gni-headers/2.1-1.0400.4351.3.1.gem&lt;br /&gt;
  9) dmapp/3.2.1-1.0400.4255.2.159.gem&lt;br /&gt;
 10) xpmem/0.1-2.0400.31280.3.1.gem&lt;br /&gt;
 11) hss-llm/6.0.0&lt;br /&gt;
 12) Base-opts/1.0.2-1.0400.31284.2.2.gem&lt;br /&gt;
 13) xtpe-network-gemini&lt;br /&gt;
 14) xtpe-mc8&lt;br /&gt;
 15) cray-mpich2/5.5.3&lt;br /&gt;
 16) SLURM/11.3.0.121723&lt;br /&gt;
 17) xt-libsci/11.1.00&lt;br /&gt;
 18) pmi/3.0.0-1.0000.8661.28.2807.gem&lt;br /&gt;
 19) xt-asyncpe/5.13&lt;br /&gt;
 20) atp/1.5.1&lt;br /&gt;
 21) PrgEnv-pgi/4.0.46&lt;br /&gt;
 22) pgi/12.10&lt;br /&gt;
 23) hdf5/1.8.8&lt;br /&gt;
 24) netcdf/4.2.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=BROWNIE&amp;diff=151</id>
		<title>BROWNIE</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=BROWNIE&amp;diff=151"/>
		<updated>2022-10-27T20:14:17Z</updated>

		<summary type="html">&lt;p&gt;James: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;BROWNIE is installed on the Andy cluster under the directory &amp;quot;/share/apps/brownie/default/bin/&amp;quot;. The directory &amp;quot;/share/apps/brownie/default/examples/&amp;quot;&lt;br /&gt;
contains two example files. &lt;br /&gt;
&lt;br /&gt;
In order to run one of these examples on Andy follow the steps:&lt;br /&gt;
&lt;br /&gt;
1) create a directory and &amp;quot;cd&amp;quot; there:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mkdir ./brownie_test  &amp;amp;&amp;amp; cd ./brownie_test&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2) Copy the example input deck to the current directory:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cp /share/apps/brownie/default/examples/ratetest_example.nex ./&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3) Create a SLURM submit script. Use your favorite text editor to put the&lt;br /&gt;
following lines into file &amp;quot;brownie_serial.job&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name BROWNIE_Serial&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --mem=2880&lt;br /&gt;
&lt;br /&gt;
# Find out name of master execution host (compute node)&lt;br /&gt;
echo -n &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; SLURM Master compute node is: &amp;quot;&lt;br /&gt;
hostname&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to the working directory in SLURM&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
# Point to the execution directory to run&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; Begin BROWNIE Serial Run ...&amp;quot;&lt;br /&gt;
brownie ./ratetest_example.nex &amp;gt; brownie_ser.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; End   BROWNIE Serial Run ...&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4) Load the BROWNIE module to include all required environmental&lt;br /&gt;
variables and the path to the BROWNIE executable (the modules&lt;br /&gt;
utility is discussed in detail above).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load brownie&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
5) Submit the job to the SLURM queue using:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sbatch brownie_serial.job&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Running the rate test case should take less than 15 minutes and will produce SLURM output and error files beginning&lt;br /&gt;
with the job name &#039;BROWNIE_Serial&#039;. The primary BROWNIE application results will be written into the user-specified&lt;br /&gt;
file at the end of the BROWNIE command line after the greater-than sign. Here it is named &#039;brownie_ser.out&#039;.  The&lt;br /&gt;
expression &#039;2&amp;gt;&amp;amp;1&#039; combines Unix standard output from the program with Unix standard error.  Users should always&lt;br /&gt;
explicitly specify the name of the application&#039;s output file in this way to ensure that it is written directly into the user&#039;s&lt;br /&gt;
working directory which has much more disk space than the SLURM spool directory on /var.&lt;br /&gt;
&lt;br /&gt;
Details on the meaning of the SLURM script are covered below in the SLURM section. The most important lines are the &#039;#SBATCH --nodes=1&#039;, &#039;#SBATCH --ntasks=1&#039;, and &#039;#SBATCH --mem=2880&#039; directives.  These instruct SLURM to allocate one node with 1 processor (core) and 2,880 MBs&lt;br /&gt;
of memory for the job; SLURM is then free to place the job wherever the least-used resources are found.&lt;br /&gt;
The master compute node that SLURM finally selects to run your job will be printed in the SLURM output file by the &#039;hostname&#039;&lt;br /&gt;
command.&lt;br /&gt;
&lt;br /&gt;
One can check the status of the job using the &amp;quot;squeue&amp;quot; command. Upon successful completion&lt;br /&gt;
the following files will be generated:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
BrownieBatch.nex&lt;br /&gt;
brownie_test.eXXXX   --- std error from SLURM&lt;br /&gt;
BrownieLog.txt&lt;br /&gt;
brownie_test.oXXXX  --- std output from SLURM&lt;br /&gt;
RatetestOutput.txt --- result returned by Brownie&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=BOWTIE2&amp;diff=150</id>
		<title>BOWTIE2</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=BOWTIE2&amp;diff=150"/>
		<updated>2022-10-27T20:13:32Z</updated>

		<summary type="html">&lt;p&gt;James: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;At the CUNY HPC Center BOWTIE2 is installed on ANDY and PENZIAS. BOWTIE2 is a parallel threaded code (pthreads)&lt;br /&gt;
that takes its input from a simple text file provided on the command line. Below is an example SLURM script that will run&lt;br /&gt;
the lambda virus test case provided with the BOWTIE2 distribution which can be copied from the local installation directory&lt;br /&gt;
to your current location as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cp /share/apps/bowtie2/default/examples/reference/lambda_virus.fa .&lt;br /&gt;
cp /share/apps/bowtie2/default/examples/reads/reads_1.fq . &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To include all required environmental variables and the path to the BOWTIE2 executable, run the module load command (the&lt;br /&gt;
modules utility is discussed in detail above).  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load bowtie2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Running &#039;bowtie2&#039; from the interactive prompt without any options will print a brief description of the &lt;br /&gt;
command-line arguments and options. Here is a SLURM batch script that builds the lambda virus index and aligns the sequences in serial mode:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name BOWTIE2_Serial&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --mem=2880&lt;br /&gt;
&lt;br /&gt;
# Find out name of master execution host (compute node)&lt;br /&gt;
echo -n &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; SLURM Master compute node is: &amp;quot;&lt;br /&gt;
hostname&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to the working directory in SLURM&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
# Point to the execution directory to run&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; Begin BOWTIE2 Serial Run ...&amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; Build Index ...&amp;quot;&lt;br /&gt;
bowtie2-build lambda_virus.fa lambda_virus &amp;gt; lambda_virus_index.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; Align Sequence ...&amp;quot;&lt;br /&gt;
bowtie2 -x lambda_virus -U reads_1.fq -S eg1.sam &amp;gt; lambda_virus_align.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; End   BOWTIE2 Serial Run ...&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This script can be dropped into a file (say bowtie2.job) and submitted with the command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sbatch bowtie2.job&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Running the lambda virus test case should take less than 2 minutes and will produce SLURM output and error files beginning with&lt;br /&gt;
the job name &#039;BOWTIE2_Serial&#039;. The primary BOWTIE2 application results will be written into the user-specified files at the end&lt;br /&gt;
of each BOWTIE2 command line after the greater-than sign. Here they are named &#039;lambda_virus_index.out&#039; and &#039;lambda_virus_align.out&#039;.&lt;br /&gt;
The expression &#039;2&amp;gt;&amp;amp;1&#039; at the end of the command line combines the program&#039;s Unix standard output with its Unix standard error.&lt;br /&gt;
Users should always explicitly specify the name of the application&#039;s output file in this way to ensure that it is written directly into&lt;br /&gt;
the user&#039;s working directory, which has much more disk space than the SLURM spool directory on /var.&lt;br /&gt;
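The effect of this redirection can be tried in plain Bash outside any batch job; a minimal sketch (the file name is illustrative):

```shell
# Combine stdout and stderr into one file, as the BOWTIE2 script lines do.
echo "a normal message" > combined.out 2>&1   # stdout goes to the file
ls /no/such/path >> combined.out 2>&1         # the error message is captured too
wc -l < combined.out                          # both lines ended up in the one file
rm combined.out
```

Without the 2 redirection, the ls error message would bypass the file and land in the SLURM error file instead.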
&lt;br /&gt;
Details on the meaning of the SLURM script are covered below in the SLURM section. The most important lines are &#039;#SBATCH --nodes=1&#039;, &#039;#SBATCH --ntasks=1&#039;, and &#039;#SBATCH --mem=2880&#039;.  These instruct SLURM to allocate 1 compute node with 1 processor (core) and 2,880 MBs&lt;br /&gt;
of memory for the job, placing it wherever the requested resources are free.&lt;br /&gt;
The master compute node that SLURM finally selects to run your job will be printed in the SLURM output file by the &#039;hostname&#039;&lt;br /&gt;
command.&lt;br /&gt;
&lt;br /&gt;
To run BOWTIE2 in parallel-threads mode several changes to the script are required.  Here is a modified script&lt;br /&gt;
that shows how to run BOWTIE2 using two threads.  ANDY has as many as 8 physical compute cores per compute&lt;br /&gt;
node and therefore as many as 8 threads might be chosen, but the larger the number of cores-threads requested&lt;br /&gt;
the longer the job may wait to start as SLURM looks for a compute node with the free resources requested.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name BOWTIE2_threads&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=2&lt;br /&gt;
#SBATCH --mem=5760&lt;br /&gt;
&lt;br /&gt;
# Find out name of master execution host (compute node)&lt;br /&gt;
echo -n &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; SLURM Master compute node is: &amp;quot;&lt;br /&gt;
hostname&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to the working directory in SLURM&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
# Point to the execution directory to run&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; Begin BOWTIE2 Threaded Run ...&amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; Build Index ...&amp;quot;&lt;br /&gt;
bowtie2-build lambda_virus.fa lambda_virus &amp;gt; lambda_virus_index.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; Align Sequence ...&amp;quot;&lt;br /&gt;
bowtie2 -p 2 -x lambda_virus -U reads_1.fq -S eg1.sam &amp;gt; lambda_virus_align2.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; End   BOWTIE2 Threaded Run ...&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Notice the difference in the resource request lines, where the job now asks for 2 cores (--ntasks=2) and&lt;br /&gt;
twice as much memory as before.  Also, notice that the BOWTIE2 command line now includes the &#039;-p 2&#039; option to&lt;br /&gt;
run the code with 2 threads working in parallel.   Perfectly or &#039;embarrassingly&#039; parallel workloads can run close to&lt;br /&gt;
2, 4, or more times as fast as the same workload in serial mode, depending on the number of threads requested, but&lt;br /&gt;
workloads cannot be counted on to be perfectly parallel. &lt;br /&gt;
&lt;br /&gt;
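The shape of these diminishing returns follows Amdahl's law: if a fraction p of the work parallelizes, t threads give a speed-up of 1 / ((1 - p) + p / t). A quick illustration (p = 0.9 is an assumed value for the sketch, not a BOWTIE2 measurement):

```shell
# Amdahl's law: speedup = 1 / ((1 - p) + p / t) for parallel fraction p, t threads.
p=0.9   # assumed parallel fraction, for illustration only
for t in 1 2 4 8; do
  awk -v p="$p" -v t="$t" \
    'BEGIN { printf "%d threads -> %.2fx speedup\n", t, 1 / ((1 - p) + p / t) }'
done
# prints 1.00x, 1.82x, 3.08x, 4.71x: each doubling of threads buys less
```

Even with 90 percent of the work parallel, 8 threads fall well short of an 8x speed-up, which is why requesting the maximum core count is often not worth the longer queue wait.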
The speed-ups that you observe will typically be less than perfect and diminish as you ask for more cores-threads.&lt;br /&gt;
Larger jobs will typically scale more efficiently as you add cores-threads, but users should take note of the performance&lt;br /&gt;
gains that they see as cores-threads are added and select a core-thread count that provides efficient scaling and avoids&lt;br /&gt;
diminishing returns.&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=QIIME&amp;diff=149</id>
		<title>QIIME</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=QIIME&amp;diff=149"/>
		<updated>2022-10-27T20:13:08Z</updated>

		<summary type="html">&lt;p&gt;James: Text replacement - &amp;quot;[pP][bB][sS]&amp;quot; to &amp;quot;SLURM&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;QIIME is installed in a Python/Anaconda environment. There are 2 builds of Python/Anaconda - one with the python interpreter 2.7.13 and one with the python interpreter 3.6.0. The text below refers to python 2.7.13.  In order to use QIIME, users must load the qiime module and activate the environment.  The following 2 lines will do the job: &lt;br /&gt;
 &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load qiime/1.9.1_FULL_P2.7&lt;br /&gt;
source activate qiime1&lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The P2.7 above indicates that the python interpreter used is python 2.7.X. Because QIIME is a &#039;&#039;&#039;pipeline&#039;&#039;&#039; that relies heavily upon external applications, it has a variety of ways in which it &amp;quot;parallelizes&amp;quot; tasks for multiprocessor computation.  These include:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
1. Multi-threading on a single node &lt;br /&gt;
2. Auto-generating serial jobs to the cluster, each working on a separate subtask (&amp;quot;Workers&amp;quot;)&lt;br /&gt;
3. Actual MPI&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Depending upon which subset of QIIME&#039;s functionality is needed, the user may use one or more of the above forms of parallelization.  During initial start-up QIIME concentrates on &amp;quot;denoising&amp;quot;, which can take significant resources. Past this phase, QIIME auto-generates the user-specified number of threads (single-node use) or worker jobs.  QIIME subdivides the denoising work into tasks that are handled at various stages by individual threads or worker jobs (depending upon setup). The documentation on QIIME parallelization is not very explanatory, but it is clear that the user can utilize EITHER auto-job generation OR node threading but NEVER BOTH at the same time.  QIIME supplies a few scripts for determining how subtasks are to be processed - &#039;&#039;worker jobs (cluster) or threads (single node)&#039;&#039;.  The relevant scripts are as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
start_parallel_jobs.py        - used for threading on a single node&lt;br /&gt;
start_parallel_jobs_torque.py - used for &#039;&#039;&#039;auto-generating&#039;&#039;&#039; single-node, single-core cluster worker jobs. &lt;br /&gt;
                                Please note that each worker job uses 1 node by 1 core. &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When invoking QIIME for the first time it is a good idea to check that all modules are properly connected. Thus on a first run users should run the command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
print_qiime_config.py -t&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The above command will print QIIME&#039;s initial set-up and run its internal tests, reporting pass/fail:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(qiime1) [user_id@penzias ~]$ print_qiime_config.py -t&lt;br /&gt;
&lt;br /&gt;
System information&lt;br /&gt;
==================&lt;br /&gt;
         Platform:	linux2&lt;br /&gt;
   Python version:	2.7.13 |Continuum Analytics, Inc.| (default, Dec 20 2016, 23:09:15)  [GCC 4.4.7 20120313 (Red Hat 4.4.7-1)]&lt;br /&gt;
Python executable:	/share/usr/compilers/python/miniconda2/envs/qiime1/bin/python&lt;br /&gt;
&lt;br /&gt;
QIIME default reference information&lt;br /&gt;
===================================&lt;br /&gt;
For details on what files are used as QIIME&#039;s default references, see here:&lt;br /&gt;
 https://github.com/biocore/qiime-default-reference/releases/tag/0.1.3&lt;br /&gt;
&lt;br /&gt;
Dependency versions&lt;br /&gt;
===================&lt;br /&gt;
          QIIME library version:	1.9.1&lt;br /&gt;
           QIIME script version:	1.9.1&lt;br /&gt;
qiime-default-reference version:	0.1.3&lt;br /&gt;
                  NumPy version:	1.10.4&lt;br /&gt;
                  SciPy version:	0.17.1&lt;br /&gt;
                 pandas version:	0.18.1&lt;br /&gt;
             matplotlib version:	1.4.3&lt;br /&gt;
            biom-format version:	2.1.5&lt;br /&gt;
                   h5py version:	2.6.0 (HDF5 version: 1.8.16)&lt;br /&gt;
                   qcli version:	0.1.1&lt;br /&gt;
                   pyqi version:	0.3.2&lt;br /&gt;
             scikit-bio version:	0.2.3&lt;br /&gt;
                 PyNAST version:	1.2.2&lt;br /&gt;
                Emperor version:	0.9.51&lt;br /&gt;
                burrito version:	0.9.1&lt;br /&gt;
       burrito-fillings version:	0.1.1&lt;br /&gt;
              sortmerna version:	SortMeRNA version 2.0, 29/11/2014&lt;br /&gt;
              sumaclust version:	SUMACLUST Version 1.0.00&lt;br /&gt;
                  swarm version:	Swarm 1.2.19 [Mar  5 2016 16:56:02]&lt;br /&gt;
                          gdata:	Installed.&lt;br /&gt;
&lt;br /&gt;
QIIME config values&lt;br /&gt;
===================&lt;br /&gt;
For definitions of these settings and to learn how to configure QIIME, see here:&lt;br /&gt;
 http://qiime.org/install/qiime_config.html&lt;br /&gt;
 http://qiime.org/tutorials/parallel_qiime.html&lt;br /&gt;
&lt;br /&gt;
                     blastmat_dir:	None&lt;br /&gt;
      pick_otus_reference_seqs_fp:	/share/usr/compilers/python/miniconda2/envs/qiime1/lib/python2.7/site-packages/qiime_default_reference/gg_13_8_otus/rep_set/97_otus.fasta&lt;br /&gt;
                         sc_queue:	all.q&lt;br /&gt;
      topiaryexplorer_project_dir:	None&lt;br /&gt;
     pynast_template_alignment_fp:	/share/usr/compilers/python/miniconda2/envs/qiime1/lib/python2.7/site-packages/qiime_default_reference/gg_13_8_otus/rep_set_aligned/85_otus.pynast.fasta&lt;br /&gt;
                  cluster_jobs_fp:	start_parallel_jobs.py&lt;br /&gt;
pynast_template_alignment_blastdb:	None&lt;br /&gt;
assign_taxonomy_reference_seqs_fp:	/share/usr/compilers/python/miniconda2/envs/qiime1/lib/python2.7/site-packages/qiime_default_reference/gg_13_8_otus/rep_set/97_otus.fasta&lt;br /&gt;
                     torque_queue:	production&lt;br /&gt;
                    jobs_to_start:	4&lt;br /&gt;
                       slurm_time:	None&lt;br /&gt;
            denoiser_min_per_core:	50&lt;br /&gt;
assign_taxonomy_id_to_taxonomy_fp:	/share/usr/compilers/python/miniconda2/envs/qiime1/lib/python2.7/site-packages/qiime_default_reference/gg_13_8_otus/taxonomy/97_otu_taxonomy.txt&lt;br /&gt;
                         temp_dir:	/tmp/&lt;br /&gt;
                     slurm_memory:	None&lt;br /&gt;
                      slurm_queue:	None&lt;br /&gt;
                      blastall_fp:	blastall&lt;br /&gt;
                 seconds_to_sleep:	1&lt;br /&gt;
&lt;br /&gt;
QIIME base install test results&lt;br /&gt;
===============================&lt;br /&gt;
.........&lt;br /&gt;
----------------------------------------------------------------------&lt;br /&gt;
Ran 9 tests in 0.021s&lt;br /&gt;
&lt;br /&gt;
OK&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each user must create the following &#039;&#039;&#039;qiime.config&#039;&#039;&#039; in his/her own home directory. The file (for a cluster environment) is provided below; users can simply copy and paste it. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cluster_jobs_fp /share/usr/compilers/python/miniconda2/envs/qiime1/bin/start_parallel_jobs_torque.py&lt;br /&gt;
python_exe_fp /share/usr/compilers/python/miniconda2/envs/qiime1/bin/python&lt;br /&gt;
working_dir  $HOME&lt;br /&gt;
blastmat_dir $HOME&lt;br /&gt;
blastall_fp blastall&lt;br /&gt;
pynast_template_alignment_fp&lt;br /&gt;
pynast_template_alignment_blastdb&lt;br /&gt;
template_alignment_lanemask_fp&lt;br /&gt;
jobs_to_start 4&lt;br /&gt;
seconds_to_sleep 60&lt;br /&gt;
qiime_scripts_dir /share/usr/compilers/python/miniconda2/envs/qiime1/bin&lt;br /&gt;
temp_dir /tmp/&lt;br /&gt;
denoiser_min_per_core 50&lt;br /&gt;
cloud_environment False&lt;br /&gt;
topiaryexplorer_project_dir&lt;br /&gt;
torque_queue main&lt;br /&gt;
assign_taxonomy_reference_seqs_fp&lt;br /&gt;
assign_taxonomy_id_to_taxonomy_fp&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Lines which may be altered by the user are:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cluster_jobs_fp&lt;br /&gt;
   - Set to &amp;quot;start_parallel_jobs_torque.py&amp;quot; for cluster jobs&lt;br /&gt;
   - Set to &amp;quot;start_parallel_jobs.py&amp;quot; for single node threading&lt;br /&gt;
working_dir   &lt;br /&gt;
blastmat_dir  (blast matrices location)&lt;br /&gt;
jobs_to_start  (the MAXIMUM number of jobs to auto-spawn)&lt;br /&gt;
seconds_to_sleep (change is NOT recommended)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Lines which the user must check carefully are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
qiime_scripts_dir - must always point to /share/usr/compilers/python/miniconda2/envs/qiime1/bin &lt;br /&gt;
torque_queue      - must be set to &amp;quot;production&amp;quot;, which is the main queue on PENZIAS.  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The meaning of each of the lines of qiime.config is:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cluster_jobs_fp : path to your cluster jobs file.  &lt;br /&gt;
&lt;br /&gt;
python_exe_fp : path to python executable. Just use python. &lt;br /&gt;
working_dir : a directory where work should be performed when running in parallel. USUALLY $HOME&lt;br /&gt;
&lt;br /&gt;
blastmat_dir : directory where BLAST substitution matrices are stored&lt;br /&gt;
&lt;br /&gt;
blastall_fp : path to blastall executable&lt;br /&gt;
&lt;br /&gt;
pynast_template_alignment_fp : default template alignment to use with PyNAST as a fasta file&lt;br /&gt;
&lt;br /&gt;
pynast_template_alignment_blastdb : default template alignment to use with PyNAST as a pre-formatted BLAST database&lt;br /&gt;
&lt;br /&gt;
template_alignment_lanemask_fp : default alignment lanemask to use with filter_alignment.py&lt;br /&gt;
&lt;br /&gt;
jobs_to_start : default number of jobs to start when running QIIME in parallel. &lt;br /&gt;
                      don’t make this more than the available cores/processors on your system&lt;br /&gt;
&lt;br /&gt;
seconds_to_sleep : number of seconds to wait when checking whether parallel jobs have completed&lt;br /&gt;
&lt;br /&gt;
qiime_scripts_dir : directory where QIIME scripts can be found&lt;br /&gt;
&lt;br /&gt;
temp_dir : directory for storing temporary files created by QIIME scripts. when a script completes successfully, any &lt;br /&gt;
                 temporary files that it created are cleaned up.&lt;br /&gt;
&lt;br /&gt;
denoiser_min_per_core : minimum number of flowgrams to denoise per core in parallel denoiser runs&lt;br /&gt;
&lt;br /&gt;
cloud_environment : used only by the n3phele system. you should not need to modify this value&lt;br /&gt;
&lt;br /&gt;
topiaryexplorer_project_dir : directory where TopiaryExplorer is installed&lt;br /&gt;
&lt;br /&gt;
torque_queue : default queue to submit jobs to when using parallel QIIME with torque or SLURM. DEFAULT is production&lt;br /&gt;
&lt;br /&gt;
assign_taxonomy_reference_seqs_fp : default reference database to use with assign_taxonomy.py (and parallel versions)&lt;br /&gt;
&lt;br /&gt;
assign_taxonomy_id_to_taxonomy_fp : default id-to-taxonomy map to use with assign_taxonomy.py (and parallel versions)&lt;br /&gt;
&lt;br /&gt;
sc_queue : default queue to submit jobs to when running parallel QIIME on StarCluster&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the above file the jobs_to_start value must match the number of cores requested in the SLURM script. For example, &#039;#SBATCH --nodes=1 --ntasks=4&#039; will require jobs_to_start to also have the value 4.  By default, QIIME is configured to run parallel jobs on systems without a queueing system (e.g., your laptop, desktop, or single AWS instance), where the user tells a parallel &lt;br /&gt;
QIIME script how many jobs should be submitted.  On HPC systems, however, QIIME should be submitted via a SLURM batch script. In this case there are 2 scenarios:&lt;br /&gt;
&lt;br /&gt;
      1. Jobs submitted for multithread on a single node&lt;br /&gt;
      2. Jobs submitted as single thread on single core across several nodes.&lt;br /&gt;
 &lt;br /&gt;
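Since jobs_to_start must track the cores SLURM actually granted, the two can be kept in sync from inside the batch script. A minimal sketch (the stand-in config file below is illustrative, not the real qiime.config; SLURM_NTASKS is set by SLURM inside a job):

```shell
# Rewrite jobs_to_start to match the core count SLURM granted this job.
cfg=qiime.config.example                       # stand-in for the real qiime.config
printf 'jobs_to_start 4\nseconds_to_sleep 60\n' > "$cfg"
cores="${SLURM_NTASKS:-1}"                     # set by SLURM inside a job; 1 otherwise
sed -i "s/^jobs_to_start .*/jobs_to_start ${cores}/" "$cfg"
grep '^jobs_to_start' "$cfg"
rm "$cfg"
```

Run inside a SLURM allocation this picks up the real task count; outside one it falls back to 1.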
To make QIIME&#039;s parallelization scheme useful, the user may have to create a separate file which holds the list of &amp;quot;tasks&amp;quot; to be started on the cluster. &lt;br /&gt;
This is an ordinary text file, so it must be created with a text editor such as vi in the Linux environment. Please do not use a word processor to create this file.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
pick_otus.py -i inseqs_file1.fasta&lt;br /&gt;
pick_otus.py -i inseqs_file2.fasta&lt;br /&gt;
pick_otus.py -i inseqs_file3.fasta&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The file is named &#039;&#039;&#039;test_jobs.txt&#039;&#039;&#039;. In order to be used it has to be passed to one of the cluster job scripts described above.  If passed to a cluster jobs script, the above 3 lines &lt;br /&gt;
will start three separate jobs, one for each of the commands. The name of the cluster job script is defined in the &#039;&#039;&#039;cluster_jobs_fp&#039;&#039;&#039; variable in the &#039;&#039;&#039;qiime.config&#039;&#039;&#039; file. &lt;br /&gt;
Remember that every user must have a properly edited qiime.config file in his/her home directory.  The general syntax of passing test_jobs.txt to the &lt;br /&gt;
cluster job script is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
CLUSTER_JOBS_FP -ms job_list.txt JOB_ID&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here CLUSTER_JOBS_FP is the PATH to the cluster job file (start_parallel_jobs.py OR start_parallel_jobs_torque.py). A SLURM script to start a multithreaded job on a node on PENZIAS &lt;br /&gt;
then looks like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name qime_test_job&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --mem=2880&lt;br /&gt;
&lt;br /&gt;
/share/usr/compilers/python/miniconda2/envs/qiime1/bin/start_parallel_jobs.py -ms test_jobs.txt  JOB_ID  -q production&lt;br /&gt;
echo &amp;quot;QIIME  job is done.&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please note that the maximum number of threads is auto-generated but limited from above by the value in qiime.config. In the above script JOB_ID is a prefix which will be &lt;br /&gt;
added to each of the jobs. -ms are options for start_parallel_jobs.py: the first means &amp;quot;make&amp;quot;, the second &amp;quot;submit&amp;quot;.  The same JOB_ID is also used by the QIIME &lt;br /&gt;
parallel scripts when creating names for temporary files and directories, but the user script does not necessarily need to do anything with this information. The parallel &lt;br /&gt;
variants of the scripts use the same parameters as the serial versions of the scripts, with some additional options in the parallel scripts.&lt;br /&gt;
&lt;br /&gt;
The next option is 1-core-per-node parallelization. To do so the other script, start_parallel_jobs_torque.py, must be used. This file takes the following &lt;br /&gt;
parameters/options:   &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Syntax: start_parallel_jobs_torque.py [options]&lt;br /&gt;
&lt;br /&gt;
Input Arguments: [OPTIONAL]&lt;br /&gt;
&lt;br /&gt;
-m, --make_jobs      Make the job files [default: None]&lt;br /&gt;
-s, --submit_jobs    Submit the job files [default: None]&lt;br /&gt;
-q, --queue          Name of queue to submit to [default: friendlyq]&lt;br /&gt;
-j, --job_dir        Directory to store the jobs [default: jobs/]&lt;br /&gt;
-w, --max_walltime   Maximum time in hours the job will run for [default: 72]&lt;br /&gt;
-c, --cpus           Number of CPUs to use [default: 1]&lt;br /&gt;
-n, --nodes          Number of nodes to use [default: 1]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Let&#039;s say the user wants to submit a parallel job on PENZIAS. First the user must create a cluster job file similar to test_jobs.txt which holds the programs to &lt;br /&gt;
be executed - one per line, just as shown above. Then the following script can be used to run on 4 nodes:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name qime_test_job&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --mem=2880&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
start_parallel_jobs_torque.py -ms test_jobs.txt -c 1 -n 4 -q production&lt;br /&gt;
echo &amp;quot;QIIME  job is done.&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Finally, below is an example of how to run the QIIME script align_seqs.py in SERIAL on PENZIAS:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name qime_test_job&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --mem=2880&lt;br /&gt;
&lt;br /&gt;
align_seqs.py -i input.fasta -m muscle&lt;br /&gt;
echo &amp;quot;QIIME  job is done.&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And in PARALLEL:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name qime_test_job&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --mem=2880&lt;br /&gt;
&lt;br /&gt;
start_parallel_jobs_torque.py -ms test_jobs.txt -c 4 -n 4 -q production&lt;br /&gt;
echo &amp;quot;QIIME  job is done.&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
The users are strongly encouraged to familiarize themselves with QIIME tutorials available at :  http://qiime.org/tutorials/index.html&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Modules,_Managing_Your_CUNY_HPC_Center_Environment&amp;diff=148</id>
		<title>Modules, Managing Your CUNY HPC Center Environment</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Modules,_Managing_Your_CUNY_HPC_Center_Environment&amp;diff=148"/>
		<updated>2022-10-27T20:13:05Z</updated>

		<summary type="html">&lt;p&gt;James: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Modules, Managing Your CUNY HPC Center Environment =&lt;br /&gt;
Modules is a software package that provides for the fast and convenient management of the components of&lt;br /&gt;
a user&#039;s environment via &#039;&#039;&#039;modulefiles&#039;&#039;&#039;.  When executed by the module command each module file fully &lt;br /&gt;
configures the environment for its associated application or application group.  The modules configuration&lt;br /&gt;
language allows for the management of applications environment conflicts and dependencies as well.&lt;br /&gt;
The modules software allows users to load (and unload and reload) an application and/or system environment&lt;br /&gt;
that is specific to their needs and avoids the need to set and manage a large, one-size-fits-all, generic environment&lt;br /&gt;
for everyone at login.  &lt;br /&gt;
&lt;br /&gt;
Modules has been the default approach to managing the user applications environment on SALK, the CUNY HPC&lt;br /&gt;
Center Cray, since its installation in 2011.  By the end of 2012, all non-legacy and future systems at the CUNY HPC &lt;br /&gt;
Center will use modules to manage the user environment instead of generic environmental initialization files stored&lt;br /&gt;
in /etc/profile.d.  The only system that will need to transition from this older approach to the all-modules approach &lt;br /&gt;
will be ANDY.  All new systems, such as Penzias and the new SGI UV2, will come up as modules-based when they &lt;br /&gt;
are ready for production use.  The legacy system, BOB, which is currently used almost entirely for Gaussian jobs,&lt;br /&gt;
will NOT be reconfigured with the modules software.  Module version 3.2.6 is installed on SALK, and version 3.2.9&lt;br /&gt;
will be the default on the other HPC Center systems.&lt;br /&gt;
&lt;br /&gt;
Using the module package users can easily set a collection of environmental variables that are specific to their&lt;br /&gt;
compilation, parallel programming, and/or application requirements on the HPC Center&#039;s systems. The modules system&lt;br /&gt;
also makes it convenient to advance or regress compiler, parallel programming, or applications versions when defaults&lt;br /&gt;
are found to have bugs or performance issues.  Whatever the task, the modules package can adjust the environment&lt;br /&gt;
in an orderly way altering or setting of such environmental variables as PATH, MANPATH, LD_LIBRARY_PATH, etc.&lt;br /&gt;
and providing some basic descriptive information about the application version being loaded and purpose of the&lt;br /&gt;
modules file through the module help facility.   &lt;br /&gt;
&lt;br /&gt;
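At bottom, what a modulefile does to a variable like PATH is an ordinary prepend; a plain-shell sketch of the idea (the directory name is illustrative, not a real installation or modulefile):

```shell
# What "module load foo" does to PATH, reduced to its essence: prepend the bin dir.
app_bin=/share/apps/example/1.0/bin   # illustrative path, not a real installation
PATH="$app_bin:$PATH"
echo "$PATH" | tr -s ':' '\n' | head -1   # first entry is now /share/apps/example/1.0/bin
```

The real module command does this reversibly (so module unload can undo it) and for several variables at once, which is what makes it preferable to hand-editing shell start-up files.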
In addition to each application-specific modulefile, the module package functions through the use of a collection of&lt;br /&gt;
sub-commands given after the initial module command itself, as in &amp;quot;module list&amp;quot; for instance.  All these module sub-&lt;br /&gt;
commands are described in detail in the module man page (&amp;quot;man module&amp;quot;), but a list of some of the more important&lt;br /&gt;
and commonly used sub-commands is provided here:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Module sub-commands:&lt;br /&gt;
&lt;br /&gt;
list&lt;br /&gt;
load&lt;br /&gt;
unload&lt;br /&gt;
switch&lt;br /&gt;
avail&lt;br /&gt;
show&lt;br /&gt;
help&lt;br /&gt;
purge&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Modules, Learning by Example ==&lt;br /&gt;
The best way to explain how to use the modules package and its sub-command is to consider some simple&lt;br /&gt;
examples of typical workflows that involve modules.  Here are two examples.  Again, for a more complete&lt;br /&gt;
description of the modules package please refer to &amp;quot;man module&amp;quot;.&lt;br /&gt;
=== Example 1,  Basic Non-Cray System ===&lt;br /&gt;
Consider the unmodified PATH variable right after login to one of the CUNY HPC Center systems.&lt;br /&gt;
Without any custom or local environmental path settings, it would look something like this with no&lt;br /&gt;
compiler, parallel programming model, or application-specific information in it:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
username@service0:~&amp;gt; echo $PATH | tr -s &#039;:&#039; &#039;\n&#039;&lt;br /&gt;
/home/username/bin&lt;br /&gt;
/usr/local/bin&lt;br /&gt;
/usr/bin&lt;br /&gt;
/bin&lt;br /&gt;
/usr/bin/X11&lt;br /&gt;
/usr/X11R6/bin&lt;br /&gt;
/usr/games&lt;br /&gt;
/opt/c3/bin&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We take note that there appears to be no path to the application that we are interested in running, which in&lt;br /&gt;
this example is Wolfram&#039;s Mathematica.  Typing &amp;quot;which math&amp;quot; (&amp;quot;math&amp;quot; is the command-line name for Mathematica)&lt;br /&gt;
at the terminal yields:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
username@service0:~&amp;gt;  which math&lt;br /&gt;
which: no math in (/home/username/bin:/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/X11R6/bin:/usr/games:/opt/c3/bin)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Mathematica executable &amp;quot;math&amp;quot; is not found in the default PATH variable defined by the system at login. Modules can be&lt;br /&gt;
used to remedy this problem by adding the required path.  To check which module files (if any) are already loaded into&lt;br /&gt;
our environment, we can type the &amp;quot;module list&amp;quot; sub-command at the terminal prompt:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
username@service0:~&amp;gt; module list&lt;br /&gt;
No Modulefiles Currently Loaded.&lt;br /&gt;
username@service0:~&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
No modules loaded.  So the module file for Mathematica has not been loaded, and it is no surprise&lt;br /&gt;
that the Mathematica command &amp;quot;math&amp;quot; was not found.  The next question is whether the HPC Center has&lt;br /&gt;
installed Mathematica on this system and created a module file for it.  To find this out we use &lt;br /&gt;
the &amp;quot;module avail&amp;quot; sub-command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
username@service0:~&amp;gt; module avail&lt;br /&gt;
---------------------------- /share/apps/modules/default/modulefiles_UserApplications --------------------------------------&lt;br /&gt;
&lt;br /&gt;
adf/2012.01(default)         cesm/1.0.3                   hoomd/0.9.2(default)         ncar/5.2.0_NCL(default)      pgi/12.3(default)&lt;br /&gt;
auto3dem/4.02(default)       cesm/1.0.4(default)          intel/12.1.3.293(default)    nwchem/6.1.1(default)        phoenics/2009(default)&lt;br /&gt;
autodock/4.2.3(default)      cuda/5.0(default)            ls-dyna/6.0.0(default)       octopus/4.0.0(default)       r/2.14.1(default)&lt;br /&gt;
beagle/0.2(default)          gromacs/4.5.5_32bit          mathematica/8.0.4(default)   openmpi/1.5.5_intel(default) wrf/3.4.0(default)&lt;br /&gt;
best/2.2L(default)           gromacs/4.5.5_64bit(default) matlab/R2012a(default)       openmpi/1.5.5_pgi&lt;br /&gt;
&lt;br /&gt;
--------------------------------- /share/apps/modules/default/modulefiles_System -------------------------------------------&lt;br /&gt;
&lt;br /&gt;
module-info   modules       version/3.2.9&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The listing shows all available module files on this system, both those that are user-application related&lt;br /&gt;
and those that are more system related.  As shown in the output, these two types of module files are&lt;br /&gt;
stored in different directories.  Looking through the application list, there is a module for Mathematica&lt;br /&gt;
version 8.0.4, which also happens to be the default.  On this system, the modules package has only&lt;br /&gt;
just been installed, so only one version of each application has been adapted to the module&lt;br /&gt;
system, and that version is the default.&lt;br /&gt;
&lt;br /&gt;
The module file responsible for setting up the correct environment needed to run Mathematica can&lt;br /&gt;
now be loaded:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load mathematica&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Because there is only one version and it is the default, there is no need to include the version-specific&lt;br /&gt;
extension to load it.   To explicitly load version 8.0.4 (or any other specific and non-default version)&lt;br /&gt;
one would use:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load mathematica/8.0.4&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After loading, the PATH environment variable includes the path to Mathematica:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
username@service0:~&amp;gt; echo $PATH | tr -s &#039;:&#039; &#039;\n&#039;&lt;br /&gt;
/home/username/bin&lt;br /&gt;
/usr/local/bin&lt;br /&gt;
/usr/bin&lt;br /&gt;
/bin&lt;br /&gt;
/usr/bin/X11&lt;br /&gt;
/usr/X11R6/bin&lt;br /&gt;
/usr/games&lt;br /&gt;
/opt/c3/bin&lt;br /&gt;
/share/apps/mathematica/8.0.4/Executables&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This can be verified by rerunning the &amp;quot;which math&amp;quot; command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
username@service0:~&amp;gt; which math&lt;br /&gt;
/share/apps/mathematica/8.0.4/Executables/math&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
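Under the hood, the main effect of the mathematica modulefile is simply to prepend the Executables&lt;br /&gt;
directory shown above to PATH (a modulefile may also manage other variables such as MANPATH or&lt;br /&gt;
LD_LIBRARY_PATH).  The idea can be sketched in plain shell; the temporary directory and mock&lt;br /&gt;
&amp;quot;math&amp;quot; command below are stand-ins so the sketch is self-contained:&lt;br /&gt;
&lt;br /&gt;
```shell
# Sketch: emulate the core effect of "module load mathematica/8.0.4",
# which is to prepend the application's bin directory to PATH.
# A temporary directory with a mock "math" stands in for the real
# /share/apps/mathematica/8.0.4/Executables install path.
bindir=$(mktemp -d)
printf '#!/bin/sh\necho mock-math\n' > "$bindir/math"
chmod +x "$bindir/math"
export PATH="$bindir:$PATH"
command -v math    # now resolves to $bindir/math
```
&lt;br /&gt;
A real modulefile does the same thing declaratively (for example with a prepend-path directive)&lt;br /&gt;
and records the change so that &amp;quot;module unload&amp;quot; can reverse it.&lt;br /&gt;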
&lt;br /&gt;
Once the head or login node environment variables are properly set, one can create a SLURM script&lt;br /&gt;
to run a Mathematica job on a compute node.  The &amp;quot;#SBATCH --export=ALL&amp;quot; option (the SLURM default)&lt;br /&gt;
ensures that the environment just set is passed on to the compute nodes:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=mmat8_serial1&lt;br /&gt;
#SBATCH --partition=production&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --mem=1920mb&lt;br /&gt;
#SBATCH --export=ALL&lt;br /&gt;
&lt;br /&gt;
# Find out the name of the master execution host (compute node)&lt;br /&gt;
echo -n &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; SLURM master compute node is: &amp;quot;&lt;br /&gt;
hostname&lt;br /&gt;
&lt;br /&gt;
# SLURM starts the job in the submission directory by default;&lt;br /&gt;
# change to it explicitly in case that default has been altered&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
# Just point to the serial executable to run&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; Begin Mathematica Serial Run ...&amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
math -run &amp;lt;test_run.nb &amp;gt; output&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; End   Mathematica Serial Run ...&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Since the PATH variable in the login environment now includes the location of the Mathematica&lt;br /&gt;
executable, and the &amp;quot;#SBATCH --export=ALL&amp;quot; option ensures that this environment is passed to the&lt;br /&gt;
compute node that the job runs on, the math command in the SLURM script will execute without&lt;br /&gt;
environment-related problems.&lt;br /&gt;
&lt;br /&gt;
=== Example 2,  Less Basic From SALK (Cray System) ===&lt;br /&gt;
Like all of the systems at the CUNY HPC Center, the Cray SALK has multiple compilers, parallel programming&lt;br /&gt;
models, libraries, and applications.  In addition, SALK uses a custom high-performance interconnect with its&lt;br /&gt;
own libraries, has its own compiler suite and compiling system, and many other custom libraries.  Setting up&lt;br /&gt;
and/or tearing down a given environment that makes all this work correctly is more complicated than it is on&lt;br /&gt;
the other systems at the HPC Center.  Modules simplifies this process tremendously for the user.&lt;br /&gt;
&lt;br /&gt;
Here is an example of how to swap out the default Cray compiler environment on SALK and swap in the&lt;br /&gt;
compiler suite from the Portland Group, including all the right MPI libraries from Cray.  In this case, we swap in&lt;br /&gt;
a newer release of the Portland Group compilers, not the current default on the Cray, together with the version of the&lt;br /&gt;
NETCDF library that has been compiled with the Portland Group compilers.&lt;br /&gt;
&lt;br /&gt;
Having logged into SALK, we determine what modules have been loaded by default with &amp;quot;module list&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
user@salk:~&amp;gt; module list&lt;br /&gt;
Currently Loaded Modulefiles:&lt;br /&gt;
  1) modules/3.2.6.6&lt;br /&gt;
  2) nodestat/2.2-1.0400.31264.2.5.gem&lt;br /&gt;
  3) sdb/1.0-1.0400.32124.7.19.gem&lt;br /&gt;
  4) MySQL/5.0.64-1.0000.5053.22.1&lt;br /&gt;
  5) lustre-cray_gem_s/1.8.6_2.6.32.45_0.3.2_1.0400.6453.5.1-1.0400.32127.1.90&lt;br /&gt;
  6) udreg/2.3.1-1.0400.4264.3.1.gem&lt;br /&gt;
  7) ugni/2.3-1.0400.4374.4.88.gem&lt;br /&gt;
  8) gni-headers/2.1-1.0400.4351.3.1.gem&lt;br /&gt;
  9) dmapp/3.2.1-1.0400.4255.2.159.gem&lt;br /&gt;
 10) xpmem/0.1-2.0400.31280.3.1.gem&lt;br /&gt;
 11) hss-llm/6.0.0&lt;br /&gt;
 12) Base-opts/1.0.2-1.0400.31284.2.2.gem&lt;br /&gt;
 13) xtpe-network-gemini&lt;br /&gt;
 14) cce/8.0.7&lt;br /&gt;
 15) acml/5.1.0&lt;br /&gt;
 16) xt-libsci/11.1.00&lt;br /&gt;
 17) pmi/3.0.0-1.0000.8661.28.2807.gem&lt;br /&gt;
 18) rca/1.0.0-2.0400.31553.3.58.gem&lt;br /&gt;
 19) xt-asyncpe/5.13&lt;br /&gt;
 20) atp/1.5.1&lt;br /&gt;
 21) PrgEnv-cray/4.0.46&lt;br /&gt;
 22) xtpe-mc8&lt;br /&gt;
 23) cray-mpich2/5.5.3&lt;br /&gt;
 24) pbs/11.3.0.121723&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From the list, we see that the Cray Programming Environment (&amp;quot;PrgEnv-cray/4.0.46&amp;quot;) and the Cray compiler&lt;br /&gt;
environment (&amp;quot;cce/8.0.7&amp;quot;) are loaded by default, among other things (PBS, MPICH, etc.).  To unload these&lt;br /&gt;
Cray modules and load in the Portland Group (PGI) equivalents we need to know the names of the PGI &lt;br /&gt;
modules.   The &amp;quot;module avail&amp;quot; command will tell us this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
user@salk:~&amp;gt; module avail&lt;br /&gt;
.&lt;br /&gt;
.&lt;br /&gt;
(several sections of output removed)&lt;br /&gt;
.&lt;br /&gt;
.&lt;br /&gt;
------------------------------------------------ /opt/modulefiles -----------------------------------------------------&lt;br /&gt;
Base-opts/1.0.2-1.0400.31284.2.2.gem(default)     gcc/4.1.2                                         pbs/11.2.0.113417&lt;br /&gt;
PrgEnv-cray/3.1.61                                gcc/4.2.4                                         pbs/11.3.0.121723(default)&lt;br /&gt;
PrgEnv-cray/4.0.46(default)                       gcc/4.4.2                                         petsc/3.1.08&lt;br /&gt;
PrgEnv-gnu/3.1.61                                 gcc/4.4.4                                         petsc/3.1.09&lt;br /&gt;
PrgEnv-gnu/4.0.46(default)                        gcc/4.5.1                                         petsc-complex/3.1.08&lt;br /&gt;
PrgEnv-intel/3.1.61                               gcc/4.5.2                                         petsc-complex/3.1.09&lt;br /&gt;
PrgEnv-intel/4.0.46(default)                      gcc/4.5.3                                         pgi/12.10&lt;br /&gt;
PrgEnv-pathscale/3.1.61                           gcc/4.6.1                                         pgi/12.3&lt;br /&gt;
PrgEnv-pathscale/4.0.46(default)                  gcc/4.7.1(default)                                pgi/12.6(default)&lt;br /&gt;
PrgEnv-pgi/3.1.61                                 hss-llm/6.0.0(default)                            pgi/12.8&lt;br /&gt;
PrgEnv-pgi/4.0.46(default)                        intel/12.1.1.256                                  wrf/3.3.0&lt;br /&gt;
acml/4.4.0                                        intel/12.1.4.319(default)                         wrf/3.4.0(default)&lt;br /&gt;
acml/5.1.0(default)                               intel/12.1.5.339                                  xt-asyncpe/5.01&lt;br /&gt;
admin-modules/1.0.2-1.0400.31284.2.2.gem(default) java/jdk1.6.0_24                                  xt-asyncpe/5.05&lt;br /&gt;
amber/12(default)                                 java/jdk1.7.0_03(default)                         xt-asyncpe/5.13(default)&lt;br /&gt;
cce/8.0.7(default)                                mazama/6.0.0(default)                             xt-libsci/11.0.00&lt;br /&gt;
chapel/1.4.0                                      modules/3.2.6.6(default)                          xt-libsci/11.0.04&lt;br /&gt;
chapel/1.5.0(default)                             mrnet/3.0.0(default)                              xt-libsci/11.1.00(default)&lt;br /&gt;
fftw/2.1.5.3                                      pathscale/4.0.12.1(default)                       xt-papi/4.2.0&lt;br /&gt;
fftw/3.2.2.1(default)                             pathscale/4.0.9                                   xt-papi/4.3.0(default)&lt;br /&gt;
fftw/3.3.0.1                                      pbs/11.1.0.111761&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There are several versions of the PGI compilers and two versions of the PGI Programming Environment for&lt;br /&gt;
the Cray (SALK).  We are interested in loading PGI&#039;s 12.10 release (not the default, which is &amp;quot;pgi/12.6&amp;quot;) and the&lt;br /&gt;
most current release of the PGI programming environment (&amp;quot;PrgEnv-pgi/4.0.46&amp;quot;), which is the default.  The&lt;br /&gt;
PGI programming environment for the Cray provides all the environmental settings required to use the&lt;br /&gt;
PGI compilers on the Cray, including a number of custom libraries.&lt;br /&gt;
&lt;br /&gt;
Here is a series of module commands to unload the Cray defaults, load the PGI modules mentioned,&lt;br /&gt;
and load version 4.2.0 of NETCDF compiled with the PGI compilers.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
user@salk:~&amp;gt; module unload PrgEnv-cray&lt;br /&gt;
user@salk:~&amp;gt; module load PrgEnv-pgi&lt;br /&gt;
user@salk:~&amp;gt; module unload pgi&lt;br /&gt;
user@salk:~&amp;gt; module load pgi/12.10&lt;br /&gt;
user@salk:~&amp;gt; &lt;br /&gt;
user@salk:~&amp;gt; module load netcdf/4.2.0&lt;br /&gt;
user@salk:~&amp;gt;&lt;br /&gt;
user@salk:~&amp;gt; cc -V&lt;br /&gt;
/opt/cray/xt-asyncpe/5.13/bin/cc: INFO: Compiling with CRAYPE_COMPILE_TARGET=native.&lt;br /&gt;
&lt;br /&gt;
pgcc 12.10-0 64-bit target on x86-64 Linux &lt;br /&gt;
Copyright 1989-2000, The Portland Group, Inc.  All Rights Reserved.&lt;br /&gt;
Copyright 2000-2012, STMicroelectronics, Inc.  All Rights Reserved.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Several comments about this series of commands may be useful.  First, the first three commands&lt;br /&gt;
do not include version numbers and will therefore load or unload the current default versions.  In&lt;br /&gt;
the third line, we unload the default version of the PGI compiler (version 12.6), which was loaded with&lt;br /&gt;
the rest of the PGI Programming Environment in the second line.  We then load the non-default&lt;br /&gt;
and more recent release from PGI, version 12.10, in the fourth line.  Later, we load NETCDF version&lt;br /&gt;
4.2.0 which, because we have already loaded the PGI Programming Environment, will be the version&lt;br /&gt;
of NETCDF compiled with the PGI compilers.  Finally, we check which compiler the Cray &amp;quot;cc&amp;quot;&lt;br /&gt;
compiler wrapper actually invokes after this sequence of module commands.  We see that indeed &amp;quot;pgcc&amp;quot;&lt;br /&gt;
version 12.10 is being used.&lt;br /&gt;
&lt;br /&gt;
We can confirm all this by again entering &amp;quot;module list&amp;quot;.  Notice that the Cray-related compiler modules&lt;br /&gt;
have been replaced by those from PGI and that NETCDF version 4.2.0 has been loaded.  We are ready&lt;br /&gt;
to use the new PGI-based compiler environment.  It is left as an exercise to the reader to figure out&lt;br /&gt;
how the series of commands listed above could have been shortened by using the &amp;quot;module swap&amp;quot; sub-&lt;br /&gt;
command.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
user@salk:~&amp;gt; module list&lt;br /&gt;
Currently Loaded Modulefiles:&lt;br /&gt;
  1) modules/3.2.6.6&lt;br /&gt;
  2) nodestat/2.2-1.0400.31264.2.5.gem&lt;br /&gt;
  3) sdb/1.0-1.0400.32124.7.19.gem&lt;br /&gt;
  4) MySQL/5.0.64-1.0000.5053.22.1&lt;br /&gt;
  5) lustre-cray_gem_s/1.8.6_2.6.32.45_0.3.2_1.0400.6453.5.1-1.0400.32127.1.90&lt;br /&gt;
  6) udreg/2.3.1-1.0400.4264.3.1.gem&lt;br /&gt;
  7) ugni/2.3-1.0400.4374.4.88.gem&lt;br /&gt;
  8) gni-headers/2.1-1.0400.4351.3.1.gem&lt;br /&gt;
  9) dmapp/3.2.1-1.0400.4255.2.159.gem&lt;br /&gt;
 10) xpmem/0.1-2.0400.31280.3.1.gem&lt;br /&gt;
 11) hss-llm/6.0.0&lt;br /&gt;
 12) Base-opts/1.0.2-1.0400.31284.2.2.gem&lt;br /&gt;
 13) xtpe-network-gemini&lt;br /&gt;
 14) xtpe-mc8&lt;br /&gt;
 15) cray-mpich2/5.5.3&lt;br /&gt;
 16) pbs/11.3.0.121723&lt;br /&gt;
 17) xt-libsci/11.1.00&lt;br /&gt;
 18) pmi/3.0.0-1.0000.8661.28.2807.gem&lt;br /&gt;
 19) xt-asyncpe/5.13&lt;br /&gt;
 20) atp/1.5.1&lt;br /&gt;
 21) PrgEnv-pgi/4.0.46&lt;br /&gt;
 22) pgi/12.10&lt;br /&gt;
 23) hdf5/1.8.8&lt;br /&gt;
 24) netcdf/4.2.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Applications_Environment&amp;diff=147</id>
		<title>Applications Environment</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Applications_Environment&amp;diff=147"/>
		<updated>2022-10-27T20:12:45Z</updated>

		<summary type="html">&lt;p&gt;James: Text replacement - &amp;quot;[pP][bB][sS]&amp;quot; to &amp;quot;SLURM&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= [[Using Modules to Run your Applications]] =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Modules is a software package that provides for the fast and convenient management of the components of&lt;br /&gt;
a user&#039;s environment via &#039;&#039;&#039;modulefiles&#039;&#039;&#039;.  When executed by the module command, each modulefile fully&lt;br /&gt;
configures the environment for its associated application or application group.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The modules configuration language also allows for the management of application environment conflicts and dependencies.&lt;br /&gt;
The modules software allows users to load (and unload and reload) an application and/or system environment&lt;br /&gt;
that is specific to their needs and avoids the need to set and manage a large, one-size-fits-all, generic environment&lt;br /&gt;
for everyone at login.  &lt;br /&gt;
&lt;br /&gt;
Modules is the default approach to managing the user applications environment.  The CUNY HPC Center system BOB, currently used almost entirely for Gaussian jobs,&lt;br /&gt;
will NOT be reconfigured with the modules software.  Modules version 3.2.9 is the default on the CUNY HPC Center systems.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Modules, Learning by Example&#039;&#039;&#039;&lt;br /&gt;
**Example 1,  Basic Non-Cray System&lt;br /&gt;
**Example 2,  Less Basic From SALK (Cray System)&lt;br /&gt;
&lt;br /&gt;
Using the modules package, users can easily set a collection of environment variables specific to their&lt;br /&gt;
compilation, parallel programming, and/or application requirements on the HPC Center&#039;s systems.  The modules system&lt;br /&gt;
also makes it convenient to advance or roll back compiler, parallel programming, or application versions when defaults&lt;br /&gt;
are found to have bugs or performance issues.  Whatever the task, the modules package can adjust the environment&lt;br /&gt;
in an orderly way, altering or setting environment variables such as PATH, MANPATH, LD_LIBRARY_PATH, etc.,&lt;br /&gt;
and providing some basic descriptive information about the application version being loaded and the purpose of the&lt;br /&gt;
modulefile through the module help facility.&lt;br /&gt;
&lt;br /&gt;
In addition to each application-specific modulefile, the modules package functions through a collection of&lt;br /&gt;
sub-commands given after the initial module command itself, as in &amp;quot;module list&amp;quot; for instance.  All of these sub-&lt;br /&gt;
commands are described in detail in the module man page (&amp;quot;man module&amp;quot;), but a list of some of the more important&lt;br /&gt;
and commonly used sub-commands is provided here:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Module sub-commands:&lt;br /&gt;
&lt;br /&gt;
list&lt;br /&gt;
load&lt;br /&gt;
unload&lt;br /&gt;
switch&lt;br /&gt;
avail&lt;br /&gt;
show&lt;br /&gt;
help&lt;br /&gt;
purge&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
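In plain shell terms, the two most common sub-commands are symmetric: &amp;quot;load&amp;quot; prepends an&lt;br /&gt;
application&#039;s directory to PATH and &amp;quot;unload&amp;quot; removes it again.  A minimal sketch of this&lt;br /&gt;
lifecycle (the install path below is hypothetical):&lt;br /&gt;
&lt;br /&gt;
```shell
# Sketch of the load/unload lifecycle in plain shell terms.
# "appdir" is a hypothetical install path, not a real application.
appdir="/share/apps/example/1.0/bin"

# "module load example" effectively does:
export PATH="$appdir:$PATH"

# "module unload example" reverses it; here we strip the entry we added:
export PATH="${PATH#"$appdir":}"
```
&lt;br /&gt;
The real package also handles MANPATH, LD_LIBRARY_PATH, and the conflicts and dependencies&lt;br /&gt;
declared in each modulefile, which is why the bookkeeping is best left to module itself.&lt;br /&gt;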
&lt;br /&gt;
==Modules, Learning by Example ==&lt;br /&gt;
The best way to explain how to use the modules package and its sub-commands is to consider some simple&lt;br /&gt;
examples of typical workflows that involve modules.  Here are two examples.  Again, for a more complete&lt;br /&gt;
description of the modules package, please refer to &amp;quot;man module&amp;quot;.&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
=== Example 1,  Basic Non-Cray System ===&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;Consider the unmodified PATH variable right after login to one of the CUNY HPC Center systems.&lt;br /&gt;
Without any custom or local environmental path settings, it would look something like this with no&lt;br /&gt;
compiler, parallel programming model, or application-specific information in it:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
username@service0:~&amp;gt; echo $PATH | tr -s &#039;:&#039; &#039;\n&#039;&lt;br /&gt;
/scratch/username/bin&lt;br /&gt;
/usr/local/bin&lt;br /&gt;
/usr/bin&lt;br /&gt;
/bin&lt;br /&gt;
/usr/bin/X11&lt;br /&gt;
/usr/X11R6/bin&lt;br /&gt;
/usr/games&lt;br /&gt;
/opt/c3/bin&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We note that there appears to be no path to the application we are interested in running, which in this example is Wolfram&#039;s&lt;br /&gt;
Mathematica.  Typing &amp;quot;which math&amp;quot; to find Mathematica (&amp;quot;math&amp;quot; is the command-line name for Mathematica)&lt;br /&gt;
at the terminal yields:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
username@service0:~&amp;gt;  which math&lt;br /&gt;
which: no math in (/scratch/username/bin:/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/X11R6/bin:/usr/games:/opt/c3/bin)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Mathematica executable &amp;quot;math&amp;quot; is not found in the default PATH variable defined by the system at login.  Modules can be&lt;br /&gt;
used to remedy this problem by adding the required path.  To check which module files (if any) are already loaded into&lt;br /&gt;
our environment, we can type the &amp;quot;module list&amp;quot; sub-command at the terminal prompt:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
username@service0:~&amp;gt; module list&lt;br /&gt;
No Modulefiles Currently Loaded.&lt;br /&gt;
username@service0:~&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
No modules are loaded.  The module file for Mathematica has not been loaded, so it is no surprise&lt;br /&gt;
that the Mathematica command-line &amp;quot;math&amp;quot; was not found.  The next question is whether the HPC Center has&lt;br /&gt;
installed Mathematica on this system and created a module file for it.  To find out, we use&lt;br /&gt;
the &amp;quot;module avail&amp;quot; sub-command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
username@service0:~&amp;gt; module avail&lt;br /&gt;
---------------------------- /share/apps/modules/default/modulefiles_UserApplications --------------------------------------&lt;br /&gt;
&lt;br /&gt;
adf/2012.01(default)         cesm/1.0.3                   hoomd/0.9.2(default)         ncar/5.2.0_NCL(default)      pgi/12.3(default)&lt;br /&gt;
auto3dem/4.02(default)       cesm/1.0.4(default)          intel/12.1.3.293(default)    nwchem/6.1.1(default)        phoenics/2009(default)&lt;br /&gt;
autodock/4.2.3(default)      cuda/5.0(default)            ls-dyna/6.0.0(default)       octopus/4.0.0(default)       r/2.14.1(default)&lt;br /&gt;
beagle/0.2(default)          gromacs/4.5.5_32bit          mathematica/8.0.4(default)   openmpi/1.5.5_intel(default) wrf/3.4.0(default)&lt;br /&gt;
best/2.2L(default)           gromacs/4.5.5_64bit(default) matlab/R2012a(default)       openmpi/1.5.5_pgi&lt;br /&gt;
&lt;br /&gt;
--------------------------------- /share/apps/modules/default/modulefiles_System -------------------------------------------&lt;br /&gt;
&lt;br /&gt;
module-info   modules       version/3.2.9&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The listing shows all available module files on this system, both those that are user-application related&lt;br /&gt;
and those that are more system related.  As shown in the output, these two types of module files are&lt;br /&gt;
stored in different directories.  Looking through the application list, there is a module for Mathematica&lt;br /&gt;
version 8.0.4, which also happens to be the default.  On this system, the modules package has only&lt;br /&gt;
just been installed, so only one version of each application has been adapted to the module&lt;br /&gt;
system, and that version is the default.&lt;br /&gt;
&lt;br /&gt;
The module file responsible for setting up the correct environment needed to run Mathematica can&lt;br /&gt;
now be loaded:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load mathematica&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Because there is only one version and it is the default, there is no need to include the version-specific&lt;br /&gt;
extension to load it.   To explicitly load version 8.0.4 (or any other specific and non-default version)&lt;br /&gt;
one would use:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load mathematica/8.0.4&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After loading, the PATH environment variable includes the path to Mathematica:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
username@service0:~&amp;gt; echo $PATH | tr -s &#039;:&#039; &#039;\n&#039;&lt;br /&gt;
/scratch/username/bin&lt;br /&gt;
/usr/local/bin&lt;br /&gt;
/usr/bin&lt;br /&gt;
/bin&lt;br /&gt;
/usr/bin/X11&lt;br /&gt;
/usr/X11R6/bin&lt;br /&gt;
/usr/games&lt;br /&gt;
/opt/c3/bin&lt;br /&gt;
/share/apps/mathematica/8.0.4/Executables&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This can be verified by rerunning the &amp;quot;which math&amp;quot; command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
username@service0:~&amp;gt; which math&lt;br /&gt;
/share/apps/mathematica/8.0.4/Executables/math&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once the head or login node environment variables are properly set, one can create a SLURM script&lt;br /&gt;
to run a Mathematica job on a compute node.  The &amp;quot;#SBATCH --export=ALL&amp;quot; option (the SLURM default)&lt;br /&gt;
ensures that the environment just set is passed on to the compute nodes:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=mmat8_serial1&lt;br /&gt;
#SBATCH --partition=production&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --mem=1920mb&lt;br /&gt;
#SBATCH --export=ALL&lt;br /&gt;
&lt;br /&gt;
# Find out the name of the master execution host (compute node)&lt;br /&gt;
echo -n &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; SLURM master compute node is: &amp;quot;&lt;br /&gt;
hostname&lt;br /&gt;
&lt;br /&gt;
# SLURM starts the job in the submission directory by default;&lt;br /&gt;
# change to it explicitly in case that default has been altered&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
# Just point to the serial executable to run&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; Begin Mathematica Serial Run ...&amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
math -run &amp;lt;test_run.nb &amp;gt; output&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; End   Mathematica Serial Run ...&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Since the PATH variable in the login environment now includes the location of the Mathematica&lt;br /&gt;
executable, and the &amp;quot;#SBATCH --export=ALL&amp;quot; option ensures that this environment is passed to the&lt;br /&gt;
compute node that the job runs on, the math command in the SLURM script will execute without&lt;br /&gt;
environment-related problems.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
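Shell syntax errors in a batch script can be caught before submission with &amp;quot;bash -n&amp;quot;, which&lt;br /&gt;
parses the script without executing it.  A sketch, using an abridged serial script in SLURM&#039;s&lt;br /&gt;
#SBATCH directive form (the Mathematica invocation itself is omitted, since it only runs on the&lt;br /&gt;
cluster):&lt;br /&gt;
&lt;br /&gt;
```shell
# Write an abridged SLURM batch script to a temp file, then
# syntax-check it with "bash -n" (parse only, no execution).
script=$(mktemp)
printf '%s\n' \
    '#!/bin/bash' \
    '#SBATCH --job-name=mmat8_serial1' \
    '#SBATCH --partition=production' \
    '#SBATCH --ntasks=1' \
    '#SBATCH --mem=1920mb' \
    'hostname' > "$script"
bash -n "$script"; echo "syntax=$?"    # prints syntax=0
```
&lt;br /&gt;
Because #SBATCH lines are comments to the shell, &amp;quot;bash -n&amp;quot; checks only the shell commands;&lt;br /&gt;
sbatch itself validates the directives at submission time.&lt;br /&gt;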
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
=== Example 2,  Less Basic From SALK (Cray System) ===&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;Like all of the systems at the CUNY HPC Center, the Cray SALK has multiple compilers, parallel programming&lt;br /&gt;
models, libraries, and applications.  In addition, SALK uses a custom high-performance interconnect with its&lt;br /&gt;
own libraries, has its own compiler suite and compiling system, and many other custom libraries.  Setting up&lt;br /&gt;
and/or tearing down a given environment that makes all this work correctly is more complicated than it is on&lt;br /&gt;
the other systems at the HPC Center.  Modules simplifies this process tremendously for the user.&lt;br /&gt;
&lt;br /&gt;
Here is an example of how to swap out the default Cray compiler environment on SALK and swap in the&lt;br /&gt;
compiler suite from the Portland Group, including all the right MPI libraries from Cray.  In this case, we swap in&lt;br /&gt;
a newer release of the Portland Group compilers, not the current default on the Cray, together with the version of the&lt;br /&gt;
NETCDF library that has been compiled with the Portland Group compilers.&lt;br /&gt;
&lt;br /&gt;
Having logged into SALK, we determine what modules have been loaded by default with &amp;quot;module list&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
user@salk:~&amp;gt; module list&lt;br /&gt;
Currently Loaded Modulefiles:&lt;br /&gt;
  1) modules/3.2.6.6&lt;br /&gt;
  2) nodestat/2.2-1.0400.31264.2.5.gem&lt;br /&gt;
  3) sdb/1.0-1.0400.32124.7.19.gem&lt;br /&gt;
  4) MySQL/5.0.64-1.0000.5053.22.1&lt;br /&gt;
  5) lustre-cray_gem_s/1.8.6_2.6.32.45_0.3.2_1.0400.6453.5.1-1.0400.32127.1.90&lt;br /&gt;
  6) udreg/2.3.1-1.0400.4264.3.1.gem&lt;br /&gt;
  7) ugni/2.3-1.0400.4374.4.88.gem&lt;br /&gt;
  8) gni-headers/2.1-1.0400.4351.3.1.gem&lt;br /&gt;
  9) dmapp/3.2.1-1.0400.4255.2.159.gem&lt;br /&gt;
 10) xpmem/0.1-2.0400.31280.3.1.gem&lt;br /&gt;
 11) hss-llm/6.0.0&lt;br /&gt;
 12) Base-opts/1.0.2-1.0400.31284.2.2.gem&lt;br /&gt;
 13) xtpe-network-gemini&lt;br /&gt;
 14) cce/8.0.7&lt;br /&gt;
 15) acml/5.1.0&lt;br /&gt;
 16) xt-libsci/11.1.00&lt;br /&gt;
 17) pmi/3.0.0-1.0000.8661.28.2807.gem&lt;br /&gt;
 18) rca/1.0.0-2.0400.31553.3.58.gem&lt;br /&gt;
 19) xt-asyncpe/5.13&lt;br /&gt;
 20) atp/1.5.1&lt;br /&gt;
 21) PrgEnv-cray/4.0.46&lt;br /&gt;
 22) xtpe-mc8&lt;br /&gt;
 23) cray-mpich2/5.5.3&lt;br /&gt;
 24) pbs/11.3.0.121723&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From the list, we see that the Cray Programming Environment (&amp;quot;PrgEnv-cray/4.0.46&amp;quot;) and the Cray compiler&lt;br /&gt;
environment (&amp;quot;cce/8.0.7&amp;quot;) are loaded by default, among other things (PBS, MPICH, etc.).  To unload these&lt;br /&gt;
Cray modules and load in the Portland Group (PGI) equivalents we need to know the names of the PGI &lt;br /&gt;
modules.   The &amp;quot;module avail&amp;quot; command will tell us this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
user@salk:~&amp;gt; module avail&lt;br /&gt;
.&lt;br /&gt;
.&lt;br /&gt;
(several sections of output removed)&lt;br /&gt;
.&lt;br /&gt;
.&lt;br /&gt;
------------------------------------------------ /opt/modulefiles -----------------------------------------------------&lt;br /&gt;
Base-opts/1.0.2-1.0400.31284.2.2.gem(default)     gcc/4.1.2                                         pbs/11.2.0.113417&lt;br /&gt;
PrgEnv-cray/3.1.61                                gcc/4.2.4                                         pbs/11.3.0.121723(default)&lt;br /&gt;
PrgEnv-cray/4.0.46(default)                       gcc/4.4.2                                         petsc/3.1.08&lt;br /&gt;
PrgEnv-gnu/3.1.61                                 gcc/4.4.4                                         petsc/3.1.09&lt;br /&gt;
PrgEnv-gnu/4.0.46(default)                        gcc/4.5.1                                         petsc-complex/3.1.08&lt;br /&gt;
PrgEnv-intel/3.1.61                               gcc/4.5.2                                         petsc-complex/3.1.09&lt;br /&gt;
PrgEnv-intel/4.0.46(default)                      gcc/4.5.3                                         pgi/12.10&lt;br /&gt;
PrgEnv-pathscale/3.1.61                           gcc/4.6.1                                         pgi/12.3&lt;br /&gt;
PrgEnv-pathscale/4.0.46(default)                  gcc/4.7.1(default)                                pgi/12.6(default)&lt;br /&gt;
PrgEnv-pgi/3.1.61                                 hss-llm/6.0.0(default)                            pgi/12.8&lt;br /&gt;
PrgEnv-pgi/4.0.46(default)                        intel/12.1.1.256                                  wrf/3.3.0&lt;br /&gt;
acml/4.4.0                                        intel/12.1.4.319(default)                         wrf/3.4.0(default)&lt;br /&gt;
acml/5.1.0(default)                               intel/12.1.5.339                                  xt-asyncpe/5.01&lt;br /&gt;
admin-modules/1.0.2-1.0400.31284.2.2.gem(default) java/jdk1.6.0_24                                  xt-asyncpe/5.05&lt;br /&gt;
amber/12(default)                                 java/jdk1.7.0_03(default)                         xt-asyncpe/5.13(default)&lt;br /&gt;
cce/8.0.7(default)                                mazama/6.0.0(default)                             xt-libsci/11.0.00&lt;br /&gt;
chapel/1.4.0                                      modules/3.2.6.6(default)                          xt-libsci/11.0.04&lt;br /&gt;
chapel/1.5.0(default)                             mrnet/3.0.0(default)                              xt-libsci/11.1.00(default)&lt;br /&gt;
fftw/2.1.5.3                                      pathscale/4.0.12.1(default)                       xt-papi/4.2.0&lt;br /&gt;
fftw/3.2.2.1(default)                             pathscale/4.0.9                                   xt-papi/4.3.0(default)&lt;br /&gt;
fftw/3.3.0.1                                      pbs/11.1.0.111761&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There are several versions of the PGI compilers and two versions of the PGI Programming Environment for&lt;br /&gt;
the Cray (SALK).  We want to load PGI&#039;s 12.10 release (not the default, which is &amp;quot;pgi/12.6&amp;quot;) and the&lt;br /&gt;
most current release of the PGI programming environment (&amp;quot;PrgEnv-pgi/4.0.46&amp;quot;), which is the default.  The&lt;br /&gt;
PGI programming environment for the Cray provides all the environment settings required to use the&lt;br /&gt;
PGI compilers on the Cray, including a number of custom libraries.&lt;br /&gt;
&lt;br /&gt;
Here is a series of module commands to unload the Cray defaults, load the PGI modules mentioned,&lt;br /&gt;
and load version 4.2.0 of NETCDF compiled with the PGI compilers.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
user@salk:~&amp;gt; module unload PrgEnv-cray&lt;br /&gt;
user@salk:~&amp;gt; module load PrgEnv-pgi&lt;br /&gt;
user@salk:~&amp;gt; module unload pgi&lt;br /&gt;
user@salk:~&amp;gt; module load pgi/12.10&lt;br /&gt;
user@salk:~&amp;gt; &lt;br /&gt;
user@salk:~&amp;gt; module load netcdf/4.2.0&lt;br /&gt;
user@salk:~&amp;gt;&lt;br /&gt;
user@salk:~&amp;gt; cc -V&lt;br /&gt;
/opt/cray/xt-asyncpe/5.13/bin/cc: INFO: Compiling with CRAYPE_COMPILE_TARGET=native.&lt;br /&gt;
&lt;br /&gt;
pgcc 12.10-0 64-bit target on x86-64 Linux &lt;br /&gt;
Copyright 1989-2000, The Portland Group, Inc.  All Rights Reserved.&lt;br /&gt;
Copyright 2000-2012, STMicroelectronics, Inc.  All Rights Reserved.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A few comments about this series of commands may be useful.  First, the first three commands&lt;br /&gt;
do not include version numbers and will therefore load or unload the current default versions.  In&lt;br /&gt;
the third line, we unload the default version of the PGI compiler (version 12.6), which was loaded with&lt;br /&gt;
the rest of the PGI Programming Environment in the second line.  In the fourth line, we then load the&lt;br /&gt;
newer, non-default release from PGI, version 12.10.  Later, we load NETCDF version&lt;br /&gt;
4.2.0 which, because we have already loaded the PGI Programming Environment, will be the version&lt;br /&gt;
of NETCDF 4.2.0 compiled with the PGI compilers.  Finally, we check which compiler the Cray &amp;quot;cc&amp;quot;&lt;br /&gt;
wrapper actually invokes after this sequence of module commands.  We see that indeed &amp;quot;pgcc&amp;quot;&lt;br /&gt;
version 12.10 is being used.&lt;br /&gt;
&lt;br /&gt;
We can confirm all this by again entering &amp;quot;module list&amp;quot;.  Notice that the Cray-related compiler modules&lt;br /&gt;
have been replaced by those from PGI and that NETCDF version 4.2.0 has been loaded.  We are ready&lt;br /&gt;
to use the new PGI-based compiler environment.  It is left as an exercise to the reader to figure out&lt;br /&gt;
how the series of commands listed above could have been shortened by using the &amp;quot;module swap&amp;quot;&lt;br /&gt;
sub-command.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
user@salk:~&amp;gt; module list&lt;br /&gt;
Currently Loaded Modulefiles:&lt;br /&gt;
  1) modules/3.2.6.6&lt;br /&gt;
  2) nodestat/2.2-1.0400.31264.2.5.gem&lt;br /&gt;
  3) sdb/1.0-1.0400.32124.7.19.gem&lt;br /&gt;
  4) MySQL/5.0.64-1.0000.5053.22.1&lt;br /&gt;
  5) lustre-cray_gem_s/1.8.6_2.6.32.45_0.3.2_1.0400.6453.5.1-1.0400.32127.1.90&lt;br /&gt;
  6) udreg/2.3.1-1.0400.4264.3.1.gem&lt;br /&gt;
  7) ugni/2.3-1.0400.4374.4.88.gem&lt;br /&gt;
  8) gni-headers/2.1-1.0400.4351.3.1.gem&lt;br /&gt;
  9) dmapp/3.2.1-1.0400.4255.2.159.gem&lt;br /&gt;
 10) xpmem/0.1-2.0400.31280.3.1.gem&lt;br /&gt;
 11) hss-llm/6.0.0&lt;br /&gt;
 12) Base-opts/1.0.2-1.0400.31284.2.2.gem&lt;br /&gt;
 13) xtpe-network-gemini&lt;br /&gt;
 14) xtpe-mc8&lt;br /&gt;
 15) cray-mpich2/5.5.3&lt;br /&gt;
 16) SLURM/11.3.0.121723&lt;br /&gt;
 17) xt-libsci/11.1.00&lt;br /&gt;
 18) pmi/3.0.0-1.0000.8661.28.2807.gem&lt;br /&gt;
 19) xt-asyncpe/5.13&lt;br /&gt;
 20) atp/1.5.1&lt;br /&gt;
 21) PrgEnv-pgi/4.0.46&lt;br /&gt;
 22) pgi/12.10&lt;br /&gt;
 23) hdf5/1.8.8&lt;br /&gt;
 24) netcdf/4.2.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
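As a hint for the &amp;quot;module swap&amp;quot; exercise above, one possible shortening (a sketch, not the only answer) is shown below, assuming the same module names used earlier.  Each &amp;quot;module swap&amp;quot; unloads its first argument and loads its second in a single step, replacing the unload/load pairs used above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
user@salk:~&amp;gt; module swap PrgEnv-cray PrgEnv-pgi&lt;br /&gt;
user@salk:~&amp;gt; module swap pgi pgi/12.10&lt;br /&gt;
user@salk:~&amp;gt; module load netcdf/4.2.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;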
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Applications =&lt;br /&gt;
&lt;br /&gt;
This is an overview of the user-level HPC applications supported by the&lt;br /&gt;
HPC Center staff for the benefit of the entire CUNY HPC user community.&lt;br /&gt;
A user can choose to install any application that they are licensed for in&lt;br /&gt;
their own account, or appeal (based on general interest) to have it installed&lt;br /&gt;
by HPC Center staff in the shared system directory (usually /shared/apps).&lt;br /&gt;
&lt;br /&gt;
Not every user-level application is installed on every system, because&lt;br /&gt;
system architectural differences, load-balancing considerations, licensing&lt;br /&gt;
limitations, the time required to maintain the applications, and other factors&lt;br /&gt;
sometimes dictate otherwise.  Here, we present the current CUNY HPC Center user-level&lt;br /&gt;
application catalogue and note the systems on which each application is installed&lt;br /&gt;
and licensed to run.&lt;br /&gt;
&lt;br /&gt;
We encourage the CUNY HPC community to help the HPC Center staff create an&lt;br /&gt;
applications catalogue that is closely tuned to the needs of the community. As&lt;br /&gt;
such, we hope that users will solicit staff help in growing our application install&lt;br /&gt;
base to suit the needs of the community, whatever the application discipline (CAE,&lt;br /&gt;
CFD, COMPCHEM, QCD, BIOINFORMATICS, etc.). The CUNY HPC Center will do its best&lt;br /&gt;
to satisfy reasonable software requests.&lt;br /&gt;
&amp;lt;br/&amp;gt;Software requests must be submitted by supervisors and/or PIs only. Users can install applications in their own home directories as needed.&lt;br /&gt;
&lt;br /&gt;
Unless otherwise noted, all applications built locally were built using our default&lt;br /&gt;
Intel-OpenMPI applications stack.  Furthermore, the SLURM job submission scripts&lt;br /&gt;
below worked at the time this section of the Wiki was written, but the&lt;br /&gt;
number of processors (cores), memory, and process placement defined in the&lt;br /&gt;
example scripts are not necessarily optimal for wall-clock or CPU-time performance.&lt;br /&gt;
Users should apply their knowledge of the application, the system, and the benefit&lt;br /&gt;
of their experience to choose the optimal combination of processors and memory&lt;br /&gt;
for their scripts.  Details on how to make full use of the SLURM job submission&lt;br /&gt;
options are covered in the SLURM section below.&lt;br /&gt;
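For orientation, a minimal SLURM batch script has the shape sketched below.  The partition name, job name, core count, memory, and executable are placeholders, not recommended values; tune them per application as discussed above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition=production    # queue/partition name (placeholder)&lt;br /&gt;
#SBATCH --job-name=my_job         # job name (placeholder)&lt;br /&gt;
#SBATCH --ntasks=8                # number of cores / MPI processes&lt;br /&gt;
#SBATCH --mem=16G                 # total memory for the job&lt;br /&gt;
&lt;br /&gt;
cd $SLURM_SUBMIT_DIR              # the directory the job was submitted from&lt;br /&gt;
&lt;br /&gt;
mpirun -np 8 ./my_application     # placeholder executable name&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;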
&lt;br /&gt;
=== ADCIRC ===&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ADCIRC is a system of programs for solving time-dependent, free-surface, circulation and transport problems in two and three dimensions.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;  These programs utilize the finite element method in space, allowing the use of highly flexible, unstructured grids. The ADCIRC distribution includes and integrates the METIS tool for partitioning its unstructured grids. In addition, ADCIRC includes a distribution of SWAN, to which it can be coupled to add a near-shore wave simulation model. Typical ADCIRC applications have included: (i) modeling tides and wind-driven circulation, (ii) analysis of hurricane storm surge and flooding, (iii) dredging feasibility and material disposal studies, (iv) larval transport studies, and (v) near-shore marine operations. For more detail on using ADCIRC, please visit the ADCIRC website here [http://adcirc.org/index.html] and read the ADCIRC manual [http://adcirc.org/documentv49/ADCIRC_title_page.html]. Details on using SWAN with ADCIRC can be found here [http://www.caseydietrich.com/swanadcirc] and at the SWAN web site [http://swanmodel.sourceforge.net]. More information about use and set-up can be found here [[ADCIRC]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===AMBER (Assisted Model Building with Energy Refinement) ===&lt;br /&gt;
Amber is the collective name for a suite of programs for classical bio-molecular simulations. &lt;br /&gt;
The name &amp;quot;Amber&amp;quot; also denotes the family of potentials (force fields) used with Amber &lt;br /&gt;
software. Here we discuss only the simulation packages, not the force fields or the free tools&lt;br /&gt;
available via the AmberTools package. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/amber&lt;br /&gt;
&lt;br /&gt;
=== ANVIO ===&lt;br /&gt;
&lt;br /&gt;
Anvio is an analysis and visualization platform for genomics data. Anvio allows various types of workflows to be&lt;br /&gt;
established. [[ANVIO]]&lt;br /&gt;
&lt;br /&gt;
=== AUGUSTUS ===&lt;br /&gt;
&lt;br /&gt;
AUGUSTUS is a program that predicts genes in eukaryotic genomic sequences. It is gene-finding software based on Hidden Markov Models (HMMs),&lt;br /&gt;
described in papers by Stanke and Waack (2003), Stanke et al. (2006), Stanke et al. (2006b), and Stanke et al. (2008). The local version of the program is installed on&lt;br /&gt;
Penzias. More information can be found here: [[AUGUSTUS]]&lt;br /&gt;
&lt;br /&gt;
=== AUTODOCK ===&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
AutoDock is a suite of automated docking tools.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt; It is designed to predict how small molecules, such as substrates or drug candidates, bind to a receptor of known 3D structure.  AutoDock actually consists of two main programs: &#039;&#039;autodock&#039;&#039; itself performs the docking of the ligand to a set of grids describing the target protein; &#039;&#039;autogrid&#039;&#039; pre-calculates these grids. More information about the software may be found at the autodock web-page [http://autodock.scripps.edu/]. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/autodock&lt;br /&gt;
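The two-program workflow described above is typically run from the command line.  A sketch with hypothetical file names (protein.gpf and ligand.dpf are placeholder parameter files, not files we distribute):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Pre-calculate the affinity grids described by the grid parameter file&lt;br /&gt;
autogrid4 -p protein.gpf -l protein.glg&lt;br /&gt;
&lt;br /&gt;
# Dock the ligand using the docking parameter file&lt;br /&gt;
autodock4 -p ligand.dpf -l ligand.dlg&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;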
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===BAMOVA===&lt;br /&gt;
Bamova is a package used to do genetic analysis of a wide range of organisms on the basis of &lt;br /&gt;
next-generation sequence data. The software implements Bayesian Analysis of Molecular Variance and &lt;br /&gt;
different likelihood models for three different types of molecular data &lt;br /&gt;
(including two models for high throughput sequence data). For more detail on BAMOVA please visit the BAMOVA web site [http://www.uwyo.edu/buerkle/software/bamova] and manual &lt;br /&gt;
here [http://www.uwyo.edu/buerkle/software/bamova/bamova_manual_1.0.pdf]. Further information can also be found here [[BAMOVA]].&lt;br /&gt;
&lt;br /&gt;
===BAYESCAN===&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BAYESCAN is a population genomics software package.  It identifies outlier loci and is applicable&lt;br /&gt;
to both dominant and codominant data.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;BayeScan aims to identify candidate loci under natural selection from&lt;br /&gt;
genetic data, using differences in allele frequencies between populations.  BayeScan is&lt;br /&gt;
based on the multinomial-Dirichlet model.  One of the scenarios covered consists of an&lt;br /&gt;
island model in which subpopulation allele frequencies are correlated through a common&lt;br /&gt;
migrant gene pool from which they differ to varying degrees.  The difference in allele frequency&lt;br /&gt;
between this common gene pool and each subpopulation is measured by a subpopulation-specific&lt;br /&gt;
FST coefficient.  This formulation can therefore consider realistic ecological scenarios&lt;br /&gt;
in which the effective size and the immigration rate may differ among subpopulations.&lt;br /&gt;
&lt;br /&gt;
More detailed information on Bayescan can be found at the web site here [http://cmpg.unibe.ch/software/bayescan/index.html]&lt;br /&gt;
and in the manual here [http://cmpg.unibe.ch/software/bayescan/files/BayeScan2.1_manual.pdf]. More information about our &lt;br /&gt;
installation can be found here [[BAYESCAN]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===BEAST===&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BEAST is a powerful and flexible evolutionary analysis package for molecular sequence variation. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The package implements a family of Markov chain Monte Carlo (MCMC) algorithms for Bayesian phylogenetic inference, divergence time dating, coalescent analysis, phylogeography and related molecular evolutionary analyses. It is a cross-platform Java program for Bayesian MCMC analysis of molecular sequences. It is entirely orientated towards rooted, time-measured phylogenies inferred using strict or relaxed molecular clock models. It can be used as a method of reconstructing phylogenies, but is also a framework for testing evolutionary hypotheses without conditioning on a single tree topology.  BEAST uses MCMC to average over tree space, so that each tree is weighted proportional to its posterior probability. The distribution includes a simple to use user-interface program called &#039;BEAUti&#039; for setting up standard analyses and a suite of programs for analysing the results. For more detail on BEAST (and BEAUTi) please visit the BEAST web site [http://beast.bio.ed.ac.uk/Main_Page]. More information about our installation can be found here [http://wiki.csi.cuny.edu/cunyhpc/index.php/Template:BEAST BEAST].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===BEST===&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BEST is an application aimed at estimating gene trees and the species tree from multilocus sequences.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The program uses information from multiple gene trees and performs a Bayesian analysis to estimate the &lt;br /&gt;
topology of the species tree, divergence times and population sizes.  &lt;br /&gt;
&lt;br /&gt;
It provides a new approach for estimating the mutation-rate-based&lt;br /&gt;
phylogenetic relationships among species.  Its method accounts for deep coalescence,&lt;br /&gt;
but not for other complicating issues such as horizontal transfer or gene duplication. The&lt;br /&gt;
program works in conjunction with the popular Bayesian phylogenetics package MrBayes&lt;br /&gt;
(Ronquist and Huelsenbeck, Bioinformatics, 2003).  BEST&#039;s parameters are defined using&lt;br /&gt;
the &#039;prset&#039; command from MrBayes.  Details on BEST&#039;s capabilities and options are available&lt;br /&gt;
at the BEST web site here [http://www.stat.osu.edu/~dkp/BEST/introduction]. More information&lt;br /&gt;
about our installation is available here [[BEST]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===BOWTIE2 ===&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BOWTIE2 is an ultrafast and memory-efficient tool for aligning sequencing reads to long reference sequences. It is particularly good at aligning reads of about 50 up to 100s or 1,000s of characters, and particularly good at aligning to relatively long (e.g. mammalian) genomes.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
BOWTIE2 indexes the genome with an FM Index to keep its memory&lt;br /&gt;
footprint small: for the human genome, its memory footprint is typically around 3.2 GB. BOWTIE2 supports gapped,&lt;br /&gt;
local, and paired-end alignment modes. BOWTIE2 is part of a sequence alignment and analysis tool chain developed&lt;br /&gt;
at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics&lt;br /&gt;
and Computational Biology.  The other tools in this collection, CUFFLINKS, SAMTOOLS, and TOPHAT, are also installed at&lt;br /&gt;
the CUNY HPC Center. Additional information can be found at the BOWTIE2 home page here [http://bowtie-bio.sourceforge.net/bowtie2/index.shtml].&lt;br /&gt;
Information about our installation can be found here [[BOWTIE2]].&lt;br /&gt;
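A typical BOWTIE2 session builds the FM Index once and then aligns reads against it.  The sketch below uses hypothetical file names (genome.fa, reads_1.fq, reads_2.fq); substitute your own data:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Build the FM Index of the reference once (produces genome.*.bt2 files)&lt;br /&gt;
bowtie2-build genome.fa genome&lt;br /&gt;
&lt;br /&gt;
# Align a paired-end read set against that index, writing SAM output&lt;br /&gt;
bowtie2 -x genome -1 reads_1.fq -2 reads_2.fq -S aligned.sam&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;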
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===BPP2===&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BPP2 uses a Bayesian modeling approach to generate the posterior probabilities of species assignments taking into account uncertainties due to unknown gene trees and the ancestral coalescent process. For tractability, it relies on a user-specified guide tree to avoid integrating over all possible species delimitations.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Additional information can be found at the download site here [http://abacus.gene.ucl.ac.uk/software.html]. More information about our installation can be found here [[BPP2]].&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== BROWNIE ===&lt;br /&gt;
BROWNIE is a program for analyzing rates of continuous character evolution and looking for substantial rate differences in different parts of a tree using likelihood&lt;br /&gt;
ratio tests and Akaike Information Criterion (AIC) statistics. It now also implements many other methods for examining trait evolution and methods for doing species&lt;br /&gt;
delimitation. More information about our installation can be found here [[BROWNIE]].&lt;br /&gt;
&lt;br /&gt;
===CGAL===&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The Computational Geometry Algorithms Library (CGAL) offers geometric data structures and algorithms.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt; &lt;br /&gt;
Examples of these are triangulations (2D constrained triangulations, and Delaunay triangulations and periodic triangulations in &lt;br /&gt;
2D and 3D), Voronoi diagrams (for 2D and 3D points, 2D additively weighted Voronoi diagrams, and segment Voronoi diagrams), polygons &lt;br /&gt;
(Boolean operations, offsets, straight skeleton), polyhedra (Boolean operations), arrangements of curves and their applications &lt;br /&gt;
(2D and 3D envelopes, Minkowski sums), mesh generation (2D Delaunay mesh generation and 3D surface and volume mesh &lt;br /&gt;
generation, skin surfaces), geometry processing (surface mesh simplification, subdivision and parameterization, as well as &lt;br /&gt;
estimation of local differential properties, and approximation of ridges and umbilics), alpha shapes, convex hull &lt;br /&gt;
algorithms (in 2D, 3D and dD), search structures (kd trees for nearest neighbor search, and range and segment trees), &lt;br /&gt;
interpolation (natural neighbor interpolation and placement of streamlines), shape analysis, fitting, and distances &lt;br /&gt;
(smallest enclosing sphere of points or spheres, smallest enclosing ellipsoid of points, principal component analysis), and &lt;br /&gt;
kinetic data structures.&lt;br /&gt;
&lt;br /&gt;
The library is installed on PENZIAS.&lt;br /&gt;
&lt;br /&gt;
More information can be found here http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/CGAL. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== CONSED ===&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CONSED is a DNA sequence analysis and finishing tool that provides sequence viewing, editing, alignment, and&lt;br /&gt;
assembly capabilities from an X Window System graphical user interface (GUI).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It makes extensive use of other non-graphical,&lt;br /&gt;
underlying sequence analysis tools, including PHRED, PHRAP, and CROSSMATCH, which may also be used separately&lt;br /&gt;
and are described elsewhere in this document.  It also includes a viewer called BAMVIEW.  The CONSED tool chain is&lt;br /&gt;
developed and maintained at the University of Washington and is described&lt;br /&gt;
more completely here [http://bozeman.mbt.washington.edu/consed/consed.html].&lt;br /&gt;
CONSED is provided at the CUNY HPC Center under an academic license that allows use, but not the copying or&lt;br /&gt;
outbound transfer, of any of the executables or files distributed under this academic license.  The license is not&lt;br /&gt;
transferable in any way, and users wishing to run the application at their own site must acquire a license directly&lt;br /&gt;
from the authors.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center supports CONSED version 23.0 for interactive use on KARLE.  CONSED 23.0 and the tool&lt;br /&gt;
chain described above are also installed on ANDY to allow for batch use of the underlying support tools mentioned above&lt;br /&gt;
and described in detail below.  In general, running GUI-based applications on ANDY&#039;s login node is discouraged.  There&lt;br /&gt;
should be little need to do so, as KARLE is on the periphery of the CUNY HPC network, making login there direct, and&lt;br /&gt;
KARLE shares its HOME directory file system with ANDY, making files created on either system immediately available on&lt;br /&gt;
the other.&lt;br /&gt;
&lt;br /&gt;
Rather than rewrite portions of the CONSED manual here, users are directed to the manual&#039;s &amp;quot;Quick Tour&amp;quot; section&lt;br /&gt;
here [http://bozeman.mbt.washington.edu/consed/distributions/README.23.0.txt] and asked to walk through some&lt;br /&gt;
of the exercises after logging into KARLE.  If problems or questions come up, please post them to &amp;quot;hpchelp@csi.cuny.edu&amp;quot;.&lt;br /&gt;
The CONSED 23.0 distribution is installed on KARLE in the following directory:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/share/apps/consed/default&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
All the files in the distribution can be found there.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== CP2K ===&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CP2K is a program to perform atomistic and molecular simulations of solid state, liquid, molecular, and biological&lt;br /&gt;
systems.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It provides a general framework for different methods such as density functional theory (DFT) using&lt;br /&gt;
a mixed Gaussian and plane waves approach (GPW), and classical pair and many-body potentials. CP2K provides&lt;br /&gt;
state-of-the-art methods for efficient and accurate atomistic simulations. More information about our installation&lt;br /&gt;
can be found here [[CP2K]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== CUFFLINKS ===&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CUFFLINKS assembles transcripts, estimates their abundances, and tests for differential expression and regulation in&lt;br /&gt;
RNA-Seq samples.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It accepts aligned RNA-Seq reads and assembles the alignments into a parsimonious set of transcripts.&lt;br /&gt;
CUFFLINKS then estimates the relative abundances of these transcripts based on how many reads support each one, taking&lt;br /&gt;
into account biases in library preparation protocols.  CUFFLINKS is part of a sequence alignment and analysis tool chain developed&lt;br /&gt;
at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics&lt;br /&gt;
and Computational Biology.  The other tools in this collection, BOWTIE, SAMTOOLS, and TOPHAT, are also installed at&lt;br /&gt;
the CUNY HPC Center. Additional information can be found at the CUFFLINKS home page here [http://abacus.gene.ucl.ac.uk/software.html].&lt;br /&gt;
More information about our installation can be found here [[CUFFLINKS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== DL_POLY ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
DL_POLY is a general purpose molecular dynamics simulation package developed at Daresbury Laboratory by W. Smith, T.R. Forester and I.T. Todorov. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Both serial and parallel versions are available. The original package was developed by the Molecular Simulation Group (now part of the Computational Chemistry Group, MSG) at Daresbury Laboratory under the auspices of the Engineering and Physical Sciences Research Council (EPSRC) for the EPSRC&#039;s Collaborative Computational Project for the Computer Simulation of Condensed Phases ( CCP5). Later developments were also supported by the Natural Environment Research Council through the eMinerals project. The package is the property of the Central Laboratory of the Research Councils, UK. More information about our installation and use can be found here [[DL_POLY]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===ExaML===&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ExaML stands for Exascale Maximum Likelihood, a code for phylogenetic inference using MPI.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The code is installed only on Penzias and implements the popular RAxML search algorithm for maximum likelihood based inference of phylogenetic trees. &lt;br /&gt;
&lt;br /&gt;
It uses a radically new MPI parallelization approach that yields improved parallel efficiency, in particular on partitioned multi-gene or whole-genome datasets.&lt;br /&gt;
&lt;br /&gt;
When using ExaML please cite the following paper:&lt;br /&gt;
&lt;br /&gt;
Alexey M. Kozlov, Andre J. Aberer, Alexandros Stamatakis: &amp;quot;ExaML Version 3: A Tool for Phylogenomic Analyses on Supercomputers.&amp;quot; Bioinformatics (2015) 31 (15): 2577-2579.&lt;br /&gt;
&lt;br /&gt;
It is up to 4 times faster than RAxML-Light [1].&lt;br /&gt;
&lt;br /&gt;
Like RAxML-Light, ExaML also implements checkpointing, SSE3 and AVX vectorization, and memory-saving techniques.&lt;br /&gt;
&lt;br /&gt;
[1] A. Stamatakis, A.J. Aberer, C. Goll, S.A. Smith, S.A. Berger, F. Izquierdo-Carrasco: &amp;quot;RAxML-Light: A Tool for computing TeraByte Phylogenies&amp;quot;, Bioinformatics 2012; doi: 10.1093/bioinformatics/bts309.&lt;br /&gt;
&lt;br /&gt;
The run script for a parallel ExaML job is analogous to the one for running RAxML on Penzias and Andy.&lt;br /&gt;
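A command line for an MPI ExaML run might look like the sketch below.  The file names and run label are hypothetical, and the alignment must first be converted to ExaML&#039;s binary format with the parser shipped with the package; check the ExaML documentation for the options that apply to your data.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mpirun -np 16 examl -s alignment.binary -t starting.tree -m GAMMA -n run1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;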
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===ExaBayes===&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ExaBayes is a software package for Bayesian tree inference. It is particularly suitable for large-scale analyses on computer clusters.&lt;br /&gt;
The installed package is the MPI-parallel version, on the PENZIAS server at the HPC Center.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Availability:&#039;&#039;&#039; PENZIAS&lt;br /&gt;
&#039;&#039;&#039;Module file:&#039;&#039;&#039; exabayes&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Citation&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
Fredrik Ronquist, Maxim Teslenko, Paul van der Mark, Daniel L Ayres, Aaron Darling, Sebastian Höhna, Bret Larget, Liang Liu, Marc a Suchard, and John P Huelsenbeck. MrBayes 3.2: efficient Bayesian phylogenetic inference and model choice across a large model space. Systematic biology, 61(3):539--42, May 2012.&lt;br /&gt;
&lt;br /&gt;
Alexei J Drummond, Marc a Suchard, Dong Xie, and Andrew Rambaut. Bayesian phylogenetics with BEAUti and the BEAST 1.7. Molecular biology and evolution, 29(8):1969--73, August 2012. &lt;br /&gt;
&lt;br /&gt;
Clemens Lakner, Paul van der Mark, John P Huelsenbeck, Bret Larget, and Fredrik Ronquist. Efficiency of Markov chain Monte Carlo tree proposals in Bayesian phylogenetics. Systematic biology, 57(1):86--103, February 2008. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Use:&#039;&#039;&#039; An example SLURM script to run ExaBayes on PENZIAS is given below&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition=production&lt;br /&gt;
#SBATCH --job-name=&amp;lt;name_of_job&amp;gt;&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=2&lt;br /&gt;
&lt;br /&gt;
# SLURM starts the job in the directory it was submitted from;&lt;br /&gt;
# change to it explicitly for clarity&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
mpirun -np 2 exabayes &amp;lt;input_file&amp;gt; &amp;gt; output_file&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
More information about the application, along with sample workflows, is available on the ExaBayes web site:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://sco.h-its.org/exelixis/web/software/exabayes/manual/index.html#sec-11&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===FDPPDIV===&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
FDPPDiv is a program for estimating divergence times on a fixed, rooted tree topology. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
FDPPDiv offers two alternative approaches to divergence time estimation. &lt;br /&gt;
The DPPDiv part refers to the Dirichlet Process Prior (DPP) model for divergence &lt;br /&gt;
time estimation, and the F prefix (for Fossil) refers to the new Fossil Birth-Death approach. &lt;br /&gt;
More information about our installation can be found here [[FDPPDIV]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===GAMESS-US===&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GAMESS is a program for ab initio molecular quantum chemistry.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Briefly, GAMESS can compute SCF wavefunctions ranging from RHF, ROHF, UHF, GVB, and MCSCF. Correlation corrections to these SCF wavefunctions include Configuration Interaction, second order perturbation Theory, and Coupled-Cluster approaches, as well as the Density Functional Theory approximation. Excited states can be computed by CI, EOM, or TD-DFT procedures. Nuclear gradients are available, for automatic geometry optimization, transition state searches, or reaction path following. Computation of the energy hessian permits prediction of vibrational frequencies, with IR or Raman intensities. Solvent effects may be modeled by the discrete Effective Fragment potentials, or continuum models such as the Polarizable Continuum Model. Numerous relativistic computations are available, including infinite order two component scalar corrections, with various spin-orbit coupling options. The Fragment Molecular Orbital method permits use of many of these sophisticated treatments to be used on very large systems, by dividing the computation into small fragments. Nuclear wavefunctions can also be computed, in VSCF, or with explicit treatment of nuclear orbitals by the NEO code. More information, including code, can be found here [[GAMESS-US]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===GARLI===&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GARLI is a program that performs phylogenetic inference using the maximum-likelihood criterion.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Several sequence types are supported, including nucleotide, amino acid and codon. Version 2.0 adds support for&lt;br /&gt;
partitioned models and morphology-like data types. It is usable on all operating systems, and is written and&lt;br /&gt;
maintained by Derrick Zwickl at the University of Texas at Austin.  Additional information can be found&lt;br /&gt;
on the GARLI Wiki here [https://www.nescent.org/wg_garli/Main_Page]. More information about our installation &lt;br /&gt;
can be found here [[GARLI]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===GAUSS===&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
An easy-to-use data analysis, mathematical and statistical environment based on the powerful, fast and efficient GAUSS Matrix Programming Language.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GAUSS is used to solve real world problems and data analysis problems of exceptionally large &lt;br /&gt;
scale. GAUSS is currently available on ANDY and BOB. At the CUNY HPC Center&lt;br /&gt;
GAUSS is typically run in serial mode. (Note:  GAUSS should not be confused with the&lt;br /&gt;
computational chemistry application Gaussian.) More information about our installation can &lt;br /&gt;
be found here [[GAUSS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Gaussian09 ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Gaussian09 is third-party, commercially licensed software from Gaussian, Inc. It is a set of programs for calculating electronic structure.  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Gaussian09 is available for general use on Andy. The Gaussian User Guide can be found at [http://www.gaussian.com]. More information about our installation can be found here [[GAUSSIAN09]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== GMP ===&lt;br /&gt;
GMP is a library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and &lt;br /&gt;
floating-point numbers. There is no practical limit to the precision except the ones implied by the &lt;br /&gt;
available memory in the machine GMP runs on. GMP has a rich set of functions, and the functions have a &lt;br /&gt;
regular interface. The library is installed on PENZIAS.&lt;br /&gt;
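As a conceptual illustration of what arbitrary-precision arithmetic provides, the following sketch uses Python&#039;s built-in unbounded integers (an analogy only; GMP itself is a C library with its own mpz/mpq/mpf types):&lt;br /&gt;

```python
# Conceptual sketch: exact integer arithmetic with no fixed word size,
# the kind of computation GMP's mpz layer provides to C programs.
# This uses Python's built-in unbounded int as an analogy, not GMP's API.

def factorial(n):
    """Exact factorial, limited only by available memory."""
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

f = factorial(100)       # computed exactly, no overflow
print(len(str(f)))       # 158 decimal digits
print(f % 10 ** 10)      # 0: 100! carries 24 trailing zeros
```

In C, the same computation would go through GMP&#039;s mpz functions (mpz_init, mpz_fac_ui, and so on).&lt;br /&gt;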
===Gnuplot===&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Gnuplot is a portable command-line driven graphing utility. It is installed on the following systems:&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
:* Karle under /usr/bin/gnuplot&lt;br /&gt;
:* Andy under /share/apps/gnuplot/default/bin/gnuplot&lt;br /&gt;
:* Bob under /share/apps/gnuplot/default/bin/gnuplot&lt;br /&gt;
&lt;br /&gt;
Extensive documentation of gnuplot is available at the [http://www.gnuplot.info/ gnuplot homepage].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===GENOMEPOP2===&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GenomePop2 is a newer and specialized version of the older program GenomePop. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GenomePop2 is designed to manage SNPs under more flexible and useful settings that are controlled by the user.  &lt;br /&gt;
If you need models with more than 2 alleles, you should use the older GenomePop version of the program.  &lt;br /&gt;
&lt;br /&gt;
GenomePop2 allows the forward simulation of sequences of biallelic positions. As in the previous version, a number of evolutionary&lt;br /&gt;
and demographic settings are allowed. Several populations under any migration model can be implemented. Each population consists&lt;br /&gt;
of a number N of individuals.  Each individual is represented by one (haploids) or two (diploids) chromosomes with constant or variable&lt;br /&gt;
(hotspots) recombination between binary sites. The fitness model is multiplicative, with each derived allele having a multiplicative effect&lt;br /&gt;
of (1-s * h-E) onto the global fitness value. By default E=0 and h=0.5 in diploids, but 1 in homozygotes or in haploids. Selective nucleotide&lt;br /&gt;
sites undergoing directional selection (positive or negative) in different populations can be defined. In addition, bottlenecks and/or&lt;br /&gt;
population expansion scenarios can be set up by the user for a desired number of generations. Several runs can be executed and&lt;br /&gt;
a sample of user-defined size is obtained for each run and population.  For more detail on how to use GenomePop2, please visit the&lt;br /&gt;
web site here [http://webs.uvigo.es/acraaj/GenomePop2.htm]. More information about our installation can be found here [[GENOMEPOP2]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===GROMACS===&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GROMACS (Groningen Machine for Chemical Simulations)&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GROMACS is a full-featured suite of free software, licensed under the GNU&lt;br /&gt;
General Public License to perform molecular dynamics simulations -- in other words, to simulate the behavior of molecular&lt;br /&gt;
systems with hundreds to millions of particles using Newton&#039;s equations of motion.  It is primarily used for research on&lt;br /&gt;
proteins, lipids, and polymers, but can be applied to a wide variety of chemical and biological research questions.&lt;br /&gt;
&lt;br /&gt;
Details and submission scripts for production runs can be found at:&lt;br /&gt;
http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/gromacs&lt;br /&gt;
Please note that preparing a molecular system for simulation with the GROMACS tools cannot be done on the login node. Instead, users must either use their own workstations or the interactive or development queues.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===GPAW===&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It uses real-space uniform grids and multigrid methods, atom-centered basis-functions or&lt;br /&gt;
plane-waves. GPAW calculations are controlled through scripts written in the programming language &lt;br /&gt;
Python. GPAW relies on the Atomic Simulation Environment (ASE), which is a Python package&lt;br /&gt;
that helps to describe atoms. The ASE package also handles molecular dynamics, analysis, &lt;br /&gt;
visualization, geometry optimization and more. More information about our installation can &lt;br /&gt;
be found here [[GPAW]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Hapsembler===&lt;br /&gt;
Hapsembler is a haplotype-specific genome assembly toolkit that is designed for genomes that are rich in SNPs and other types of polymorphism. Hapsembler can be used to assemble reads from a variety of platforms including Illumina and Roche/454.&lt;br /&gt;
&lt;br /&gt;
Hapsembler is currently installed on the Appel system. To access the Hapsembler binaries, load the hapsembler module with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load hapsembler&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===HOOMD===&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Performs general purpose particle dynamics simulations, taking advantage of NVIDIA GPUs to attain a level of performance&lt;br /&gt;
equivalent to many processor cores on a fast cluster.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Unlike some other applications in the particle and molecular dynamics space, HOOMD developers have worked to implement &lt;br /&gt;
all of the code&#039;s computationally intensive kernels on the GPU, although currently only single node, single-GPU or &lt;br /&gt;
OpenMP-GPU runs are possible. There is no MPI-GPU or distributed parallel GPU version available at this time.&lt;br /&gt;
&lt;br /&gt;
HOOMD&#039;s object-oriented design patterns make it both versatile and expandable. Various types of potentials, integration methods&lt;br /&gt;
and file formats are currently supported, and more are added with each release. The code is available and open source, so anyone&lt;br /&gt;
can write a plugin or change the source to add additional functionality.  Simulations are configured and run using simple python&lt;br /&gt;
scripts, allowing complete control over the force field choice, integrator, all parameters, how many time steps are run, etc.&lt;br /&gt;
The scripting system is designed to be as simple as possible to the non-programmer.&lt;br /&gt;
&lt;br /&gt;
The HOOMD development effort is led by the Glotzer group at the University of Michigan, but many groups from different universities&lt;br /&gt;
have contributed code that is now part of the HOOMD main package, see the credits page for the full list. The HOOMD website and&lt;br /&gt;
documentation are available here [http://codeblue.umich.edu/hoomd-blue/about.html]. More information about our installation can be&lt;br /&gt;
found here [[HOOMD]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===HOPSPACK===&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
HOPSPACK (Hybrid Optimization Parallel Search Package) is designed to help users solve a wide range of derivative-free optimization problems.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;The latter can be noisy, non-convex, or non-smooth. The basic optimization problem addressed is to minimize an objective function f(x) of n unknowns subject to the constraints AI x ≥ bI, AE x = bE, cI(x) ≥ 0, cE(x) = 0, and l ≤ x ≤ u.&lt;br /&gt;
The first two constraints specify linear inequalities and equalities with coefficient matrices AI and AE. The next two describe nonlinear inequalities and equalities captured in the functions cI(x) and cE(x). The final constraints denote lower and upper bounds on the variables. HOPSPACK allows variables to be continuous or integer-valued and has provisions for multi-objective optimization problems. In general, the functions f(x), cI(x), and cE(x) can be noisy and nonsmooth, although most algorithms perform best on deterministic functions with continuous derivatives.&lt;br /&gt;
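As a small illustration of this formulation (plain Python with hypothetical names; HOPSPACK itself is driven by parameter files and a user-supplied evaluation program):&lt;br /&gt;

```python
# Sketch of the problem formulation: minimize f(x) subject to linear and
# nonlinear constraints and bounds.  Illustrative names only; HOPSPACK
# evaluates the objective and constraints through an external program.

def is_feasible(x, lo, hi, c_ineq, c_eq, tol=1e-9):
    """Check a candidate point against bounds and constraint functions."""
    in_bounds = all(x[i] >= lo[i] - tol and hi[i] + tol >= x[i]
                    for i in range(len(x)))
    ineq_ok = all(c(x) >= -tol for c in c_ineq)     # cI(x) ≥ 0
    eq_ok = all(tol >= abs(c(x)) for c in c_eq)     # cE(x) = 0
    return in_bounds and ineq_ok and eq_ok

# Toy problem: minimize f(x) = x0^2 + x1^2 subject to x0 + x1 ≥ 1
# and bounds 0 ≤ xi ≤ 2; the point (0.5, 0.5) is feasible.
print(is_feasible([0.5, 0.5], lo=[0, 0], hi=[2, 2],
                  c_ineq=[lambda p: p[0] + p[1] - 1], c_eq=[]))  # True
```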
&lt;br /&gt;
Users are allowed to design and implement their own solvers, either by writing their own code or by building on the existing solvers already in the framework. Because all solvers (called citizens) are members of the same global class, they can share assigned resources.&lt;br /&gt;
The main features of the package are:&lt;br /&gt;
&lt;br /&gt;
* Only function values are required for the optimization.&lt;br /&gt;
* The user must provide a separate program that can evaluate the objective and nonlinear constraint functions at a given point.&lt;br /&gt;
* A robust implementation of the Generating Set Search (GSS) solver is supplied, including the capability to handle linear constraints.&lt;br /&gt;
* Multiple solvers can run simultaneously and are easily configured to share information.&lt;br /&gt;
* Solvers may share a cache of computed function and constraint evaluations to eliminate duplicate work.&lt;br /&gt;
* Solvers can initiate and control sub-problems.&lt;br /&gt;
More information about our installation can be found here [[HOPSACK]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===HUMAnN2===&lt;br /&gt;
&lt;br /&gt;
HUMAnN is a pipeline for efficiently and accurately profiling the presence/absence and abundance of microbial pathways in a community from metagenomic or metatranscriptomic sequencing data (typically millions of short DNA/RNA reads). HUMAnN2 is the next generation of HUMAnN (HMP Unified Metabolic Analysis Network). &lt;br /&gt;
&lt;br /&gt;
Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/humann2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===IMa2===&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The IMa2 application performs basic ‘Isolation with Migration’ calculations using Bayesian inference and Markov &lt;br /&gt;
chain Monte Carlo methods. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The only major conceptual addition to IMa2 that makes it different from the&lt;br /&gt;
original IMa  program is that it can handle data from multiple populations. This requires that the user &lt;br /&gt;
specify a phylogenetic tree. Importantly, the tree must be rooted, and the sequence in time of internal&lt;br /&gt;
nodes must be known and specified. More information on the IMa2 and IMa can be found in the user&lt;br /&gt;
manual here [http://lifesci.rutgers.edu/%7Eheylab/ProgramsandData/Programs/IMa2/Using_IMa2_8_24_2011.pdf].&lt;br /&gt;
Information about our installation can be found here [[IMA2]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===I-TASSER===&lt;br /&gt;
I-TASSER is a platform for protein structure and function prediction. 3D models are built from multiple-threading alignments by LOMETS and iterative template fragment assembly simulations; function insights are derived by matching the 3D models against the BioLiP protein function database.&lt;br /&gt;
&lt;br /&gt;
Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/itasser&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===JULIA===&lt;br /&gt;
Julia is a high-level, high-performance dynamic programming language for technical computing, with syntax that is familiar to users of other technical computing environments.&lt;br /&gt;
&lt;br /&gt;
Julia is installed on Penzias. &lt;br /&gt;
&lt;br /&gt;
===HONDO PLUS===&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Hondo Plus is a versatile electronic structure code that combines work from&lt;br /&gt;
the original Hondo application developed by Harry King in the lab of Michel Dupuis&lt;br /&gt;
and John Rys, and that of numerous subsequent contributors. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It is currently distributed from the research lab of Dr. Donald Truhlar at the University &lt;br /&gt;
of Minnesota.  Part of the advantage of Hondo Plus is the availability of source&lt;br /&gt;
implementations of a wide variety of model chemistries developed over its lifetime&lt;br /&gt;
that researchers can adapt to their particular needs.  The license to use the code requires&lt;br /&gt;
a literature citation which is documented in the Hondo Plus 5.1 manual found&lt;br /&gt;
at:&lt;br /&gt;
&lt;br /&gt;
http://comp.chem.umn.edu/hondoplus/HONDOPLUS_Manual_v5.1.2007.2.17.pdf &lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[HONDO PLUS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===LAMARC===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
LAMARC is a program which estimates population-genetic parameters such as population size, population growth rate,&lt;br /&gt;
recombination rate, and migration rates.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It approximates a summation over all possible genealogies that could explain&lt;br /&gt;
the observed sample, which may be sequence, SNP, microsatellite, or electrophoretic data.  LAMARC and its sister program&lt;br /&gt;
MIGRATE are successor programs to the older programs Coalesce, Fluctuate, and Recombine, which are no longer being&lt;br /&gt;
supported.  These programs are memory-intensive, but can run effectively on workstations. They are supported on a variety&lt;br /&gt;
of operating systems.  For more detail on LAMARC please visit the website here [http://evolution.genetics.washington.edu/lamarc/index.html],&lt;br /&gt;
read this paper [http://evolution.genetics.washington.edu/lamarc/download/bioinformatics2006-lamarc2.0.pdf], and look&lt;br /&gt;
at the documentation here [http://evolution.genetics.washington.edu/lamarc/documentation/index.html]. More information&lt;br /&gt;
about our installation can be found here [[LAMARC]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===LAMMPS===&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
LAMMPS is a classical molecular dynamics code that models an ensemble of particles in a liquid, solid, or gaseous state.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It can model atomic, polymeric, biological, metallic, granular, and coarse-grained systems using a variety of force fields and boundary conditions.  &lt;br /&gt;
LAMMPS runs efficiently on single-processor desktop or laptop machines, but is also designed for parallel computers, including clusters with and without GPUs. &lt;br /&gt;
It will run on any parallel machine that compiles C++ and supports the MPI message-passing library. This includes distributed- or shared-memory parallel &lt;br /&gt;
machines and Beowulf-style clusters. LAMMPS can model systems with only a few particles up to millions or billions. LAMMPS is a freely-available open-source &lt;br /&gt;
code, distributed under the terms of the GNU Public License, which means you can use or modify the code however you wish.  LAMMPS is designed to be easy to &lt;br /&gt;
modify or extend with new capabilities, such as new force fields, atom types, boundary conditions, or diagnostics. A complete description of LAMMPS can be found &lt;br /&gt;
in its on-line manual here [http://lammps.sandia.gov/doc/Manual.html] or from the full PDF manual here [http://lammps.sandia.gov/doc/Manual.pdf]. Information&lt;br /&gt;
about our installation can be found here [[LAMMPS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===LS-DYNA===&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From its early development in the 1970s, LS-DYNA has evolved into a general purpose material&lt;br /&gt;
stress, collision, and crash analysis program with many built-in material and structural element&lt;br /&gt;
models. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In recent years, the code has also been adapted for both OpenMP and MPI parallel execution&lt;br /&gt;
on a variety of platforms.  The most recent version, LS-DYNA 7.1.2, is installed on &lt;br /&gt;
ANDY at the CUNY HPC Center under an academic license held by the City College of New York.&lt;br /&gt;
The use of this license to do work that is commercial in any way is prohibited.&lt;br /&gt;
&lt;br /&gt;
Details on LS-DYNA&#039;s use, input deck construction, and execution options can be found in the LS-DYNA&lt;br /&gt;
manual here [http://ftp.lstc.com/user/manuals/ls-dyna_971_manual_k_rev1.pdf]. All files related&lt;br /&gt;
to the HPC Center installation of version 971 (executables and example inputs) are located in:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
/share/apps/lsdyna/default/[bin,examples]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[LSDYNA]].&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===MAGMA===&lt;br /&gt;
&lt;br /&gt;
MAGMA is a library similar to LAPACK but for hybrid architectures. MAGMA provides implementations for CUDA, Intel Xeon Phi, and OpenCL. &lt;br /&gt;
On CUNY HPCC systems, MAGMA is installed in its CUDA variant only on Penzias.&lt;br /&gt;
&lt;br /&gt;
===MATHEMATICA===&lt;br /&gt;
&lt;br /&gt;
Mathematica is a fully integrated technical computing system that combines fast, high-precision numerical and symbolic computation with data visualization and programming capabilities. Mathematica version 10.0 is currently installed on the CUNY HPC Center&#039;s ANDY cluster (andy.csi.cuny.edu) and the KARLE standalone server (karle.csi.cuny.edu). The basics of running Mathematica on CUNY HPC systems are presented here. Additional information on how to use Mathematica can be found at http://www.wolfram.com/learningcenter/&lt;br /&gt;
&lt;br /&gt;
More information is available in this wiki, find it here [[MATHEMATICA]].&lt;br /&gt;
&lt;br /&gt;
===MATLAB===&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The MATLAB high-performance language for technical computing&lt;br /&gt;
integrates computation, visualization, and programming in an&lt;br /&gt;
easy-to-use environment where problems and solutions are expressed in&lt;br /&gt;
familiar mathematical notation.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Typical uses include:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Math and computation&lt;br /&gt;
&lt;br /&gt;
Algorithm development&lt;br /&gt;
&lt;br /&gt;
Data acquisition&lt;br /&gt;
&lt;br /&gt;
Modeling, simulation, and prototyping&lt;br /&gt;
&lt;br /&gt;
Data analysis, exploration, and visualization&lt;br /&gt;
&lt;br /&gt;
Scientific and engineering graphics&lt;br /&gt;
&lt;br /&gt;
Application development, including graphical user interface building&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[MATLAB]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Migrate===&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Migrate estimates population parameters, effective population sizes and migration rates&lt;br /&gt;
of n populations, using genetic data.  It uses a coalescent theory approach taking into&lt;br /&gt;
account the history of mutations and the uncertainty of the genealogy.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;The estimates of the parameter values are achieved by either a maximum likelihood (ML) approach or Bayesian&lt;br /&gt;
inference (BI). Migrate&#039;s output is presented in a text file and in a PDF file. The PDF file&lt;br /&gt;
eventually will contain all possible analyses including histograms of posterior distributions.&lt;br /&gt;
More information about our installation can be found here [[MIGRATE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===MPFR===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The MPFR library is a C library for multiple-precision floating-point computations with correct rounding. MPFR has continuously been supported by &lt;br /&gt;
INRIA, and the current main authors come from the Caramel and AriC project-teams at Loria (Nancy, France) and LIP (Lyon, France), respectively; see &lt;br /&gt;
more on the credit page.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
MPFR is based on the GMP multiple-precision library. The main goal of MPFR is to provide a library for multiple-precision &lt;br /&gt;
floating-point computation which is both efficient and has a well-defined semantics. It copies the good ideas from the ANSI/IEEE-754 standard for &lt;br /&gt;
double-precision floating-point arithmetic (53-bit significand). The library is installed on PENZIAS.&lt;br /&gt;
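As a conceptual illustration (an analogy using Python&#039;s decimal module, not MPFR&#039;s C API), fixing the working precision and obtaining correctly rounded results looks like:&lt;br /&gt;

```python
# Conceptual sketch using Python's decimal module, which, like MPFR,
# performs multiple-precision floating-point arithmetic with
# well-defined rounding semantics.  (An analogy only: MPFR is a C
# library built on GMP and uses binary, not decimal, significands.)
from decimal import Decimal, getcontext

getcontext().prec = 50                  # 50 significant digits
third = Decimal(1) / Decimal(3)
print(third)                            # 0.333...3, correctly rounded to 50 digits
exact = Decimal("0.1") + Decimal("0.2")
print(exact)                            # 0.3, no binary rounding artifact
```

In C, MPFR gives the same control through mpfr_set_prec and correctly rounded operations such as mpfr_add and mpfr_div.&lt;br /&gt;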
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===MRBAYES===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MrBayes is a program for the Bayesian estimation of phylogeny.  Bayesian inference of&lt;br /&gt;
phylogeny is based upon a quantity called the posterior probability distribution of trees,&lt;br /&gt;
which is the probability of a tree conditioned on certain observations.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The conditioning is&lt;br /&gt;
accomplished using Bayes&#039;s theorem. The posterior probability distribution of trees is&lt;br /&gt;
impossible to calculate analytically; instead, MrBayes uses a simulation technique called&lt;br /&gt;
Markov chain Monte Carlo (or MCMC) to approximate the posterior probabilities of trees.&lt;br /&gt;
More information about our installation can be found here [[MRBAYES]]&lt;br /&gt;
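As a generic sketch of the MCMC idea (a toy one-dimensional example, not MrBayes itself, which samples over tree space):&lt;br /&gt;

```python
# Minimal Metropolis MCMC sketch: the chain's long-run sample
# frequencies approximate a target (posterior) density known only up
# to a normalizing constant.  MrBayes applies the same principle to
# phylogenetic trees and model parameters; this toy targets a
# standard normal distribution.
import math
import random

random.seed(42)

def target(x):
    # Unnormalized standard-normal density, standing in for a posterior.
    return math.exp(-0.5 * x * x)

samples, x = [], 0.0
for _ in range(20000):
    proposal = x + random.uniform(-1.0, 1.0)             # symmetric proposal
    if target(proposal) >= random.random() * target(x):  # Metropolis accept rule
        x = proposal
    samples.append(x)

mean = sum(samples) / len(samples)
print(round(mean, 2))   # close to 0.0, the target's mean
```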
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===msABC===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
msABC is a program for simulating various neutral evolutionary demographic scenarios&lt;br /&gt;
based on the software ms (Hudson 2002). msABC extends ms, calculating a multitude of&lt;br /&gt;
summary statistics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Therefore, msABC is suitable for performing the sampling step of an&lt;br /&gt;
Approximate Bayesian Computation analysis (ABC), under various neutral demographic&lt;br /&gt;
models. The main advantages of msABC are (i) use of various prior distributions, such as&lt;br /&gt;
uniform, Gaussian, log-normal, and gamma, (ii) implementation of a multitude of summary statistics&lt;br /&gt;
for one or more populations, (iii) efficient implementation, which allows the analysis of&lt;br /&gt;
hundreds of loci and chromosomes even on a single computer, (iv) extended flexibility, such&lt;br /&gt;
as simulation of loci of variable size and simulation of missing data.&lt;br /&gt;
More information about our installation can be found here [[msABC]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===MSMS===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MSMS is a tool to generate sequence samples under both neutral models and single locus selection models.&lt;br /&gt;
MSMS permits  the full range of demographic models provided by its relative MS (Hudson, 2002).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
In particular, it allows for multiple demes with arbitrary migration patterns, population growth and decay in each deme, and&lt;br /&gt;
for population splits and mergers. Selection (including dominance) can depend on the deme and also change&lt;br /&gt;
with time.&lt;br /&gt;
More information about our installation can be found here [[MSMS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===NAMD===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NAMD is a parallel molecular dynamics code designed for high-performance simulation&lt;br /&gt;
of large biomolecular systems. [http://www.ks.uiuc.edu/Research/namd].&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The main server for molecular dynamics calculations is PENZIAS, which supports both GPU and non-GPU versions of NAMD.&lt;br /&gt;
However, MPI-only (no GPU support) parallel versions of NAMD are also installed on SALK and ANDY. &lt;br /&gt;
More information about our installation can be found here [[NAMD]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Network Simulator-2 (NS2) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NS2 is a discrete event simulator targeted at networking research. NS2 provides&lt;br /&gt;
substantial support for simulation of TCP, routing, and multicast protocols over&lt;br /&gt;
wired and wireless (local and satellite) networks.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It is installed on BOB at the CUNY HPC Center. For more detailed information look [http://www.isi.edu/nsnam/ns/ here].&lt;br /&gt;
More information about our installation can be found here [[NS2]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===NWChem===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NWChem is an ab initio computational chemistry software package which also includes molecular dynamics (MM, MD) and coupled, quantum mechanical and molecular dynamics functionality (QM-MD).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
NWChem has been developed by the Molecular Sciences Software group at the Department of Energy&#039;s EMSL. The software is available on PENZIAS and ANDY.&lt;br /&gt;
More information about our installation can be found here [[NWChem]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Octopus===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Octopus is a pseudopotential real-space package aimed at the simulation of the electron-ion dynamics of one-, two-, and three-dimensional finite systems subject to time-dependent electromagnetic fields.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The program is based on time-dependent density-functional theory (TDDFT) in the Kohn-Sham scheme. All quantities are expanded in a regular mesh in real space, and the simulations are performed in real time. The program has been successfully used to calculate linear and non-linear absorption spectra, harmonic spectra, laser induced fragmentation, etc. of a variety of systems.&lt;br /&gt;
More information about our installation can be found here [[OCTOPUS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===OpenMM===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenMM is both a library and a stand-alone application which provides tools for modern molecular modeling simulation. As a library it can be hooked into any code, allowing that code to do molecular modeling with minimal extra coding.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Moreover, OpenMM has a strong emphasis on hardware acceleration via GPU, thus providing not just a consistent API but also much greater performance than nearly any other code available. OpenMM was developed as part of the Physics-Based Simulation project led by Prof. Pande.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===OpenFOAM===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenFOAM is, first and foremost, a library that users may incorporate into their own codes. OpenFOAM is installed on PENZIAS.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
More information about our installation can be found here [[OpenFOAM]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===OpenSees===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenSees, the Open System for Earthquake Engineering Simulation, is an object-oriented, open source software framework.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It allows users to create both serial and parallel finite element computer applications for simulating the response of structural and geotechnical systems subjected to earthquakes and other hazards. OpenSees is primarily written in C++ and uses several Fortran and C numerical libraries for linear equation solving, and material and element routines. The software is installed on PENZIAS.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===ORCA===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ORCA is an electronic structure program capable of carrying out geometry optimizations and of predicting a large number of spectroscopic parameters at different levels of theory.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Besides Hartree-Fock theory, density functional theory (DFT) and semiempirical methods, high-level ab initio quantum chemical methods, based on configuration interaction and coupled cluster approaches, are included in ORCA to an increasing degree.&lt;br /&gt;
More information about our installation can be found here [[ORCA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===ParGAP===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ParGAP is built on top of the GAP system. The latter is a system for computational discrete algebra, with particular emphasis on computational group theory. GAP provides a programming language, a library of thousands of functions implementing algebraic algorithms written in the GAP language, as well as large data libraries of algebraic objects.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The ParGAP (Parallel GAP) package itself provides a way of writing parallel programs using the GAP language. Former names of the package were ParGAP/MPI and GAP/MPI; the word MPI refers to Message Passing Interface, a well-known standard for parallelism. ParGAP is based on the MPI standard, and this distribution includes a subset implementation of MPI, to provide a portable layer with a high level interface to BSD sockets.&lt;br /&gt;
More information about our installation can be found here [[ParGAP]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===POPABC===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PopABC is a computer package for estimating historical demographic parameters of closely related species/populations (e.g. population size, migration rate, mutation rate, recombination rate, splitting events) within an isolation-with-migration model.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The software performs coalescent simulation in the framework of approximate Bayesian computation (ABC, Beaumont et al, 2002). PopABC can also be used to perform Bayesian model choice to discriminate between different demographic scenarios. The program can be used either for research or for education and teaching purposes. Further details and a manual can be found at the POPABC website here [http://code.google.com/p/popabc]&lt;br /&gt;
More information about our installation can be found here [[POPABC]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===PHOENICS===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PHOENICS is an integrated Computational Fluid Dynamics (CFD) package for the preparation, simulation, and visualization of&lt;br /&gt;
processes involving fluid flow, heat or mass transfer, chemical reaction, and/or combustion in engineering equipment, building&lt;br /&gt;
design, and the environment.  More detail is available at the CHAM website, here http://www.cham.co.uk. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Although we expect most users to pre- and post-process their jobs on office-local clients, the CUNY HPC Center has installed&lt;br /&gt;
the Unix version of the &#039;&#039;entire&#039;&#039; PHOENICS package on ANDY.   PHOENICS is installed in /share/apps/phoenics/default where all&lt;br /&gt;
the standard PHOENICS directories are located (d_allpro, d_earth, d_enviro, d_photo, d_priv1, d_satell, etc.).  Of particular interest&lt;br /&gt;
on ANDY is the MPI parallel version of the &#039;earth&#039; executable &#039;parexe&#039; which makes full use of the parallel processing power of the &lt;br /&gt;
ANDY cluster for larger individual jobs.  While the parallel scaling properties of PHOENICS jobs will vary depending on the job size,&lt;br /&gt;
processor type, and the cluster interconnect, larger workloads will generally scale and run efficiently on 8 to 32 processors,&lt;br /&gt;
while smaller problems will scale efficiently only up to about 4 processors.  More detail on parallel PHOENICS is available at&lt;br /&gt;
http://www.cham.co.uk/products/parallel.php.   Aside from the tightly coupled MPI parallelism of &#039;parexe&#039;, users can run multiple&lt;br /&gt;
instances of the non-parallel modules on ANDY (including the serial &#039;earexe&#039; module) when a parametric approach can be used&lt;br /&gt;
to solve their problems.&lt;br /&gt;
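A minimal SLURM batch sketch for launching the MPI-parallel &#039;parexe&#039; module follows; the job name, core count, and module name are illustrative assumptions, not verified site settings:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=phoenics&lt;br /&gt;
#SBATCH --ntasks=16&lt;br /&gt;
# load the PHOENICS environment (assumed module name)&lt;br /&gt;
module load phoenics&lt;br /&gt;
# run the parallel &#039;earth&#039; solver on 16 MPI ranks&lt;br /&gt;
mpirun -np 16 parexe&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;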
More information about our installation can be found here [[PHOENICS]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== PHRAP-PHRED ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PHRAP and PHRED are part of the DNA sequence analysis tool set that also includes the programs&lt;br /&gt;
CROSSMATCH and SWAT.  These tools are described in detail here [http://www.phrap.org/phredphrapconsed.html],&lt;br /&gt;
but a brief description of both, extracted from their manuals, follows.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
PHRED and PHRAP (along with CONSED) can be used for both small sequence assemblies and larger shotgun analyses, which makes the&lt;br /&gt;
tools a perhaps under-utilized resource for smaller non-genomic groups.  Some variables may need to be adjusted,&lt;br /&gt;
particularly in CONSED, but researchers who have multiple sequences from a small locus can use the &lt;br /&gt;
suite, starting from their chromatogram files.  &lt;br /&gt;
More information about our installation can be found here [[PHRAP-PHRED]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===PyRAD===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Reduced-representation genomic sequence data (e.g., RADseq, GBS, ddRAD) are commonly used to study population-level research questions and consequently most software packages for assembling or analyzing such data are designed for sequences with little variation across samples.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Phylogenetic analyses typically include species with deeper divergence times (more variable loci across samples) and thus a different approach to clustering and identifying orthologs will perform better. pyRAD is intended for use with any type of restriction-site associated DNA. It currently supports RAD, ddRAD, PE-ddRAD, GBS, PE-GBS, EzRAD, PE-EzRAD, 2B-RAD, nextRAD, and can be extended to other types.&lt;br /&gt;
More information about our installation can be found here [[PyRAD]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Python is a programming language that lets you work more quickly and integrate your systems more effectively. You can learn to use Python and see almost immediate gains in productivity and lower maintenance costs. [http://www.python.org/]&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
There are two supported versions installed on the ANDY system: &lt;br /&gt;
&lt;br /&gt;
* Python 3.1.3 located under /share/apps/python/3.1.3/bin&lt;br /&gt;
* Python 2.7.3 located under /share/apps/epd/7.3-2/bin&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[PYTHON]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Installing Python packages ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Users may install Python packages/modules in their own space.  Many packages available in Python repositories can be installed easily with the pip package manager, which is available in all of the Anaconda and Miniconda builds.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Users must remember that running pip without first loading a Python module will install packages against the system Python on the login node only. The Python interpreters provided by the modules, by contrast, live under /share/usr/compilers/python and are available on all nodes. It is therefore important to follow the procedure outlined below when installing packages in user space. The following example demonstrates how users can install the package &amp;quot;guppy&amp;quot; in their own space:&lt;br /&gt;
&lt;br /&gt;
For Python 2.7.13 in Anaconda build:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load python/2.7.13_anaconda&lt;br /&gt;
pip install guppy --user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For Python 3.6.0 in Anaconda build&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load python/3.6.0_anaconda&lt;br /&gt;
pip install guppy --user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For Python 2.7.13 in Miniconda&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load python/miniconda2&lt;br /&gt;
pip install guppy --user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For Python 3.6.0 in Miniconda 3&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load python/miniconda3&lt;br /&gt;
pip install guppy --user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check whether the package was installed properly, type:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
pip list | grep guppy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
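As an additional check, you can confirm that the package is importable by the module&#039;s interpreter (assuming the same python module is still loaded):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# should print the install location rather than raise ImportError&lt;br /&gt;
python -c &amp;quot;import guppy; print(guppy.__file__)&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;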
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== QIIME ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
QIIME (pronounced &amp;quot;chime&amp;quot;) stands for Quantitative Insights Into Microbial Ecology. QIIME is a pipeline application that uses numerous third-party applications.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
QIIME takes users from their raw sequencing output through initial analyses such as OTU picking, taxonomic assignment, and construction of phylogenetic trees from representative sequences of OTUs, and through downstream statistical analysis, visualization, and production of publication-quality graphics.&lt;br /&gt;
More information about our installation can be found here [[QIIME]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== R ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
R is a free software environment for statistical computing and graphics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
====General Notes====&lt;br /&gt;
&lt;br /&gt;
The R language has become a de facto standard among statisticians and is widely used for statistical software development and data analysis. R is available on the following HPCC servers: Karle, Penzias, Appel and Andy. Karle is the only machine where R can be used without submitting jobs to the SLURM manager; on all other systems users must submit their R jobs via the SLURM batch scheduler.&lt;br /&gt;
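As a sketch, a serial R job could be submitted with a batch script along these lines; the module name and file names are illustrative assumptions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=myRjob&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
# load R into the environment (assumed module name)&lt;br /&gt;
module load r&lt;br /&gt;
# run the analysis script non-interactively&lt;br /&gt;
Rscript myscript.R&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The script would then be submitted with &amp;quot;sbatch&amp;quot;.&lt;br /&gt;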
More information about our installation can be found here [[R]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===RAXML===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Randomized Axelerated Maximum Likelihood (RAxML) is a program for sequential and parallel&lt;br /&gt;
maximum likelihood based inference of large phylogenetic trees.  It is a descendant of fastDNAml,&lt;br /&gt;
which in turn was derived from Joe Felsenstein&#039;s DNAml, part of the PHYLIP package.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
RAxML is installed at the CUNY HPC Center on ANDY, where multiple versions are available in both serial and MPI-parallel builds. The MPI-parallel version should be run on four or more cores and is also installed on PENZIAS. &lt;br /&gt;
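As a sketch, an MPI-parallel RAxML run on four cores might look as follows; the module name, executable name, and input file are illustrative assumptions and may differ in the local installation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# load RAxML (assumed module name)&lt;br /&gt;
module load raxml&lt;br /&gt;
# -s alignment, -n run name, -m substitution model, -p random seed&lt;br /&gt;
mpirun -np 4 raxmlHPC-MPI -s alignment.phy -n run1 -m GTRGAMMA -p 12345&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;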
More information about our installation can be found here [[RAXML]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== SAGE ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Sage can be used to study elementary and advanced, pure and applied mathematics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
This covers a huge range of mathematics, including basic algebra, calculus, elementary to very&lt;br /&gt;
advanced number theory, cryptography, numerical computation, commutative algebra, group&lt;br /&gt;
theory, combinatorics, graph theory, exact linear algebra and much more.&lt;br /&gt;
More information about our installation can be found here [[SAGE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== SAMTOOLS ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
SAMTOOLS provides various utilities for manipulating alignments in the SAM format, including sorting,&lt;br /&gt;
merging, indexing and generating alignments in a per-position format.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
SAM (Sequence Alignment/Map) format is a generic format for storing large nucleotide sequence alignments.  SAM is a compact&lt;br /&gt;
format that aims to:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Is flexible enough to store all the alignment information generated by various alignment programs;&lt;br /&gt;
&lt;br /&gt;
Is simple enough to be easily generated by alignment programs or converted from existing formats;&lt;br /&gt;
&lt;br /&gt;
Allows most of operations on the alignment to work without loading the whole alignment into memory;&lt;br /&gt;
&lt;br /&gt;
Allows the file to be indexed by genomic position to efficiently retrieve all reads aligning to a locus.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
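A typical command sequence illustrating these operations might look as follows (syntax for recent SAMTOOLS releases; the file names are placeholders):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# convert SAM to compressed BAM&lt;br /&gt;
samtools view -bS aln.sam &amp;gt; aln.bam&lt;br /&gt;
# sort the BAM by genomic position&lt;br /&gt;
samtools sort aln.bam -o aln.sorted.bam&lt;br /&gt;
# build an index for fast retrieval by locus&lt;br /&gt;
samtools index aln.sorted.bam&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;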
More information about our installation can be found here [[SAMTOOLS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== SAS ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
SAS (pronounced &amp;quot;sass&amp;quot;, originally Statistical Analysis System) is an integrated system of software products provided by SAS Institute Inc.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It enables the programmer to perform:&lt;br /&gt;
:* data entry, retrieval, management, and mining&lt;br /&gt;
:* report writing and graphics&lt;br /&gt;
:* statistical analysis&lt;br /&gt;
:* business planning, forecasting, and decision support&lt;br /&gt;
:* operations research and project management&lt;br /&gt;
:* quality improvement&lt;br /&gt;
:* applications development&lt;br /&gt;
:* data warehousing (extract, transform, load)&lt;br /&gt;
:* platform independent and remote computing&lt;br /&gt;
More information about our installation can be found here [[SAS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Stata/MP ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Stata is a complete, integrated statistical package that provides tools for data analysis, data management, and graphics. Stata/MP takes advantage of multiprocessor computers. The CUNY HPC Center is licensed to run Stata on up to 8 cores. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Currently Stata/MP is available for users on Karle (karle.csi.cuny.edu). &lt;br /&gt;
More information about our installation can be found here [[STATA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Structurama===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Structurama is a program for inferring population structure from genetic data. The program assumes that the sampled loci&lt;br /&gt;
are in linkage equilibrium and that the allele frequencies for each population are drawn from a Dirichlet probability distribution. Two different models for population structure are implemented.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
First, Structurama offers the method of Pritchard et al. (2000), in which the number of populations is considered fixed. The program also allows the number of populations to be a random variable following a Dirichlet process prior (Pella and Masuda, 2006; Huelsenbeck and Andolfatto, 2007).&lt;br /&gt;
More information about our installation can be found here [[STRUCTURAMA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Structure===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The program Structure is a free software package for using multi-locus genotype data to investigate&lt;br /&gt;
population structure.  Its uses include inferring the presence of distinct populations, assigning individuals&lt;br /&gt;
to populations, studying hybrid zones, identifying migrants and admixed individuals, and estimating&lt;br /&gt;
population allele frequencies in situations where many individuals are migrants or admixed.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;It can be applied to most of the commonly used genetic markers, including SNPs, microsatellites, RFLPs and AFLPs. More detailed information about Structure can be found at the web site here [http://pritch.bsd.uchicago.edu/structure.html]. Structure, a serial program, is installed on BOB at the CUNY HPC Center. &lt;br /&gt;
More information about our installation can be found here [[STRUCTURE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Thrust Library (CUDA)===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Thrust is a C++ template library for CUDA based on the Standard Template Library (STL). Thrust allows you&lt;br /&gt;
to implement high performance parallel applications with minimal programming effort through a high-level&lt;br /&gt;
interface that is fully interoperable with CUDA C.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Thrust is now integrated into the default&lt;br /&gt;
CUDA distribution. The HPC Center&#039;s default CUDA installation on PENZIAS includes the &lt;br /&gt;
Thrust library. More information about our installation can be found here [[THRUST]]&lt;br /&gt;
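Since Thrust is header-only and ships with CUDA, a program using it needs no extra libraries at compile time. A hedged example of compiling such a program (the source file name is a placeholder):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# nvcc picks up the bundled Thrust headers automatically&lt;br /&gt;
nvcc -O2 my_thrust_prog.cu -o my_thrust_prog&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;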
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== TOPHAT ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
TOPHAT is a fast splice junction mapper for RNA-Seq reads. It aligns RNA-Seq reads to mammalian-sized&lt;br /&gt;
genomes using the ultra high-throughput short read aligner Bowtie, and then analyzes the mapping results&lt;br /&gt;
to identify splice junctions between exons.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
TOPHAT is part of a sequence alignment and analysis tool chain developed at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics and Computational Biology.&lt;br /&gt;
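A representative invocation, with placeholder file names and a Bowtie index assumed to have been built already, might be:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# -p sets the number of threads, -o the output directory&lt;br /&gt;
tophat -p 8 -o tophat_out genome_index reads_1.fastq reads_2.fastq&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;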
More information about our installation can be found here [[TOPHAT]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Trinity ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Trinity, developed at the Broad Institute and the Hebrew University of Jerusalem, represents a novel method for the efficient and robust de novo reconstruction of transcriptomes from RNA-seq data.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Trinity combines three independent software modules: Inchworm, Chrysalis, and Butterfly, applied sequentially to process large volumes of RNA-seq reads. Trinity partitions the sequence data into many individual de Bruijn graphs, each representing the transcriptional complexity at a given gene or locus, and then processes each graph independently to extract full-length splicing isoforms and to tease apart transcripts derived from paralogous genes.&lt;br /&gt;
More information about our installation can be found here [[TRINITY]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===USEARCH===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
USEARCH is a unique sequence analysis tool with thousands of users world-wide.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
USEARCH offers search and clustering algorithms that are often orders of magnitude faster than BLAST. &lt;br /&gt;
More information about our installation can be found here [[USEARCH]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===VELVET===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Velvet is a set of algorithms for &#039;&#039;de novo&#039;&#039; short read assembly using de Bruijn graphs. It was developed at the &lt;br /&gt;
European Bioinformatics Institute, Cambridge, UK.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
More information about our installation can be found here [[VELVET]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===VSEARCH===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
VSEARCH is an open-source alternative to USEARCH.  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
VSEARCH stands for vectorized search, as the tool takes advantage of parallelism in the form of SIMD vectorization as well as multiple threads to perform accurate alignments at high speed. VSEARCH uses an optimal global aligner (full dynamic programming Needleman-Wunsch), in contrast to USEARCH which by default uses a heuristic seed and extend aligner. This usually results in more accurate alignments and overall improved sensitivity (recall) with VSEARCH, especially for alignments with gaps. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Additional details on VSEARCH can be found at: [https://github.com/torognes/vsearch this link]&lt;br /&gt;
&lt;br /&gt;
VSEARCH is installed on the Penzias HPC cluster. To start using VSEARCH, load the corresponding module first:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load vsearch  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
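After the module is loaded, a simple clustering run might look like this; the identity threshold and file names are illustrative:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# cluster reads at 97% identity and write one representative per cluster&lt;br /&gt;
vsearch --cluster_fast reads.fasta --id 0.97 --centroids centroids.fasta&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;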
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== VMD ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
VMD is a molecular visualization program for displaying, animating, and analyzing large biomolecular systems using 3-D graphics and built-in scripting.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It was developed by The Theoretical and Computational Biophysics Group at the University of Illinois. It is documented on the [http://www.ks.uiuc.edu/Research/vmd/ TCB&#039;s homepage].&lt;br /&gt;
&lt;br /&gt;
VMD is installed on Karle. To use it from the command-line interface, log in to Karle as usual and start VMD by typing &amp;quot;vmd&amp;quot; followed by return, or alternatively use the full path: &lt;br /&gt;
&amp;quot;/share/apps/vmd/default/bin/vmd&amp;quot;&lt;br /&gt;
&lt;br /&gt;
In order to use VMD in GUI mode, log in to Karle with the -X option (see [http://wiki.csi.cuny.edu/cunyhpc/index.php/Main_Page#X11_Forwarding_or_Tunneling this article] &lt;br /&gt;
for details) and start VMD as described above.&lt;br /&gt;
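For non-interactive analysis work, VMD can also be started without graphics and driven by a Tcl script; the script name here is a placeholder:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# text display device, execute the given Tcl script&lt;br /&gt;
vmd -dispdev text -e analysis.tcl&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;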
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== WRF ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The Weather Research and Forecasting (WRF) model is a specific computer program with dual use for both weather&lt;br /&gt;
forecasting and weather research.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It was created through a partnership that includes the National Oceanic and Atmospheric&lt;br /&gt;
Administration (NOAA), the National Center for Atmospheric Research (NCAR), and more than 150 other organizations&lt;br /&gt;
and universities in the United States and abroad. WRF is the latest numerical model and application to be adopted by NOAA&#039;s&lt;br /&gt;
National Weather Service as well as the U.S. military and private meteorological services. It is also being adopted by&lt;br /&gt;
government and private meteorological services worldwide. More information about our installation can be found here [[WRF]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Xmgrace ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Grace is a WYSIWYG 2D plotting tool for the X Window System and M*tif. Xmgrace is developed at the Plasma Laboratory, Weizmann Institute of Science. More information about its capabilities can be found at the web page http://plasma-gate.weizmann.ac.il/Grace/&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Grace is installed on Karle. To use it from the command-line interface, log in to Karle as usual and start Grace by typing &amp;quot;xmgrace&amp;quot; followed by return, or alternatively use the full path: &amp;quot;/share/apps/xmgrace/default/grace/bin/xmgrace&amp;quot;&lt;br /&gt;
In order to use Grace in GUI mode, log in to Karle with the -X option (see [http://wiki.csi.cuny.edu/cunyhpc/index.php/Main_Page#X11_Forwarding_or_Tunneling this article] &lt;br /&gt;
for details) and start Xmgrace as described above.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MET (Model Evaluation Tools) ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MET was developed by the National Center for Atmospheric Research (NCAR) Developmental Testbed Center (DTC) through the generous support of the U.S. Air Force Weather Agency (AFWA) and the National Oceanic and Atmospheric Administration (NOAA).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;MET is designed to be a highly-configurable, state-of-the-art suite of verification tools. It was developed using output from the Weather Research and Forecasting (WRF) modeling system but may be applied to the output of other modeling systems as well.&lt;br /&gt;
&lt;br /&gt;
MET provides a variety of verification techniques, including:&lt;br /&gt;
&lt;br /&gt;
*Standard verification scores comparing gridded model data to point-based observations&lt;br /&gt;
*Standard verification scores comparing gridded model data to gridded observations&lt;br /&gt;
*Spatial verification methods comparing gridded model data to gridded observations using neighborhood, object-based, and intensity-scale decomposition approaches&lt;br /&gt;
*Probabilistic verification methods comparing gridded model data to point-based or gridded observations&lt;br /&gt;
&lt;br /&gt;
More information about use and set-up can be found here [[MET]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Applications_Environment/ADF&amp;diff=146</id>
		<title>Applications Environment/ADF</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Applications_Environment/ADF&amp;diff=146"/>
		<updated>2022-10-27T20:12:38Z</updated>

		<summary type="html">&lt;p&gt;James: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===ADF (Amsterdam Density Functional Theory)===&lt;br /&gt;
ADF (Amsterdam Density Functional) is a Fortran program for calculations on atoms and molecules&lt;br /&gt;
(in gas phase or solution) from first principles. It can be used for the study of such diverse fields as&lt;br /&gt;
molecular spectroscopy, organic and inorganic chemistry, crystallography and pharmacochemistry. Some&lt;br /&gt;
of its key strengths include high accuracy supported by its use of Slater-type orbitals, all-electron relativistic&lt;br /&gt;
treatment of the heavier elements, and fast parameterized DFT-based semi-empirical methods. A separate&lt;br /&gt;
program BAND is available for the study of periodic systems: crystals, surfaces, and polymers. The&lt;br /&gt;
COSMO-RS program is used for calculating thermodynamic properties of (mixed) fluids.&lt;br /&gt;
&lt;br /&gt;
The underlying theory is the Kohn-Sham approach to Density-Functional Theory (DFT). This implies&lt;br /&gt;
a one-electron picture of the many-electron systems, but yields in principle the exact electron density&lt;br /&gt;
(and related properties) and the total energy.  If ADF is a new program for you, we recommend that you&lt;br /&gt;
carefully read Chapter 1, section 1.3 &#039;Technical remarks, Terminology&#039;, which presents a discussion of&lt;br /&gt;
a few ADF-typical aspects and terminology. This will help you to understand and appreciate the output&lt;br /&gt;
of an ADF calculation.  The ADF Manual is located on the web here: [http://www.scm.com/Doc/Doc2013/ADF/ADFUsersGuide/page1.html]&lt;br /&gt;
&lt;br /&gt;
ADF 2013 (and SCM&#039;s other programs) is installed on ANDY and PENZIAS at the CUNY HPC Center.  The older 2012 version&lt;br /&gt;
is also available on the ANDY server.  The current license is group-limited and allows for up to 32 cores of simultaneous ADF&lt;br /&gt;
use and 8 cores of simultaneous BAND use.  This is a floating license limited to the DDR side of ANDY&lt;br /&gt;
and the SLURM &#039;production&#039; partition.  Users not currently in the ADF group should inquire about access by sending&lt;br /&gt;
an email to &#039;hpchelp@csi.cuny.edu&#039;.&lt;br /&gt;
&lt;br /&gt;
Here is a simple ADF input deck that computes the SCF wave function for HCN.  This example can be run&lt;br /&gt;
with the SLURM script shown below on 1 to 4 cores.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Title    HCN Linear Transit, first part&lt;br /&gt;
NoPrint  SFO, Frag, Functions, Computation&lt;br /&gt;
&lt;br /&gt;
Atoms      Internal&lt;br /&gt;
  1 C  0 0 0       0    0    0&lt;br /&gt;
  2 N  1 0 0       1.3  0    0&lt;br /&gt;
  3 H  1 2 0       1.0  th  0&lt;br /&gt;
End&lt;br /&gt;
&lt;br /&gt;
Basis&lt;br /&gt;
 Type DZP&lt;br /&gt;
End&lt;br /&gt;
&lt;br /&gt;
Symmetry NOSYM&lt;br /&gt;
&lt;br /&gt;
Integration 6.0 6.0&lt;br /&gt;
&lt;br /&gt;
Geometry&lt;br /&gt;
  Branch Old&lt;br /&gt;
  LinearTransit  10&lt;br /&gt;
  Iterations     30  4&lt;br /&gt;
  Converge   Grad=3e-2,  Rad=3e-2,  Angle=2&lt;br /&gt;
END&lt;br /&gt;
&lt;br /&gt;
Geovar&lt;br /&gt;
  th   180    0&lt;br /&gt;
End&lt;br /&gt;
&lt;br /&gt;
End Input&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A SLURM script (&#039;adf_4.job&#039;) configured to use 4 cores is shown here.  Note that ADF does not use the &lt;br /&gt;
version of MPI that the HPC Center supports by default.  ADF uses the proprietary version&lt;br /&gt;
of MPI from SGI that is part of SGI&#039;s MPT parallel library package.  This script includes special&lt;br /&gt;
lines to configure the run for this.  A side effect of this fact is that ADF jobs will not clock&lt;br /&gt;
time in SLURM as shown under the &#039;Time&#039; column when your job is being checked with &#039;squeue&#039;.&lt;br /&gt;
&lt;br /&gt;
To include all required environment variables and the path to the ADF executable,&lt;br /&gt;
run the module load command (the modules utility is discussed in detail above):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load adf&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# This script runs a 4-cpu (core) ADF job using the group&lt;br /&gt;
# license of Dr. Vittadello, Dr. Birke, and the CUNY HPC&lt;br /&gt;
# Center. This script requests only one half of the resources &lt;br /&gt;
# on an ANDY compute node (4 cores, 1 half its memory). &lt;br /&gt;
#&lt;br /&gt;
# The HCN_4P.inp deck in this directory is configured to work&lt;br /&gt;
# with these resources, although this computation is really &lt;br /&gt;
# too small to make full use of them. To increase or decrease&lt;br /&gt;
# the resources SLURM requests (cpus, memory, or disk) change the &lt;br /&gt;
# &#039;#SBATCH&#039; lines below and the parameter values in the input deck.&lt;br /&gt;
#&lt;br /&gt;
#SBATCH --partition=production&lt;br /&gt;
#SBATCH --job-name=adf_4P_job&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=4&lt;br /&gt;
#SBATCH --mem=11520M&lt;br /&gt;
# (this job also uses roughly 400 GB of local scratch space)&lt;br /&gt;
&lt;br /&gt;
# Find out name of master execution host (compute node)&lt;br /&gt;
echo -n &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; SLURM Master compute node is: &amp;quot;&lt;br /&gt;
hostname&lt;br /&gt;
&lt;br /&gt;
# Change to the directory the job was submitted from&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
# set environment up to use SGI&#039;s MPT version of MPI rather&lt;br /&gt;
# than the CUNY default which is OpenMPI&lt;br /&gt;
BASEPATH=/opt/sgi/mpt/mpt-2.02&lt;br /&gt;
&lt;br /&gt;
export PATH=${BASEPATH}/bin:${PATH}&lt;br /&gt;
export CPATH=${BASEPATH}/include:${CPATH}&lt;br /&gt;
export FPATH=${BASEPATH}/include:${FPATH}&lt;br /&gt;
export LD_LIBRARY_PATH=${BASEPATH}/lib:${LD_LIBRARY_PATH}&lt;br /&gt;
export LIBRARY_PATH=${BASEPATH}/lib:${LIBRARY_PATH}&lt;br /&gt;
export MPI_ROOT=${BASEPATH}&lt;br /&gt;
&lt;br /&gt;
# set the ADF root directory&lt;br /&gt;
export ADFROOT=/share/apps/adf&lt;br /&gt;
export ADFHOME=${ADFROOT}/2013.01&lt;br /&gt;
&lt;br /&gt;
# point ADF to the ADF license file&lt;br /&gt;
export SCMLICENSE=${ADFHOME}/license.txt&lt;br /&gt;
&lt;br /&gt;
# set up ADF scratch directory &lt;br /&gt;
export MY_SCRDIR=`whoami;date &#039;+%m.%d.%y_%H:%M:%S&#039;`&lt;br /&gt;
export MY_SCRDIR=`echo $MY_SCRDIR | sed -e &#039;s; ;_;&#039;`&lt;br /&gt;
export SCM_TMPDIR=/home/adf/adf_scr/${MY_SCRDIR}_$$&lt;br /&gt;
&lt;br /&gt;
mkdir -p $SCM_TMPDIR&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
echo &amp;quot;The ADF scratch files for this job are in: ${SCM_TMPDIR}&amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# check important paths&lt;br /&gt;
#type mpirun&lt;br /&gt;
#type adf&lt;br /&gt;
&lt;br /&gt;
# set the number of processors to use in this job to 4&lt;br /&gt;
export NSCM=4&lt;br /&gt;
&lt;br /&gt;
# run the ADF job&lt;br /&gt;
echo &amp;quot;Starting ADF job ... &amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
adf -n 4 &amp;lt; HCN_4P.inp &amp;gt; HCN_4P.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
&lt;br /&gt;
# name output files&lt;br /&gt;
mv logfile HCN_4P.logfile&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
echo &amp;quot;ADF job finished ... &amp;quot;&lt;br /&gt;
&lt;br /&gt;
# clean up scratch directory files&lt;br /&gt;
/bin/rm -r $SCM_TMPDIR&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Much of this script is similar to the script that runs Gaussian jobs, but the differences&lt;br /&gt;
should be described in some detail.  First, ADF must be submitted to the &#039;production&#039; partition,&lt;br /&gt;
to which its floating license has been limited, and where it can use only 32 cores at a time&lt;br /&gt;
for ADF and 8 cores at a time for BAND.  Second, there is a block in the script that sets up the&lt;br /&gt;
environment to use the SGI proprietary version of MPI for parallel runs.  Next is the NSCM environment&lt;br /&gt;
variable, which defines the number of cores to use along with the &#039;-n&#039; option on the command&lt;br /&gt;
line.  Both of these (along with the cpu count requested at the beginning&lt;br /&gt;
of the script) must be adjusted to control the number of cores used by the job.  &lt;br /&gt;
&lt;br /&gt;
Note that the &#039;adf&#039; command is itself a script that generates and runs another script, which &lt;br /&gt;
in turn runs the &#039;adf.exe&#039; executable.  This script (called &#039;runscript&#039;) is built and placed&lt;br /&gt;
in the user&#039;s working directory.  It typically includes some preliminary steps that are NOT&lt;br /&gt;
run in parallel.&lt;br /&gt;
&lt;br /&gt;
With the HCN input file and SLURM script above, you can submit an ADF job on ANDY with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sbatch adf_4.job&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
All users of ADF must be licensed and placed in the &#039;gadf&#039; Unix group by HPC Center staff.&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=145</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=145"/>
		<updated>2022-10-27T20:12:22Z</updated>

		<summary type="html">&lt;p&gt;James: Text replacement - &amp;quot;[pP][bB][sS]&amp;quot; to &amp;quot;SLURM&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:CUNY-HPCC-HEADER-LOGO.jpg]]&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
[[Image:hpcc-panorama3.png]]&lt;br /&gt;
&lt;br /&gt;
The City University of New York (CUNY) High Performance Computing Center (HPCC) is located on the&lt;br /&gt;
campus of the College of Staten Island, 2800 Victory Boulevard, Staten Island, New York 10314.  HPCC&lt;br /&gt;
goals are to: &lt;br /&gt;
&lt;br /&gt;
:*Support the scientific computing needs of CUNY faculty, their collaborators at other universities, and their public and private sector partners, and CUNY students and research staff.&lt;br /&gt;
:*Create opportunities for the CUNY research community to develop new partnerships with the government and private sectors; and&lt;br /&gt;
:*Leverage the HPC Center capabilities to acquire additional research resources for its faculty and graduate students in existing and major new programs.&lt;br /&gt;
&lt;br /&gt;
==Organization of systems and data storage (architecture)==&lt;br /&gt;
&lt;br /&gt;
All user data and project data are kept on the Data Storage and Management System (&#039;&#039;&#039;DSMS&#039;&#039;&#039;), which is mounted only on the login node(s) of all servers. Consequently, no jobs can be started directly from &#039;&#039;&#039;DSMS&#039;&#039;&#039; storage.  Instead, all jobs must be submitted from a separate (fast but small) &#039;&#039;&#039;/scratch&#039;&#039;&#039; file system mounted on all computational nodes and on all login nodes.  As the name suggests, the &#039;&#039;&#039;/scratch&#039;&#039;&#039; file system is not a home directory for accounts, nor can it be used for long-term data preservation.  Users must use the &amp;quot;staging&amp;quot; procedure described below to ensure preservation of their data, codes, and parameter files. The figure below is a schematic of the environment.   &lt;br /&gt;
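The staging workflow described above can be sketched as a pair of shell helpers. This is a minimal illustration only: the /global/u and /scratch paths follow the layout described on this page, while the directory names &#039;inputs&#039;, &#039;results&#039;, and &#039;job&#039; are hypothetical placeholders, not HPCC conventions.&lt;br /&gt;

```shell
# Minimal staging sketch. GLOBAL_HOME and SCRATCH default to the layout
# described above but can be overridden; "inputs", "results", and "job"
# are illustrative names, not HPCC conventions.
GLOBAL_HOME="${GLOBAL_HOME:-/global/u/$(whoami)}"
SCRATCH="${SCRATCH:-/scratch/$(whoami)}"

# Stage input files from DSMS home space into the fast /scratch space.
stage_in() {
    mkdir -p "$SCRATCH/job"
    cp -r "$GLOBAL_HOME/inputs" "$SCRATCH/job/"
}

# Copy results back to DSMS home space for long-term preservation.
stage_out() {
    cp -r "$SCRATCH/job/results" "$GLOBAL_HOME/"
}
```

The job itself would then be submitted from the /scratch working directory, with the stage-out step run after the job completes.&lt;br /&gt;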
&lt;br /&gt;
Upon registering with the HPCC, every user gets two directories:&lt;br /&gt;
&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – this is temporary workspace on the HPC systems&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – space for “home directory”, i.e., storage space on the DSMS for programs, scripts, and data&lt;br /&gt;
&lt;br /&gt;
:•	In some instances a user will also have use of disk space on the DSMS in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;projectid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; (iRODS). &lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;/global/u/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory has a quota (see below for details) while &#039;&#039;&#039;/scratch/&amp;lt;userid&amp;gt;&#039;&#039;&#039; does not. However, the &#039;&#039;&#039;/scratch&#039;&#039;&#039; space is cleaned up following the rules described below. There are no guarantees of any kind that files in &#039;&#039;&#039;/scratch&#039;&#039;&#039; will be preserved through hardware crashes or clean-up.  Access to all HPCC resources is provided by a bastion host called &#039;&#039;&#039;Chizen&#039;&#039;&#039;.  The Data Transfer Node, called &#039;&#039;&#039;Cea&#039;&#039;&#039;, allows file transfer from/to remote sites directly to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; or to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
                                   [[Image:HPCC_Chart.png]]&lt;br /&gt;
&lt;br /&gt;
==HPC systems==&lt;br /&gt;
&lt;br /&gt;
The HPC Center operates a variety of architectures in order to support complex and demanding workflows.  The deployed systems include distributed memory computers (also referred to as &amp;quot;clusters&amp;quot;), symmetric multiprocessor (SMP) servers, and distributed shared memory (also referred to as NUMA) machines.  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Computational Systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SMP&#039;&#039;&#039; servers have several processors (working under a single operating system) which &amp;quot;share everything&amp;quot;: all CPU cores access a common memory block via a shared bus or data path. SMP servers support any combination of memory and CPU (up to the limits of the particular computer). They are commonly used to run sequential or thread-parallel (e.g. OpenMP) jobs, and they may or may not have GPUs. Currently, the HPCC operates 3 SMP servers named &#039;&#039;&#039;Math&#039;&#039;&#039;, &#039;&#039;&#039;Cryo&#039;&#039;&#039;, and &#039;&#039;&#039;Karle&#039;&#039;&#039;. &#039;&#039;&#039;Karle&#039;&#039;&#039; is a server without a GPU that is used for visualization, visual analytics, and interactive MATLAB/Mathematica jobs. &#039;&#039;&#039;Math&#039;&#039;&#039; is a condominium server without a GPU as well. &#039;&#039;&#039;Cryo&#039;&#039;&#039; (a CPU+GPU server) is a specialized server designed to support large-scale multi-core, multi-GPU jobs. &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;cluster&#039;&#039;&#039; is defined as a single system comprising a set of SMP servers interconnected with a high-performance network. Specific software coordinates programs on and/or across them in order to perform computationally intensive tasks. Each SMP member of the cluster is called a node. All nodes run independent copies of the same operating system (OS). Some or all of the nodes may incorporate GPUs.  The main cluster at the HPCC is a hybrid (CPU+GPU) cluster called &#039;&#039;&#039;Penzias&#039;&#039;&#039;.  Sixty-six (66) of Penzias&#039;s nodes have 2 x K20m GPUs each, while the cluster&#039;s 3 fat nodes (nodes with a large number of CPU cores and memory) do not have GPUs.  In addition, the HPCC operates the cluster &#039;&#039;&#039;Herbert&#039;&#039;&#039;, dedicated solely to education. &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;distributed shared memory&#039;&#039;&#039; computer is a tightly coupled server in which the memory is physically distributed but logically unified as a single block. The system resembles an SMP, but the number of CPU cores and the amount of memory possible are far beyond the limitations of an SMP.  Because the memory is distributed, access times across the address space are non-uniform; thus, this design is called a Non-Uniform Memory Access (NUMA) architecture.  Similarly to SMPs, &#039;&#039;&#039;NUMA&#039;&#039;&#039; systems are typically used for applications such as data mining and decision support, in which processing can be parceled out to a number of processors that collectively work on common data. The HPCC operates a &#039;&#039;&#039;NUMA&#039;&#039;&#039; server called &#039;&#039;&#039;Appel&#039;&#039;&#039;.  This server does not have a GPU. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039; Infrastructure systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
o	Master Head Node (&#039;&#039;&#039;MHN&#039;&#039;&#039;) is a redundant login node from which all jobs on all servers start. This server is not directly accessible from outside the CSI campus. &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Chizen&#039;&#039;&#039; is a redundant gateway server which provides access to the protected HPCC domain.&lt;br /&gt;
&lt;br /&gt;
o	 &#039;&#039;&#039;Cea&#039;&#039;&#039; is a file transfer node allowing transfer of files between users’ computers and the /scratch space or /global/u/&amp;lt;userid&amp;gt;. &#039;&#039;&#039;Cea&#039;&#039;&#039; is accessible directly (not only via &#039;&#039;&#039;Chizen&#039;&#039;&#039;), but allows only a limited set of shell commands.  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Table 1&#039;&#039;&#039; below provides a quick summary of the attributes of each of the systems available at the HPC Center.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!System&lt;br /&gt;
!Type&lt;br /&gt;
!Type of Jobs&lt;br /&gt;
!Nodes&lt;br /&gt;
!Cores/node &amp;amp; GPU&lt;br /&gt;
!Mem/node&lt;br /&gt;
!Mem/core&lt;br /&gt;
!Multi-core Processor&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Penzias&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Hybrid Cluster&lt;br /&gt;
|Sequential &amp;amp; Parallel jobs w/wo GPU&lt;br /&gt;
|66&lt;br /&gt;
|16 &lt;br /&gt;
2xK20m GPU, PCIe&lt;br /&gt;
|64 GB&lt;br /&gt;
|4 GB&lt;br /&gt;
|Sandy Bridge EP, 2.20 GHz&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Sequential &amp;amp; Parallel jobs&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |1&lt;br /&gt;
|24&lt;br /&gt;
|1500 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Haswell, 2.30 GHz&lt;br /&gt;
|-&lt;br /&gt;
|36&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|-&lt;br /&gt;
|24&lt;br /&gt;
|768 GB&lt;br /&gt;
|32 GB&lt;br /&gt;
|-&lt;br /&gt;
|Appel&lt;br /&gt;
|NUMA&lt;br /&gt;
|Massive Parallel, sequential&lt;br /&gt;
|1&lt;br /&gt;
|384&lt;br /&gt;
|11 TB&lt;br /&gt;
|28 GB&lt;br /&gt;
|Ivy Bridge, 3 GHz&lt;br /&gt;
|-&lt;br /&gt;
|Cryo&lt;br /&gt;
|SMP&lt;br /&gt;
|Sequential and Parallel jobs, with GPU&lt;br /&gt;
|1&lt;br /&gt;
|40&lt;br /&gt;
8xV100 (32GB) GPU, SXM&lt;br /&gt;
|1500 GB&lt;br /&gt;
|37 GB&lt;br /&gt;
|Skylake, 2.40 GHz&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Blue Moon&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Hybrid Cluster&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Sequential and Parallel jobs w/wo GPU&lt;br /&gt;
|24&lt;br /&gt;
|32&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |192 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |6 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Skylake, 2.10 GHz&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|32 &lt;br /&gt;
2xV100(16GB) GPU, PCIe&lt;br /&gt;
|-&lt;br /&gt;
|Karle&lt;br /&gt;
|SMP&lt;br /&gt;
|Visualization, MATLAB/Mathematica&lt;br /&gt;
|1&lt;br /&gt;
|36*&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|Haswell, 2.30 GHz&lt;br /&gt;
|-&lt;br /&gt;
|Chizen&lt;br /&gt;
|Gateway&lt;br /&gt;
|2&lt;br /&gt;
| colspan=&amp;quot;5&amp;quot; rowspan=&amp;quot;2&amp;quot; |NA&lt;br /&gt;
|-&lt;br /&gt;
|MHN&lt;br /&gt;
|Login Nodes&lt;br /&gt;
|2&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Partitions and jobs==&lt;br /&gt;
The only way to submit job(s) to HPCC servers is through the SLURM batch system.  Any job, regardless of its type (interactive, batch, serial, parallel, etc.), must be submitted via SLURM. The latter allocates the requested resources on the proper server and starts the job(s) according to a predefined strict fair-share policy. Computational resources (CPU cores, memory, GPUs) are organized in partitions. The main partition is called &#039;&#039;&#039;production&#039;&#039;&#039;. This is a routing partition which distributes jobs into several sub-partitions depending on each job&#039;s requirements. Thus a serial job submitted to &#039;&#039;&#039;production&#039;&#039;&#039; will land in the &#039;&#039;&#039;partsequential&#039;&#039;&#039; partition.  No PBS Pro scripts should ever be used, and all existing PBS scripts must be converted to SLURM before use. The table below shows the limitations of the partitions.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Partition&lt;br /&gt;
!Max cores/job&lt;br /&gt;
!Max jobs/user&lt;br /&gt;
!Total cores/group&lt;br /&gt;
!Time limits&lt;br /&gt;
|-&lt;br /&gt;
|production&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|-&lt;br /&gt;
|partedu&lt;br /&gt;
|16&lt;br /&gt;
|2&lt;br /&gt;
|216&lt;br /&gt;
|72 Hours&lt;br /&gt;
|-&lt;br /&gt;
|partcryo&lt;br /&gt;
|40&lt;br /&gt;
|40&lt;br /&gt;
|40&lt;br /&gt;
|240 Hours&lt;br /&gt;
|-&lt;br /&gt;
|partmath&lt;br /&gt;
|128&lt;br /&gt;
|128&lt;br /&gt;
|128&lt;br /&gt;
|240 Hours&lt;br /&gt;
|-&lt;br /&gt;
|partmatlab&lt;br /&gt;
|1972&lt;br /&gt;
|50&lt;br /&gt;
|1972&lt;br /&gt;
|240 Hours&lt;br /&gt;
|-&lt;br /&gt;
|partdev&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|4 Hours&lt;br /&gt;
|}&lt;br /&gt;
o	&#039;&#039;&#039;production&#039;&#039;&#039; is the main partition, with assigned resources across all servers (except Math and Cryo). It is a routing partition, so the actual job(s) will be placed in the proper sub-partition automatically. Users may submit sequential, thread-parallel, or distributed-parallel jobs with or without GPUs.&lt;br /&gt;
 &lt;br /&gt;
o	&#039;&#039;&#039;partedu&#039;&#039;&#039;  partition is only for education. Assigned resources are on educational server Herbert. Partedu is accessible only to students (graduate and/or undergraduate) and their professors who are registered for a class supported by HPCC. Access to this partition is limited by the duration of the class. &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;partcryo&#039;&#039;&#039; is the partition used to start jobs on the Cryo server. Users whose projects require and/or benefit from the availability of 8 GPUs interconnected via the SXM interface (not PCIe) must apply for access to this partition at hpchelp@csi.cuny.edu. &lt;br /&gt;
&lt;br /&gt;
o	The &#039;&#039;&#039;partmatlab&#039;&#039;&#039; partition allows running MATLAB&#039;s Distributed Parallel Server across the main cluster. Note, however, that Parallel Toolbox programs can be submitted via the production partition, but only as thread-parallel jobs. &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;partdev&#039;&#039;&#039; is dedicated to development. All HPCC users have access to this partition, with assigned resources of one computational node with 16 cores, 64 GB of memory, and 2 GPUs (K20m). This partition has a time limit of 4 hours.&lt;br /&gt;
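A minimal batch script targeting one of these partitions can be sketched as follows. The partition name comes from the table above; the job name, resource numbers, and command are illustrative placeholders, not HPCC requirements.&lt;br /&gt;

```shell
#!/bin/bash
# Minimal SLURM batch script sketch. Partition name from the table above;
# job name, resource numbers, and the command below are illustrative only.
#SBATCH --partition=production
#SBATCH --job-name=example_job
#SBATCH --ntasks=4
#SBATCH --mem=4G
#SBATCH --time=01:00:00

# SLURM exports the submit directory to the job environment
cd "${SLURM_SUBMIT_DIR:-$PWD}"
echo "Job running on $(hostname)"
```

Such a script would be submitted with &#039;sbatch&#039; and monitored with &#039;squeue -u $USER&#039;.&lt;br /&gt;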
&lt;br /&gt;
== Hours of Operation ==&lt;br /&gt;
The second and fourth Tuesday mornings in the month from 8:00AM to 12PM are normally reserved (but not always used) for scheduled maintenance.  Please plan accordingly.  &amp;lt;br/&amp;gt;&lt;br /&gt;
Unplanned maintenance to remedy system related problems may be scheduled as needed.  Reasonable attempts will be made to inform users running on those systems when these needs arise.&lt;br /&gt;
&lt;br /&gt;
== User Support ==&lt;br /&gt;
Users are encouraged to read this Wiki carefully.  In particular, the sections on compiling and running&lt;br /&gt;
parallel programs, and the section on the SLURM batch queueing system will give you the essential&lt;br /&gt;
knowledge needed to use the CUNY HPC Center systems.  We have strived to maintain the most uniform&lt;br /&gt;
user applications environment possible across the Center&#039;s systems to ease the transfer of applications&lt;br /&gt;
and run scripts among them.  Still, there are some differences, particularly with the SGI (ANDY) and Cray (SALK)&lt;br /&gt;
systems.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY&lt;br /&gt;
community in parallel programming techniques, HPC computing architecture, and the essentials of using our&lt;br /&gt;
systems. Please follow our mailings on the subject and feel free to inquire about such courses.  We regularly&lt;br /&gt;
schedule training visits and classes at the various CUNY campuses.  Please let us know if such a training visit&lt;br /&gt;
is of interest.  In the past, topics have included an overview of parallel programming, GPU programming and&lt;br /&gt;
architecture, using the evolutionary biology software at the HPC Center,  the SLURM queueing system at the&lt;br /&gt;
CUNY HPC Center, Mixed GPU-MPI and OpenMP programming, etc.  Staff has also presented guest lectures&lt;br /&gt;
at formal classes throughout the CUNY campuses.  &lt;br /&gt;
&lt;br /&gt;
Users with further questions or requiring immediate assistance in use of the systems should create a ticket using their HPC account login at:&lt;br /&gt;
&lt;br /&gt;
   [https://hpchelp.csi.cuny.edu hpchelp.csi.cuny.edu]&lt;br /&gt;
&lt;br /&gt;
If you have problems accessing your account and cannot login to the ticketing service, please send an email to:&lt;br /&gt;
&lt;br /&gt;
  [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu] &lt;br /&gt;
&lt;br /&gt;
== Warnings and modes of operation ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. hpchelp@csi.cuny.edu is for questions and account-help communication &#039;&#039;&#039;only&#039;&#039;&#039; and does not accept tickets. For tickets please use the ticketing system mentioned above. This ensures that the person on staff with the most appropriate skill set and job-related responsibility will respond to your questions. During the business week you should expect a response within 48 hours, quite often even a same-day response. During the weekend you may not get any response. &lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;&#039;E-mails to hpchelp@csi.cuny.edu must have a valid CUNY e-mail as reply address.&#039;&#039;&#039; Messages originated from public mailers (google, hotmail, etc) are filtered out.&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;&#039;Do not send questions to individual CUNY HPC Center staff members directly.&#039;&#039;&#039;  These will be returned to the sender with a polite request to submit a ticket or email the Helpline.  This applies to replies to initial questions as well.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff members are focused on providing high quality support to its user community, but compared&lt;br /&gt;
to other HPC Centers of similar size &#039;&#039;&#039;our staff is extremely lean&#039;&#039;&#039;.  Please make full use of the tools that we have provided (especially&lt;br /&gt;
the Wiki), and feel free to offer suggestions for improved service.  We hope and expect your experience in using&lt;br /&gt;
our systems will be predictably good and productive.&lt;br /&gt;
&lt;br /&gt;
== User Manual ==&lt;br /&gt;
&lt;br /&gt;
An old version of the user manual can be downloaded at: http://cunyhpc.csi.cuny.edu/publications/User_Manual.pdf.  Note that this manual provides PBS batch scripts as examples. CUNY-HPCC currently uses SLURM, so users must consult the brief SLURM manual distributed with new accounts or ask CUNY-HPCC for a copy.&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=CP2K&amp;diff=144</id>
		<title>CP2K</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=CP2K&amp;diff=144"/>
		<updated>2022-10-27T20:12:13Z</updated>

		<summary type="html">&lt;p&gt;James: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;At the CUNY HPC Center CP2K is installed on ANDY.  CP2K can be built as a serial, MPI-parallel, or&lt;br /&gt;
MPI-OpenMP-parallel code.  At this time, only the MPI-parallel version of the application has been built for&lt;br /&gt;
production use at the HPC Center.  Further information on CP2K is available at the website here [http://www.cp2k.org/]. &lt;br /&gt;
&lt;br /&gt;
Below is an example SLURM script that will run the CP2K H2O-32 test case provided with the CP2K distribution.&lt;br /&gt;
It can be copied from the local installation directory to your current location as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cp /share/apps/cp2k/2.3/tests/SE/regtest-2/H2O-32.inp .&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To include all required environment variables and the path to the CP2K executable, run the module load&lt;br /&gt;
command (the modules utility is discussed in detail above).  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load cp2k&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the example SLURM script:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition=production&lt;br /&gt;
#SBATCH --job-name=CP2K_MPI.test&lt;br /&gt;
#SBATCH --ntasks=8&lt;br /&gt;
#SBATCH --mem-per-cpu=2880M&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Find out name of master execution host (compute node)&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
echo -n &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; SLURM Master compute node is: &amp;quot;&lt;br /&gt;
hostname&lt;br /&gt;
&lt;br /&gt;
# Change to working directory&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; Begin CP2K MPI Parallel Run ...&amp;quot;&lt;br /&gt;
mpirun -np 8 cp2k.popt ./H2O-32.inp &amp;gt; H2O-32.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; End   CP2K MPI Parallel Run ...&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This script can be dropped in to a file (say cp2k.job) and started with the command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sbatch cp2k.job&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Running the H2O-32 test case should take less than 5 minutes and will produce a SLURM output file (named&lt;br /&gt;
slurm-&amp;lt;jobid&amp;gt;.out by default). The CP2K application results will be written into the user-specified file at the end&lt;br /&gt;
of the CP2K command line after the greater-than sign. Here it is named &#039;H2O-32.out&#039;.  The expression &#039;2&amp;gt;&amp;amp;1&#039; combines&lt;br /&gt;
Unix standard output from the program with Unix standard error.  Users should always explicitly specify the name of the&lt;br /&gt;
application&#039;s output file in this way to ensure that it is written directly into the user&#039;s working directory which has much&lt;br /&gt;
more disk space than the SLURM spool directory on /var.&lt;br /&gt;
&lt;br /&gt;
Details on the meaning of the SLURM script are covered above in the SLURM section. The most important lines are the &#039;#SBATCH&#039;&lt;br /&gt;
resource-request lines, which instruct SLURM to allocate 8 tasks (cores) for the job with 2,880 MB&lt;br /&gt;
of memory for each; SLURM then places the job wherever the least-used resources are found.&lt;br /&gt;
The master compute node that SLURM finally selects to run your job will be printed in the SLURM output file by the &#039;hostname&#039;&lt;br /&gt;
command.&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=OCTOPUS&amp;diff=143</id>
		<title>OCTOPUS</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=OCTOPUS&amp;diff=143"/>
		<updated>2022-10-27T20:11:46Z</updated>

		<summary type="html">&lt;p&gt;James: Text replacement - &amp;quot;[pP][bB][sS]&amp;quot; to &amp;quot;SLURM&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Selecting the right queue based on system activity will ensure that your job starts as soon as possible. Complete information about the Octopus package can be found at its homepage, http://www.tddft.org/programs/octopus.  The on-line user manual is available at http://www.tddft.org/programs/octopus/wiki/index.php/Manual. &lt;br /&gt;
&lt;br /&gt;
The MPI parallel version of Octopus has been installed on PENZIAS and ANDY (an older release is also installed on ANDY) with all its associated libraries (metis, netcdf, sparsekit, etsfio, etc.). It was built with an Intel-compiled version of OpenMPI 1.6.4 and has passed all of its internal test cases. &lt;br /&gt;
&lt;br /&gt;
A sample Octopus input file (required to have the name &#039;inp&#039;) is provided here:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Sample data file:&lt;br /&gt;
#&lt;br /&gt;
# This is a simple data file. It will complete a gas phase ground-state&lt;br /&gt;
# calculation for a neon atom. Please consult the Octopus manual for a&lt;br /&gt;
# brief explanation of each section and the variables.&lt;br /&gt;
#&lt;br /&gt;
FromScratch = yes&lt;br /&gt;
&lt;br /&gt;
CalculationMode = gs&lt;br /&gt;
&lt;br /&gt;
ParallelizationStrategy = par_domains&lt;br /&gt;
&lt;br /&gt;
Dimensions = 1&lt;br /&gt;
Spacing = 0.2&lt;br /&gt;
Radius = 50.0&lt;br /&gt;
ExtraStates = 1&lt;br /&gt;
&lt;br /&gt;
TheoryLevel = independent_particles&lt;br /&gt;
&lt;br /&gt;
%Species&lt;br /&gt;
  &amp;quot;Neon1D&amp;quot; | 1 | spec_user_defined | 10 | &amp;quot;-10/sqrt(0.25 + x^2)&amp;quot;&lt;br /&gt;
%&lt;br /&gt;
&lt;br /&gt;
%Coordinates&lt;br /&gt;
  &amp;quot;Neon1D&amp;quot; | 0&lt;br /&gt;
%&lt;br /&gt;
&lt;br /&gt;
ConvRelDens = 1e-7&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Octopus offers its users two distinct and combinable strategies to parallelize its runs.  The first and default is to &lt;br /&gt;
parallelize by domain decomposition of the mesh (METIS is used).  In the input deck above, this method is chosen&lt;br /&gt;
explicitly (ParallelizationStrategy = par_domains).  The second is to compute the entire domain on each processor, but&lt;br /&gt;
to do so for some number of distinct temporal states (ParallelizationStrategy = par_states).  Users wishing to control&lt;br /&gt;
the details of Octopus when run in parallel are advised to consult the advanced options section of the manual&lt;br /&gt;
at http://www.tddft.org/programs/octopus/wiki/index.php/Manual:Advanced_ways_of_running_Octopus.&lt;br /&gt;
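&lt;br /&gt;
For example, to select the states-based strategy instead of the default domain decomposition, the corresponding line in the input deck above would become:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ParallelizationStrategy = par_states&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;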
&lt;br /&gt;
A sample SLURM batch job submission script that will run on PENZIAS with the above input file is shown here:&lt;br /&gt;
   &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/csh&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name neon_gstate&lt;br /&gt;
# The next statements request 8 tasks of 1 core and&lt;br /&gt;
# 3840mb of memory each (the pro-rated limit per&lt;br /&gt;
# core on PENZIAS), and allow SLURM to freely place&lt;br /&gt;
# those tasks on the least loaded nodes.&lt;br /&gt;
#SBATCH --ntasks=8&lt;br /&gt;
#SBATCH --mem-per-cpu=3840&lt;br /&gt;
&lt;br /&gt;
# Check to see if the Octopus module is loaded.&lt;br /&gt;
(which octopus_mpi &amp;gt; /dev/null) &amp;gt;&amp;amp; /dev/null&lt;br /&gt;
if ($status) then&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
echo &amp;quot;Please run: &#039;module load octopus&#039;&amp;quot;&lt;br /&gt;
echo &amp;quot;before submitting this script. Exiting ... &amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
exit&lt;br /&gt;
else&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
# Find out name of master execution host (compute node)&lt;br /&gt;
echo -n &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; SLURM Master compute node is: &amp;quot;&lt;br /&gt;
hostname&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Must explicitly change to your working directory under SLURM&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
# Set up OCTOPUS environment, working, and temporary directory&lt;br /&gt;
&lt;br /&gt;
setenv OCTOPUS_ROOT /share/apps/octopus/default&lt;br /&gt;
&lt;br /&gt;
setenv OCT_WorkDir \&#039;$SLURM_SUBMIT_DIR\&#039;&lt;br /&gt;
&lt;br /&gt;
setenv MY_SCRDIR `whoami;date &#039;+%m.%d.%y_%H:%M:%S&#039;`&lt;br /&gt;
setenv MY_SCRDIR `echo $MY_SCRDIR | sed -e &#039;s; ;_;&#039;`&lt;br /&gt;
&lt;br /&gt;
setenv SCRATCH_DIR  /state/partition1/oct4.1_scr/${MY_SCRDIR}_$$&lt;br /&gt;
mkdir -p $SCRATCH_DIR&lt;br /&gt;
setenv OCT_TmpDir \&#039;/state/partition1/oct4.1_scr/${MY_SCRDIR}_$$\&#039;&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;The scratch directory for this run is: $OCT_TmpDir&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Start OCTOPUS job&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; Begin OCTOPUS MPI Parallel Run ...&amp;quot;&lt;br /&gt;
mpirun -np 8 octopus_mpi &amp;gt; neon_gstate.out&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; End   OCTOPUS MPI Parallel Run ...&amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Clean up scratch files by default&lt;br /&gt;
&lt;br /&gt;
/bin/rm -r $SCRATCH_DIR&lt;br /&gt;
&lt;br /&gt;
echo &#039;Your Octopus job is done!&#039;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This script requests 8 tasks, each with 1 processor.  The memory requested is sized&lt;br /&gt;
to PENZIAS&#039;s pro-rated maximum memory per core. Please consult the sections on the SLURM batch scheduling system below&lt;br /&gt;
for information on how to modify this sample deck for different processor counts. The rest of the script describes its&lt;br /&gt;
action in comments.  Before this script will run, the user must load the Octopus module with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load octopus&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which by default loads Octopus version 4.1.1. This script would need to be modified as follows to run on ANDY:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Reduce the per-core memory request from 3840mb to 2880mb (ANDY&#039;s&lt;br /&gt;
# pro-rated limit per core), and point the scratch directories at&lt;br /&gt;
# ANDY&#039;s /scratch file system:&lt;br /&gt;
&lt;br /&gt;
setenv SCRATCH_DIR  /scratch/&amp;lt;user_id&amp;gt;/octopus/oct4.1_scr/${MY_SCRDIR}_$$&lt;br /&gt;
setenv OCT_TmpDir \&#039;/scratch/&amp;lt;user_id&amp;gt;/octopus/oct4.1_scr/${MY_SCRDIR}_$$\&#039;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Users should become aware of the scaling properties of their work by taking note of the run times at various processor counts.&lt;br /&gt;
When doubling processor count improves SCF cycle time only by a modest percentage then further increases in processor counts&lt;br /&gt;
should be avoided. The ANDY system has two distinct interconnects. One is a DDR InfiniBand network that delivers 20 Gbits per&lt;br /&gt;
second of performance and the other is a QDR InfiniBand network that delivers 40 Gbits per second. Either will serve Octopus&lt;br /&gt;
users well, but the QDR network should provide somewhat better scaling.  PENZIAS has a still faster FDR InfiniBand network and&lt;br /&gt;
should provide the best scaling. The HPC is interested in the scaling you observe on its systems and reports are welcome.&lt;br /&gt;
&lt;br /&gt;
In the example above, the &#039;production&#039; queue has been requested, which works on both ANDY (DDR InfiniBand) and PENZIAS (FDR&lt;br /&gt;
InfiniBand); by adding a terminating &#039;_qdr&#039; one can select the QDR interconnect on ANDY.&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=THRUST&amp;diff=142</id>
		<title>THRUST</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=THRUST&amp;diff=142"/>
		<updated>2022-10-27T20:11:43Z</updated>

		<summary type="html">&lt;p&gt;James: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Thrust provides a rich collection of data parallel primitives such as scan, sort, and reduce, which can be combined&lt;br /&gt;
to implement complex algorithms with concise, readable source code. By describing your computation in&lt;br /&gt;
terms of these high-level abstractions you provide Thrust with the freedom to select the most efficient implementation&lt;br /&gt;
automatically. As a result, Thrust can be utilized in rapid prototyping of CUDA applications, where programmer&lt;br /&gt;
productivity matters most, as well as in production, where robustness and absolute performance are crucial.&lt;br /&gt;
&lt;br /&gt;
More detail on the Thrust library is available here [http://code.google.com/p/thrust/wiki/QuickStartGuide].  There&lt;br /&gt;
are a collection of example codes here [http://code.google.com/p/thrust/downloads/list].  The Thrust Manual is&lt;br /&gt;
available here [http://code.google.com/p/thrust/downloads/detail?name=Thrust%20-%20A%20Productivity-Oriented%20Library%20for%20CUDA.pdf]&lt;br /&gt;
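&lt;br /&gt;
As a quick illustration of one of the primitives named above (a sketch, not part of the Thrust example collection), the following sums a device vector with thrust::reduce:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;thrust/device_vector.h&amp;gt;&lt;br /&gt;
#include &amp;lt;thrust/reduce.h&amp;gt;&lt;br /&gt;
#include &amp;lt;thrust/functional.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(void)&lt;br /&gt;
{&lt;br /&gt;
    // a device vector of four elements, each set to 10&lt;br /&gt;
    thrust::device_vector&amp;lt;int&amp;gt; D(4, 10);&lt;br /&gt;
&lt;br /&gt;
    // sum the elements in parallel on the device&lt;br /&gt;
    int sum = thrust::reduce(D.begin(), D.end(), 0, thrust::plus&amp;lt;int&amp;gt;());&lt;br /&gt;
&lt;br /&gt;
    std::cout &amp;lt;&amp;lt; &amp;quot;sum = &amp;quot; &amp;lt;&amp;lt; sum &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;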
&lt;br /&gt;
Here is a basic C++ example code, which creates and fills a vector on the Host, resizes it, copies it to the&lt;br /&gt;
Device, modifies it there, and prints out the modified values.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;thrust/host_vector.h&amp;gt;&lt;br /&gt;
#include &amp;lt;thrust/device_vector.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(void)&lt;br /&gt;
{&lt;br /&gt;
    // H has storage for 4 integers&lt;br /&gt;
    thrust::host_vector&amp;lt;int&amp;gt; H(4);&lt;br /&gt;
&lt;br /&gt;
    // initialize individual elements&lt;br /&gt;
    H[0] = 14;&lt;br /&gt;
    H[1] = 20;&lt;br /&gt;
    H[2] = 38;&lt;br /&gt;
    H[3] = 46;&lt;br /&gt;
    &lt;br /&gt;
    // H.size() returns the size of vector H&lt;br /&gt;
    std::cout &amp;lt;&amp;lt; &amp;quot;H has size &amp;quot; &amp;lt;&amp;lt; H.size() &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
&lt;br /&gt;
    // print contents of H&lt;br /&gt;
    for(int i = 0; i &amp;lt; H.size(); i++)&lt;br /&gt;
        std::cout &amp;lt;&amp;lt; &amp;quot;H[&amp;quot; &amp;lt;&amp;lt; i &amp;lt;&amp;lt; &amp;quot;] = &amp;quot; &amp;lt;&amp;lt; H[i] &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
&lt;br /&gt;
    // resize H&lt;br /&gt;
    H.resize(2);&lt;br /&gt;
    &lt;br /&gt;
    std::cout &amp;lt;&amp;lt; &amp;quot;H now has size &amp;quot; &amp;lt;&amp;lt; H.size() &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
&lt;br /&gt;
    // Copy host_vector H to device_vector D&lt;br /&gt;
    thrust::device_vector&amp;lt;int&amp;gt; D = H;&lt;br /&gt;
    &lt;br /&gt;
    // elements of D can be modified&lt;br /&gt;
    D[0] = 99;&lt;br /&gt;
    D[1] = 88;&lt;br /&gt;
    &lt;br /&gt;
    // print contents of D&lt;br /&gt;
    for(int i = 0; i &amp;lt; D.size(); i++)&lt;br /&gt;
        std::cout &amp;lt;&amp;lt; &amp;quot;D[&amp;quot; &amp;lt;&amp;lt; i &amp;lt;&amp;lt; &amp;quot;] = &amp;quot; &amp;lt;&amp;lt; D[i] &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
&lt;br /&gt;
    // H and D are automatically deleted when the function returns&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Assuming this source file is called &#039;vectcopy.cu&#039;, it can be compiled on PENZIAS with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
nvcc -o vectcopy.exe vectcopy.cu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once compiled, the &#039;vectcopy.exe&#039; executable can be run using the following SLURM script:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production_gpu&lt;br /&gt;
#SBATCH --job-name THRUST_vcopy&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --gres=gpu:1&lt;br /&gt;
&lt;br /&gt;
# Find out which compute node the job is using&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
echo -n &amp;quot;Running job on compute node ... &amp;quot; &lt;br /&gt;
hostname&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
echo -n &amp;quot;SLURM assigned node list ... &amp;quot;&lt;br /&gt;
echo $SLURM_JOB_NODELIST&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Change to working directory&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
# Running executable on a single, gpu-enabled&lt;br /&gt;
# compute node using 1 CPU and 1 GPU.&lt;br /&gt;
echo &amp;quot;CUDA job is starting ... &amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
./vectcopy.exe&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
echo &amp;quot;CUDA job is done!&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=PHOENICS&amp;diff=141</id>
		<title>PHOENICS</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=PHOENICS&amp;diff=141"/>
		<updated>2022-10-27T20:11:18Z</updated>

		<summary type="html">&lt;p&gt;James: Text replacement - &amp;quot;[pP][bB][sS]&amp;quot; to &amp;quot;SLURM&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;As suggested, the entire PHOENICS 2011 package is installed on ANDY and users can run the X11 version of the PHOENICS Commander&lt;br /&gt;
display tool from ANDY&#039;s head node if they have connected using &#039;ssh -X andy.csi.cuny.edu&#039; where the &#039;-X&#039; option ensures that&lt;br /&gt;
X11 images are passed back to the original client.  Doing this from outside the College of Staten Island campus where the CUNY&lt;br /&gt;
HPC Center is located may produce poor results because the X11 traffic will have to be forwarded through the HPC Center gateway&lt;br /&gt;
system.  CUNY has also licensed a number of seats for office-local desktop installations of PHOENICS (for either Windows or Linux)&lt;br /&gt;
so that this should not be necessary.  Job preparation and post-processing work is generally most efficiently accomplished on the&lt;br /&gt;
local desktop using the Windows version of PHOENICS VR, which can be run directly or from PHOENICS Commander.&lt;br /&gt;
&lt;br /&gt;
A rough general outline of the PHOENICS work cycle is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
1.  The user runs VR Editor (preprocessor) on their workstation (or on ANDY) and&lt;br /&gt;
    perhaps selects a library case (e.g. 274) making changes to this case to match&lt;br /&gt;
    his/her specific requirements.&lt;br /&gt;
 &lt;br /&gt;
2.  The user leaves the VR editor where input files &#039;q1&#039; and &#039;eardat&#039; are created.  &lt;br /&gt;
    If the user is preprocessing from their desktop, these files would then be &lt;br /&gt;
    transferred to ANDY using the &#039;scp&#039; command or via the &#039;PuTTy&#039; utility for &lt;br /&gt;
    Windows.&lt;br /&gt;
 &lt;br /&gt;
3.  The user runs the solver on ANDY (typically the parallel version, &#039;parexe&#039;) from&lt;br /&gt;
    their working directory using the SLURM batch submit script presented below.  This&lt;br /&gt;
    script reads the files &#039;q1&#039; and &#039;eardat&#039; (and potentially some other input files)&lt;br /&gt;
    and writes the key output files &#039;phi&#039; and &#039;result&#039;. &lt;br /&gt;
 &lt;br /&gt;
4.  The user copies these output files back to their desktop (or not) and runs VR&lt;br /&gt;
    Viewer (postprocessor) which reads the graphics output file &#039;phi&#039;, or the user&lt;br /&gt;
    views tabular results manually in the &#039;result&#039; file.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
POLIS, available in Linux and Windows, has further useful information on running PHOENICS&lt;br /&gt;
including tutorials, viewing documentation, and on all PHOENICS commands and topics &lt;br /&gt;
here [http://www.cham.co.uk/phoenics/d_polis/polis.htm]. Graphical monitoring&lt;br /&gt;
should be deactivated during parallel runs in ANDY&#039;s batch queue. To do this users should place&lt;br /&gt;
two leading spaces in front of the command TSTSWP in the &#039;q1&#039; file. The TSTSWP command is&lt;br /&gt;
present in most library cases, including case 274 which is a useful test case.  Graphical monitoring&lt;br /&gt;
can be left turned on when running sequential &#039;earexe&#039; on the desktop. This gives useful realtime&lt;br /&gt;
information on sweeps, values, and the convergence progress. &lt;br /&gt;
&lt;br /&gt;
Details on the use of the display and non-parallel PHOENICS tools can be found at the CHAM website&lt;br /&gt;
and in the CHAM Encyclopaedia here [http://www.cham.co.uk/phoenics/d_polis/polis.htm].&lt;br /&gt;
&lt;br /&gt;
The process of setting up a PHOENICS working directory and running the parallel version of &#039;earth&#039;&lt;br /&gt;
(parexe) on ANDY is described below.  As a first step, users would typically create a directory&lt;br /&gt;
called &#039;phoenics&#039; in their $HOME directory as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd; mkdir phoenics&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Next, the default PHOENICS installation root directory (version 2011 is the current default) named&lt;br /&gt;
above should be symbolically linked to the &#039;lp36&#039; subdirectory:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd phoenics&lt;br /&gt;
ln -s /share/apps/phoenics/default ./lp36&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The user must then generate the required input files for the &#039;earth&#039; module which, as mentioned above&lt;br /&gt;
in the PHOENICS work cycle section, are the &#039;q1&#039; and &#039;eardat&#039; files created by the VR Editor.  These can&lt;br /&gt;
be generated on ANDY, but it is generally easier to do this from the user&#039;s desktop installation of PHOENICS. &lt;br /&gt;
&lt;br /&gt;
Because the current default version of PHOENICS, version 2011, was built with an older version of MPI that is no&lt;br /&gt;
longer the default, users must use the modules command to unload the current defaults and&lt;br /&gt;
load the previous set before submitting the PHOENICS SLURM script below.  This is a fairly simple procedure:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$module list&lt;br /&gt;
Currently Loaded Modulefiles:&lt;br /&gt;
  1) SLURM/11.3.0.121723     2) cuda/5.0              3) intel/13.0.1.117      4) openmpi/1.6.3_intel&lt;br /&gt;
$&lt;br /&gt;
$module unload intel/13.0.1.117&lt;br /&gt;
$module unload openmpi/1.6.3_intel&lt;br /&gt;
$&lt;br /&gt;
$module load intel/12.1.3.293&lt;br /&gt;
$&lt;br /&gt;
$module load openmpi/1.5.5_intel&lt;br /&gt;
Note: Intel compilers will be set to version 12.1.3.293&lt;br /&gt;
$&lt;br /&gt;
$module list&lt;br /&gt;
Currently Loaded Modulefiles:&lt;br /&gt;
  1) SLURM/11.3.0.121723     2) cuda/5.0              3) intel/12.1.3.293      4) openmpi/1.5.5_intel&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once the input files are created and placed (transferred) into the working directory and the older modules&lt;br /&gt;
have been loaded on ANDY, the following SLURM batch script can be used to run the job on ANDY.  The progress&lt;br /&gt;
of the job can be tracked with the SLURM &#039;squeue&#039; command.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production_qdr&lt;br /&gt;
#SBATCH --job-name phx_test&lt;br /&gt;
#SBATCH --ntasks=8&lt;br /&gt;
#SBATCH --mem-per-cpu=2880&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Find out name of master execution host (compute node)&lt;br /&gt;
echo -n &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; SLURM Master compute node is: &amp;quot;&lt;br /&gt;
hostname&lt;br /&gt;
&lt;br /&gt;
# Take a look at the set of compute nodes that SLURM gave you&lt;br /&gt;
echo $SLURM_JOB_NODELIST&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to the working directory in SLURM&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
# Just point to the parallel executable to run&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; Begin PHOENICS MPI Parallel Run ...&amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
echo &amp;quot;mpirun -np 8 ./lp36/d_earth/parexe&amp;quot;&lt;br /&gt;
mpirun -np 8 ./lp36/d_earth/parexe&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; End   PHOENICS MPI Parallel Run ...&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The job can be submitted with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sbatch 8Proc.job&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Constructing a SLURM batch script is described in detail elsewhere in this Wiki document,&lt;br /&gt;
but in short this script requests the QDR InfiniBand production queue (&#039;production_qdr&#039;),&lt;br /&gt;
which runs the job on the side of ANDY with the fastest interconnect.  It asks for 8 processors&lt;br /&gt;
(cores), each with 2880 Mbytes of memory, and allows SLURM to select those processors based on&lt;br /&gt;
least loaded criteria.  Because this is just an 8 processor job, it could be packed onto a&lt;br /&gt;
single physical node on ANDY for better scaling by adding &#039;#SBATCH --nodes=1&#039;, but this would&lt;br /&gt;
delay its start, as SLURM would have to locate a completely free node.&lt;br /&gt;
&lt;br /&gt;
During the run, &#039;parexe&#039; creates (N-1) directories (named Proc00#) where N is the number of&lt;br /&gt;
processors requested (note: if the Proc00# directories do not exist already they will be created, &lt;br /&gt;
but there will be an error message in the SLURM error log, which can be ignored).  The output &lt;br /&gt;
from process zero is written into the working directory from which the script was submitted.&lt;br /&gt;
The output from each of the other MPI processes is written into its associated &#039;Proc00#&#039; directory.&lt;br /&gt;
Upon successful completion, the &#039;result&#039; file should show that the requested number of iterations&lt;br /&gt;
(sweeps) was completed and print the starting and ending wall-clock times.  At this point, the&lt;br /&gt;
results (the &#039;phi&#039; and &#039;result&#039; files) from the SLURM parallel job can be copied back to the user&#039;s&lt;br /&gt;
desktop for post-processing.&lt;br /&gt;
&lt;br /&gt;
NOTE:  A bug is present in the non-graphical, batch version of PHOENICS that is used on&lt;br /&gt;
the CUNY HPC Clusters.  This problem does not occur in Windows runs. To avoid the problem&lt;br /&gt;
a workaround modification to the &#039;q1&#039; input file is required. The problem occurs only in jobs&lt;br /&gt;
that require SWEEP counts greater than 10,000 (e.g. SWEEP=20000).  Users requesting&lt;br /&gt;
larger SWEEP counts must include the following in their &#039;q1&#039; input files to avoid having their&lt;br /&gt;
jobs terminated at 10,000 SWEEPS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
USTEER=F&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This addition forces a bypass of the graphical IO monitoring capability in PHOENICS and prevents&lt;br /&gt;
that section of code from capping the SWEEP count at 10,000 SWEEPs.&lt;br /&gt;
&lt;br /&gt;
Finally, PHOENICS has been licensed broadly by the CUNY HPC Center, and it can provide activation&lt;br /&gt;
keys for any desktop copies whose annual activation keys expire.&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Q/A&amp;diff=140</id>
		<title>Q/A</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Q/A&amp;diff=140"/>
		<updated>2022-10-27T19:55:20Z</updated>

		<summary type="html">&lt;p&gt;James: Text replacement - &amp;quot;[pP][bB][sS]&amp;quot; to &amp;quot;SLURM&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Q1. Dear All,&lt;br /&gt;
&lt;br /&gt;
I am trying to run my job, but PENZIAS doesn&#039;t accept it. I need to request 16 cores per chunk with the matching memory.&lt;br /&gt;
Please let me know what my mistake is. My job script is below:&lt;br /&gt;
&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SLURM -q production&lt;br /&gt;
#SLURM -N test_nwchem&lt;br /&gt;
#SLURM -l select=2:ncpus=16:mem=23040mb&lt;br /&gt;
#SLURM -l place=free&lt;br /&gt;
#SLURM -V&lt;br /&gt;
cd $SLURM_O_WORKDIR&lt;br /&gt;
&lt;br /&gt;
A1. There are two mistakes here. First, from SLURM&#039;s point of view there are no nodes with 16 cores; SLURM &amp;quot;sees&amp;quot; all nodes as having 8 cores. Second, the maximum available memory per core is 3686 mb.&lt;br /&gt;
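&lt;br /&gt;
A request that fits within those limits (a sketch only; adjust the chunk count to the total cores actually needed) would ask for at most 8 cores and 8 x 3686 mb of memory per chunk, e.g.:&lt;br /&gt;
&lt;br /&gt;
#SLURM -l select=4:ncpus=8:mem=29488mb&lt;br /&gt;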
&lt;br /&gt;
Q2. I am trying to submit a job but the queue production_qdr does not exist. &lt;br /&gt;
&lt;br /&gt;
A2. On PENZIAS there is no queue named production_qdr. On PENZIAS the main queue is called production. This is noted in the text and in the example script on the NWChem wiki page.&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Program_Compilation&amp;diff=139</id>
		<title>Program Compilation</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Program_Compilation&amp;diff=139"/>
		<updated>2022-10-27T19:55:06Z</updated>

		<summary type="html">&lt;p&gt;James: Text replacement - &amp;quot;[pP][bB][sS]&amp;quot; to &amp;quot;SLURM&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Program Compilation and Job Submission =&lt;br /&gt;
&lt;br /&gt;
== [[Serial Program Compilation]] ==&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center supports four different compiler suites at this time: those from Cray, Intel, The Portland Group, and GNU.&lt;br /&gt;
Basic serial programs in C, C++, and Fortran can be compiled with any of these offerings, although the Cray compilers are available&lt;br /&gt;
only on SALK. Man pages (e.g. for Cray, man cc; for Intel, man icc; for PGI, man pgcc; for GNU, man gcc) and manuals exist for each&lt;br /&gt;
compiler in each suite and provide details on specific compiler flags.  Optimized performance on a particular system with a particular&lt;br /&gt;
compiler often depends on the compiler options chosen.  Identical flags are accepted by the MPI-wrapped versions of each compiler&lt;br /&gt;
(mpicc, mpif90, etc. [NOTE: SALK does not use mpi-prefixed MPI compile and run tools; it has its own]). Program debuggers and&lt;br /&gt;
performance profilers are also part of each of these suites.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;The Intel Compiler Suite&#039;&#039;&#039;&lt;br /&gt;
:Intel&#039;s Cluster Studio (ICS) compilers, debuggers, profilers, and libraries are available on all HPC Center cluster systems, including the Cray system, SALK. &lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;The Portland Group Compiler Suite &#039;&#039;&#039;&lt;br /&gt;
:The Portland Group Inc. (PGI) compilers, debuggers, profilers, and libraries are available on all HPC Center cluster systems including the Cray system, SALK. &lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;The Cray Compiler Suite &#039;&#039;&#039;&lt;br /&gt;
:The HPC Center&#039;s Cray XE6m system, SALK, includes the Cray Compiler Environment (CCE) provided by Cray along with the others described here.  &lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;The GNU Compiler Suite &#039;&#039;&#039;&lt;br /&gt;
:The GNU compilers, debuggers, profilers, and libraries are available on all HPC Center cluster systems although, unlike the other compilers mentioned, the default and mix of installed versions may not be the same on each system.  This is because the HPC Center runs different versions of Linux (SUSE and CentOS) at different release levels.&lt;br /&gt;
&lt;br /&gt;
=== The Intel Compiler Suite ===&lt;br /&gt;
Intel&#039;s Cluster Studio (ICS) compilers, debuggers, profilers, and libraries are available on all HPC Center cluster systems,&lt;br /&gt;
including the Cray system, SALK. &lt;br /&gt;
&lt;br /&gt;
To check for the default version installed on systems other than SALK:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
icc  -V&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Compiling a [[serial]] C program on systems other than SALK:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
icc  -O3 -unroll mycode.c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The line above invokes Intel&#039;s C compiler (also used by the default OpenMPI &#039;mpicc&#039; wrapper for icc).  It requests&lt;br /&gt;
level 3 optimization and asks that loops be unrolled for performance.  To find out more about &#039;icc&#039;, type &#039;man icc&#039;.  &lt;br /&gt;
&lt;br /&gt;
Similarly for Intel Fortran and C++.&lt;br /&gt;
&lt;br /&gt;
Compiling a [[serial]] Fortran program on systems other than SALK:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ifort -O3 -unroll mycode.f90&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Compiling a [[serial]] C++ program on systems other than SALK:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
icpc -O3 -unroll mycode.C&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On SALK, Cray&#039;s generic wrappers (cc, CC, ftn) are used for each compiler suite (Intel, PGI, Cray, GNU). To&lt;br /&gt;
map SALK&#039;s Cray wrappers to the Intel compiler suite, users must unload the default Cray compiler modules&lt;br /&gt;
and load the Intel compiler modules, as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module unload cce&lt;br /&gt;
module unload PrgEnv-cray&lt;br /&gt;
module load PrgEnv-intel&lt;br /&gt;
module load intel&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This completes the following mappings and also sets the Cray environment to link against Cray&#039;s custom, interconnect-aware&lt;br /&gt;
build of MPICH2, as well as other Intel-specific Cray library builds.  Once the Intel modules are loaded, you&lt;br /&gt;
may compile either serial or MPI parallel programs on SALK:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cc   ==&amp;gt;   icc&lt;br /&gt;
CC   ==&amp;gt;   icpc&lt;br /&gt;
ftn  ==&amp;gt;   ifort&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once the mapping is complete the mapped commands listed above will invoke the corresponding Intel&lt;br /&gt;
compiler and recognize that compiler&#039;s Intel options.  NOTE:  Using the Intel compiler names directly on SALK&lt;br /&gt;
will likely cause a problem as the Cray specific libraries (such as the Cray version of MPI) will not be included&lt;br /&gt;
in the link phase, unless the intention is to run the executable only on the Cray login node.&lt;br /&gt;
&lt;br /&gt;
So to compile a [[serial]] (or MPI parallel) C program on SALK after loading the Intel modules:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cc -O3 -unroll mycode.c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Doing the same on SALK for Intel Fortran and C++ programs is left as an exercise for the reader.&lt;br /&gt;
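&lt;br /&gt;
For completeness, the equivalent commands (assuming the Intel modules above are loaded, so that &#039;ftn&#039; and &#039;CC&#039; are mapped to &#039;ifort&#039; and &#039;icpc&#039;) would look something like this:&lt;br /&gt;
&lt;br /&gt;

```shell
# Fortran on SALK via the Cray wrapper, now mapped to ifort
ftn -O3 -unroll mycode.f90

# C++ on SALK via the Cray wrapper, now mapped to icpc
CC -O3 -unroll mycode.C
```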
&lt;br /&gt;
=== The Portland Group Compiler Suite ===&lt;br /&gt;
The Portland Group Inc. (PGI) compilers, debuggers, profilers, and libraries are available on all HPC Center cluster systems&lt;br /&gt;
including the Cray system, SALK. &lt;br /&gt;
&lt;br /&gt;
To check for the default version installed on systems other than SALK:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
pgcc  -V&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Compiling a [[serial]] C program on systems other than SALK:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
pgcc  -O3 -Munroll mycode.c&lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The line above invokes PGI&#039;s C compiler (also used by the PGI OpenMPI &#039;mpicc&#039; wrapper for pgcc).  It requests&lt;br /&gt;
level 3 optimization and asks that loops be unrolled for performance.  To find out more about &#039;pgcc&#039;, type &#039;man pgcc&#039;.  &lt;br /&gt;
&lt;br /&gt;
Similarly for PGI Fortran and C++.&lt;br /&gt;
&lt;br /&gt;
Compiling a [[serial]] Fortran program on systems other than SALK:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 pgf95 -O3 -Munroll mycode.f&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Compiling a [[serial]] C++ program on systems other than SALK:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 pgCC -O3 -Munroll  mycode.C&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On SALK, Cray&#039;s generic wrappers (cc, CC, ftn) are used for each compiler suite (Intel, PGI, Cray, GNU). To&lt;br /&gt;
map SALK&#039;s Cray wrappers to the PGI compiler suite, users must unload the default Cray compiler modules&lt;br /&gt;
and load the PGI compiler modules, as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module unload cce&lt;br /&gt;
module unload PrgEnv-cray&lt;br /&gt;
module load PrgEnv-pgi&lt;br /&gt;
module load pgi&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This completes the following mappings and also sets the environment to link to Cray&#039;s custom interconnect&lt;br /&gt;
linked version of MPICH2, as well as other PGI-specific Cray library builds. Once the PGI modules are loaded, you&lt;br /&gt;
may compile [[either]] serial or MPI parallel programs on SALK:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cc      ==&amp;gt;   pgcc&lt;br /&gt;
CC     ==&amp;gt;   pgCC&lt;br /&gt;
ftn     ==&amp;gt;   pgf95&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once the mapping is complete, the mapped commands listed above will invoke the corresponding PGI&lt;br /&gt;
compiler and recognize that compiler&#039;s PGI options.  NOTE:  Using the PGI names directly on SALK will&lt;br /&gt;
likely cause a problem as the Cray-specific libraries (such as the Cray version of MPI) will not be included&lt;br /&gt;
in the link phase, unless the intention is to run the executable only on the Cray login node.&lt;br /&gt;
&lt;br /&gt;
So to compile a [[serial]] (or MPI parallel) C program on SALK after loading the PGI modules:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cc -O3 -Munroll mycode.c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Doing the same on SALK for PGI Fortran and C++ programs is left as an exercise for the reader.&lt;br /&gt;
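&lt;br /&gt;
For completeness, the equivalent commands (assuming the PGI modules above are loaded, so that &#039;ftn&#039; and &#039;CC&#039; are mapped to &#039;pgf95&#039; and &#039;pgCC&#039;) would look something like this:&lt;br /&gt;
&lt;br /&gt;

```shell
# Fortran on SALK via the Cray wrapper, now mapped to pgf95
ftn -O3 -Munroll mycode.f

# C++ on SALK via the Cray wrapper, now mapped to pgCC
CC -O3 -Munroll mycode.C
```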
&lt;br /&gt;
=== The Cray Compiler Suite ===&lt;br /&gt;
The HPC Center&#039;s Cray XE6m system, SALK, includes the Cray Compiler Environment (CCE) provided by Cray&lt;br /&gt;
along with the others described here.  Cray systems use the &#039;modules&#039; utility to select a default compiler&lt;br /&gt;
environment and map Cray&#039;s generic wrappers (cc, CC, ftn) to the compiler suite selected.  More detail is&lt;br /&gt;
provided on &#039;modules&#039; later and is available with &#039;man module.&#039;&lt;br /&gt;
&lt;br /&gt;
Here we show you how to use modules to select Cray&#039;s programming environment which includes the Cray&lt;br /&gt;
Compiler Environment (CCE), although this is typically not necessary because the Cray programming environment is the&lt;br /&gt;
default on SALK.  &lt;br /&gt;
&lt;br /&gt;
Load the Cray programming environment (available on SALK only):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load PrgEnv-cray&lt;br /&gt;
module load cce&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If another compiler suite mentioned elsewhere here has already been loaded you must unload it prior to loading&lt;br /&gt;
the Cray compiler suite.&lt;br /&gt;
&lt;br /&gt;
To check for the default version installed on SALK only:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
richard.walsh@salk:~&amp;gt; cc -V&lt;br /&gt;
/opt/cray/xt-asyncpe/5.13/bin/cc: INFO: Compiling with CRAYPE_COMPILE_TARGET=native.&lt;br /&gt;
Cray C : Version 8.0.7  Fri Oct 05, 2012  15:34:53&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Compiling a C (serial or MPI enabled) program on SALK only:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cc  -O3 -hunroll2 mycode.c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The line above invokes Cray&#039;s C compiler if the Cray modules have been loaded.  It requests level 3 optimization and that all loops be&lt;br /&gt;
unrolled for performance.  To find out more about the Cray C compiler type &#039;man craycc&#039;. &lt;br /&gt;
&lt;br /&gt;
Similarly for Fortran and C++.&lt;br /&gt;
&lt;br /&gt;
Compiling a Fortran (serial or MPI parallel) program on SALK only:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ftn -O3 -O unroll2 mycode.f90&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Compiling a C++ (serial or MPI parallel) program on SALK only:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
CC -O3 -hunroll2 mycode.C&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To find out more about the Cray Fortran and C++ compilers type &#039;man crayftn&#039; or &#039;man crayCC&#039;.&lt;br /&gt;
&lt;br /&gt;
NOTE: On SALK (Cray XE6m) whatever compiler suite is selected using the &#039;module load&#039; command&lt;br /&gt;
shown above becomes the default and the generic command names &#039;cc&#039;, &#039;ftn&#039;, and &#039;CC&#039;  shown&lt;br /&gt;
above are symbolically associated with the underlying compiler suite-specific names of that loaded&lt;br /&gt;
suite (Cray, PGI, Intel, or GNU).  The man pages for these generic names (&#039;cc&#039;, &#039;ftn&#039;, &#039;CC&#039;) provide&lt;br /&gt;
direction as to what the suite-specific names and man pages are.  The suite-specific names are&lt;br /&gt;
also listed in the sections above and below.&lt;br /&gt;
&lt;br /&gt;
=== The GNU Compiler Suite ===&lt;br /&gt;
The GNU compilers, debuggers, profilers, and libraries are available on all HPC Center cluster systems&lt;br /&gt;
although unlike the other compilers mentioned, the default and mix of installed versions may not be the&lt;br /&gt;
same on each system.  This is because the HPC Center runs different versions of Linux (SUSE and CentOS)&lt;br /&gt;
at different release levels. &lt;br /&gt;
&lt;br /&gt;
To check for the default version installed:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gcc  -v&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Compiling a [[serial]] C program on systems other than SALK:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gcc  -O3 -funroll-loops mycode.c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The line above invokes GNU&#039;s C compiler (also used by GNU mpicc).  It requests level 3 optimization and that loops be&lt;br /&gt;
unrolled for performance.  To find out more about &#039;gcc&#039;, type &#039;man gcc&#039;.&lt;br /&gt;
&lt;br /&gt;
Similarly for Fortran and C++.&lt;br /&gt;
&lt;br /&gt;
Compiling a [[serial]] Fortran program on systems other than SALK:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gfortran -O3 -funroll-loops mycode.f90&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Compiling a [[serial]] C++ program (uses g++) on systems other than SALK:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
g++ -O3 -funroll-loops mycode.C&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On SALK, Cray&#039;s generic wrappers (cc, CC, ftn) are used for each compiler suite (Intel, PGI, Cray, GNU). To&lt;br /&gt;
map SALK&#039;s Cray wrappers to the GNU compiler suite, users must unload the default Cray compiler modules&lt;br /&gt;
and load the GNU compiler modules, as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module unload PrgEnv-intel&lt;br /&gt;
module load PrgEnv-gnu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This completes the following mappings and also sets the environment to link to Cray&#039;s custom interconnect&lt;br /&gt;
linked version of MPICH2, as well as other GNU-specific Cray library builds. Once the GNU modules are loaded, you&lt;br /&gt;
may compile [[either]] serial or MPI parallel programs on SALK:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cc      ==&amp;gt;   gcc&lt;br /&gt;
CC     ==&amp;gt;   g++&lt;br /&gt;
ftn     ==&amp;gt;   gfortran&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once the mapping is complete, the mapped commands listed above will invoke the corresponding GNU&lt;br /&gt;
compiler and recognize that compiler&#039;s GNU options.  NOTE:  Using the GNU names directly on SALK will&lt;br /&gt;
likely cause a problem as the Cray specific libraries (such as the Cray version of MPI) will not be included&lt;br /&gt;
in the link phase, unless the intention is to run the executable only on the Cray login node.&lt;br /&gt;
&lt;br /&gt;
So to compile a [[serial]] (or MPI parallel) C program on SALK after loading the GNU modules:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cc -O3 -funroll-loops mycode.c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Doing the same on SALK for GNU Fortran and C++ programs is left as an exercise for the reader.&lt;br /&gt;
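&lt;br /&gt;
For completeness, the equivalent commands (assuming the GNU modules above are loaded, so that &#039;ftn&#039; and &#039;CC&#039; are mapped to &#039;gfortran&#039; and &#039;g++&#039;) would look something like this:&lt;br /&gt;
&lt;br /&gt;

```shell
# Fortran on SALK via the Cray wrapper, now mapped to gfortran
ftn -O3 -funroll-loops mycode.f90

# C++ on SALK via the Cray wrapper, now mapped to g++
CC -O3 -funroll-loops mycode.C
```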
&lt;br /&gt;
== [[OpenMP, OpenMP SMP-Parallel Program Compilation, and SLURM Job Submission]] ==&lt;br /&gt;
All the compute nodes on all the systems at the CUNY HPC Center include at least 2 sockets and multiple&lt;br /&gt;
cores.  Some have 8 cores (ZEUS, ANDY), and some have 16 (SALK and PENZIAS).  These multicore, SMP compute nodes offer&lt;br /&gt;
the CUNY HPC Center user community the option of creating parallel programs using the OpenMP Symmetric&lt;br /&gt;
Multi-Processing (SMP) parallel programming model.  SMP parallel programming with the OpenMP model&lt;br /&gt;
(and other SMP models) is the original parallel processing model because the earliest parallel HPC systems were&lt;br /&gt;
built only with shared memories.  The Cray X-MP (circa 1982) was among the first systems in this class.  Shared-memory,&lt;br /&gt;
multi-socket, multi-core designs are now typical of even today&#039;s desktop and portable PC and Mac systems.&lt;br /&gt;
On the CUNY HPC Center systems, each compute node is similarly a shared-memory, symmetric multi-processing&lt;br /&gt;
system that can compute in parallel using the OpenMP shared-memory model.&lt;br /&gt;
&lt;br /&gt;
In the SMP model, multiple processors work simultaneously within a single program&#039;s memory space (image).&lt;br /&gt;
This eliminates the need to copy data from one program (process) image to another (required by MPI) and&lt;br /&gt;
simplifies the parallel run-time environment significantly.  As such, writing parallel programs to the OpenMP&lt;br /&gt;
standard is generally easier and requires many fewer lines of code.  However, the size of the problem that can&lt;br /&gt;
be addressed using OpenMP is limited by the amount of memory on a single compute node, and similarly&lt;br /&gt;
the parallel performance improvement to be gained is limited by the number of processors (cores) within that&lt;br /&gt;
single node.&lt;br /&gt;
&lt;br /&gt;
As of Q4 2012 at CUNY&#039;s HPC Center, OpenMP applications can run with a maximum of 16 cores (this is on&lt;br /&gt;
SALK, the Cray XE6m system).  Most of the HPC Center&#039;s other systems are limited to 8-core OpenMP parallelism.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039; Compiling OpenMP Programs Using the Intel Compiler Suite &#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;Compiling OpenMP Programs Using the PGI Compiler Suite &#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;Compiling OpenMP Programs Using the GNU Compiler Suite &#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;Submitting an OpenMP Program to the SLURM Batch Queueing System &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here, a simple OpenMP parallel version of the standard C &amp;quot;Hello, World!&amp;quot; program is set to run on 8 cores:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;omp.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#define NPROCS 8&lt;br /&gt;
&lt;br /&gt;
int main (int argc, char *argv[]) {&lt;br /&gt;
&lt;br /&gt;
   int nthreads, num_threads=NPROCS, tid;&lt;br /&gt;
&lt;br /&gt;
  /* Set the number of threads */&lt;br /&gt;
  omp_set_num_threads(num_threads);&lt;br /&gt;
&lt;br /&gt;
  /* Fork a team of threads giving them their own copies of variables */&lt;br /&gt;
#pragma omp parallel private(nthreads, tid)&lt;br /&gt;
  {&lt;br /&gt;
&lt;br /&gt;
  /* Each thread obtains its thread number */&lt;br /&gt;
  tid = omp_get_thread_num();&lt;br /&gt;
&lt;br /&gt;
  /* Each thread executes this print */&lt;br /&gt;
  printf(&amp;quot;Hello World from thread = %d\n&amp;quot;, tid);&lt;br /&gt;
&lt;br /&gt;
  /* Only the master thread does this */&lt;br /&gt;
  if (tid == 0)&lt;br /&gt;
     {&lt;br /&gt;
      nthreads = omp_get_num_threads();&lt;br /&gt;
      printf(&amp;quot;Total number of threads = %d\n&amp;quot;, nthreads);&lt;br /&gt;
     }&lt;br /&gt;
&lt;br /&gt;
   }  /* All threads join master thread and disband */&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An excellent and comprehensive tutorial on OpenMP with examples can be found at the &lt;br /&gt;
Lawrence Livermore National Lab web site: (https://computing.llnl.gov/tutorials/openMP)&lt;br /&gt;
&lt;br /&gt;
=== Compiling OpenMP Programs Using the Intel Compiler Suite ===&lt;br /&gt;
&lt;br /&gt;
The Intel C compiler requires the &#039;-openmp&#039; option, as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
icc  -o hello_omp.exe -openmp hello_omp.c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When run, the program above produces the following output:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ./hello_omp.exe&lt;br /&gt;
Hello World from thread = 0&lt;br /&gt;
Total number of threads = 8&lt;br /&gt;
Hello World from thread = 1&lt;br /&gt;
Hello World from thread = 2 &lt;br /&gt;
Hello World from thread = 6&lt;br /&gt;
Hello World from thread = 4&lt;br /&gt;
Hello World from thread = 3&lt;br /&gt;
Hello World from thread = 5&lt;br /&gt;
Hello World from thread = 7&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
OpenMP is supported in Intel&#039;s C, C++, and Fortran compilers; as such, a Fortran version of&lt;br /&gt;
the program above could be used to produce similar results.  An important feature of OpenMP&lt;br /&gt;
threads is that they are logical entities that are not by default locked to physical processors.  The code&lt;br /&gt;
above requesting 8 threads would run and produce similar results on a compute node with only&lt;br /&gt;
2 or 4 processors, or even 1 processor.  In these cases, it would simply take more wall-clock&lt;br /&gt;
time to complete.&lt;br /&gt;
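&lt;br /&gt;
For instance, a Fortran build of a comparable OpenMP hello-world (assuming a hypothetical source file hello_omp.f90) would use the same option:&lt;br /&gt;
&lt;br /&gt;

```shell
# Hypothetical Fortran analogue of the C example; '-openmp' enables
# OpenMP in the Intel Fortran compiler just as it does in icc.
ifort -o hello_omp.exe -openmp hello_omp.f90
```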
&lt;br /&gt;
When more threads are requested than the number of physical processors present on the motherboard,&lt;br /&gt;
they simply compete for access to the actual number of physical cores available.  Under such circumstances,&lt;br /&gt;
maximum program speed-ups are limited to the number of unshared physical processors (cores) available to&lt;br /&gt;
the OpenMP job, less the overhead required to start OpenMP (this ignores Intel&#039;s &#039;hyperthreading&#039;, which allows&lt;br /&gt;
two threads to share sub-resources not in simultaneous use within a single processor).&lt;br /&gt;
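&lt;br /&gt;
One way to experiment with this (assuming the example is rebuilt without the hard-coded omp_set_num_threads call, so that the OMP_NUM_THREADS environment variable takes effect) is to deliberately request more threads than the node has cores:&lt;br /&gt;
&lt;br /&gt;

```shell
# Oversubscribe a (hypothetical) 8-core node with 32 threads; the job
# still runs correctly, but the threads time-share the 8 physical cores,
# so the maximum speed-up remains bounded by 8.
export OMP_NUM_THREADS=32
./hello_omp.exe
```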
&lt;br /&gt;
=== Compiling OpenMP Programs Using the PGI Compiler Suite ===&lt;br /&gt;
&lt;br /&gt;
The PGI C compiler requires its &#039;-mp&#039; option for OpenMP programs, as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
pgcc  -o hello_omp.exe -mp hello_omp.c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When run, this PGI executable will produce the &#039;same&#039; output as shown above, although the order of the print&lt;br /&gt;
statements cannot be predicted and will not necessarily be the same over repeated runs.&lt;br /&gt;
&lt;br /&gt;
OpenMP is supported in PGI&#039;s C, C++, and Fortran compilers; therefore a Fortran version &lt;br /&gt;
of the program above could be used to produce similar results.&lt;br /&gt;
&lt;br /&gt;
=== Compiling OpenMP Programs Using the Cray Compiler Suite ===&lt;br /&gt;
&lt;br /&gt;
The Cray C compiler requires its &#039;-h omp&#039; option for OpenMP programs, as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cc  -o hello_omp.exe -h omp hello_omp.c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The program produces the same output, and again the order of the print statements&lt;br /&gt;
cannot be predicted and will not necessarily be the same over repeated runs.&lt;br /&gt;
&lt;br /&gt;
OpenMP is supported in Cray&#039;s C, C++, and Fortran compilers; therefore a Fortran version &lt;br /&gt;
of the program above could be used to produce similar results.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039;  As discussed above in the section on serial program compilation, on the Cray the &#039;cc&#039;,&lt;br /&gt;
or &#039;ftn&#039; or &#039;CC&#039; compiler wrappers would end up being used (with their compiler-specific OpenMP&lt;br /&gt;
flags) for each specific compiler suite after the appropriate programming environment module&lt;br /&gt;
was loaded.&lt;br /&gt;
&lt;br /&gt;
=== Compiling OpenMP Programs Using the GNU Compiler Suite ===&lt;br /&gt;
&lt;br /&gt;
The GNU C compiler requires its &#039;-fopenmp&#039; option for OpenMP programs, as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gcc  -o hello_omp.exe -fopenmp hello_omp.c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The program produces the same output, and again the order of the print statements&lt;br /&gt;
cannot be predicted and will not necessarily be the same over repeated runs.&lt;br /&gt;
&lt;br /&gt;
OpenMP is supported in GNU&#039;s C, C++, and Fortran compilers; therefore a Fortran&lt;br /&gt;
version of the program above could be used to produce similar results.&lt;br /&gt;
&lt;br /&gt;
=== Submitting an OpenMP Program to the SLURM Batch Queueing System ===&lt;br /&gt;
&lt;br /&gt;
All non-trivial jobs (development or production, parallel or serial) must be&lt;br /&gt;
submitted to HPC Center system &#039;&#039;compute nodes&#039;&#039; from each system&#039;s &#039;&#039;head&#039;&#039;&lt;br /&gt;
or &#039;&#039;login node&#039;&#039; using a SLURM script.  Jobs run interactively on system head&lt;br /&gt;
nodes that place a significant and sustained load on the head node will&lt;br /&gt;
be terminated.  Details on the use of SLURM are presented later in this document;&lt;br /&gt;
however, here we present a basic SLURM script (&#039;my_ompjob&#039;) that can be&lt;br /&gt;
used to submit any OpenMP SMP program for batch processing on one of&lt;br /&gt;
the CUNY HPC Center compute nodes. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SLURM -q production&lt;br /&gt;
#SLURM -N openMP_job&lt;br /&gt;
#SLURM -l select=1:ncpus=8&lt;br /&gt;
#SLURM -l place=free&lt;br /&gt;
#SLURM -V&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to your working directory in SLURM&lt;br /&gt;
# The SLURM_O_WORKDIR variable is automatically filled with the path &lt;br /&gt;
# to the directory you submit your job from&lt;br /&gt;
&lt;br /&gt;
cd $SLURM_O_WORKDIR&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# The SLURM_NODEFILE file contains the compute nodes assigned&lt;br /&gt;
# to the job by SLURM.  Uncommenting the next line will show them.&lt;br /&gt;
&lt;br /&gt;
cat $SLURM_NODEFILE&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# It is possible to set the number of threads to be used in&lt;br /&gt;
# an OpenMP program using the environment variable OMP_NUM_THREADS.&lt;br /&gt;
# This setting is not used here because the number of threads (8)&lt;br /&gt;
# was fixed inside the program itself in our example code.&lt;br /&gt;
# export OMP_NUM_THREADS=8&lt;br /&gt;
&lt;br /&gt;
./hello_omp.exe&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When submitted with &#039;qsub my_ompjob&#039;, a job ID XXXX is returned and the output&lt;br /&gt;
will be written to the file &#039;openMP_job.oXXXX&#039; where XXXX is the job ID, unless&lt;br /&gt;
otherwise redirected on the command-line.&lt;br /&gt;
&lt;br /&gt;
The key lines in the script are &#039;-l select&#039; and &#039;-l place&#039;.  The first defines (1) resource&lt;br /&gt;
chunk with &#039;-l select=1&#039; and assigns (8) cores to it with &#039;:ncpus=8&#039;.  SLURM must allocate&lt;br /&gt;
these (8) cores on a single node because they are all part of a single SLURM resource &#039;chunk&#039;&lt;br /&gt;
(&#039;chunks&#039; are atomic) to be used in concert by our OpenMP executable, hello_omp.exe.&lt;br /&gt;
&lt;br /&gt;
Next, the line &#039;-l place=free&#039; instructs SLURM to place this chunk anywhere it can find 8 free&lt;br /&gt;
cores.  As mentioned, SLURM resource &#039;chunks&#039; are indivisible across compute nodes,&lt;br /&gt;
and therefore this job can only run on a single compute node.  It would&lt;br /&gt;
never run on a system with only 4 cores per compute node, and on those with only 8 cores&lt;br /&gt;
per node SLURM would have to find a node with no other jobs running on it.  This is exactly&lt;br /&gt;
what we want for an OpenMP job: a one-to-one mapping of physically free cores to the OpenMP&lt;br /&gt;
threads requested, with no other jobs scheduled by SLURM (or outside SLURM&#039;s purview) to run and&lt;br /&gt;
compete for those 8 cores.&lt;br /&gt;
&lt;br /&gt;
Placement on a node with as many free physical cores as OpenMP threads is optimal for&lt;br /&gt;
OpenMP jobs because each processor assigned to an OpenMP job works within that single&lt;br /&gt;
program&#039;s memory space or image.  If the processors assigned by SLURM were on another&lt;br /&gt;
compute node they would not be usable; if they were assigned to another job on the same&lt;br /&gt;
compute node they would not be fully available to the OpenMP program and would delay its&lt;br /&gt;
completion.&lt;br /&gt;
&lt;br /&gt;
Here, the selection of 8 cores will consume all the cores available on a single compute node on&lt;br /&gt;
ANDY.  This forces SLURM to find and allocate an entire compute node to the OpenMP&lt;br /&gt;
job.  In this case, the OpenMP job will also have all of the memory the compute node has at its disposal,&lt;br /&gt;
knowing that no other jobs will be assigned to it by SLURM.  If fewer cores were selected (say 4), SLURM&lt;br /&gt;
could place another job on the same ANDY compute node using as many as (4) cores.  This&lt;br /&gt;
job would compete for memory resources proportionally, but would have its own cores. SLURM offers&lt;br /&gt;
the &#039;pack:excl&#039; option to force exclusive placement even if the job uses fewer than all the cores on&lt;br /&gt;
the physical node.  One might wish to do this to run a single-core job and have it use all the memory&lt;br /&gt;
on the compute node.&lt;br /&gt;
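&lt;br /&gt;
As a hypothetical sketch (the exact directive spelling, and the script and program names used here, are assumptions; check them against the SLURM section later in this document), such a single-core, exclusive-node request might look like:&lt;br /&gt;
&lt;br /&gt;

```shell
#!/bin/bash
#SLURM -q production
#SLURM -N bigmem_serial_job
# One chunk with one core, but 'pack:excl' keeps the whole node
# (and all of its memory) for this job alone.
#SLURM -l select=1:ncpus=1
#SLURM -l place=pack:excl

cd $SLURM_O_WORKDIR
./serial_bigmem.exe   # hypothetical single-core, memory-hungry program
```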
&lt;br /&gt;
One thing that should be kept in mind when defining SLURM resource requirements and in submitting&lt;br /&gt;
any SLURM script is that jobs with resource requests that are impossible to fulfill on the system where the&lt;br /&gt;
job is submitted will be &#039;&#039;&#039;queued forever and never run&#039;&#039;&#039;.  In our case here, we must know that the&lt;br /&gt;
system that we are submitting this job to has at least 8 processors (cores) available on a single&lt;br /&gt;
physical compute node.  At the HPC Center this job would run on either ANDY or SALK, but would&lt;br /&gt;
be queued indefinitely on any system that has fewer than 8 cores per physical node. This resource&lt;br /&gt;
mapping requirement applies to any resource that you might request in your SLURM script, not just cores.&lt;br /&gt;
Resource definition and mapping is discussed in greater detail in the SLURM section later in this document.&lt;br /&gt;
&lt;br /&gt;
Note that on SALK, the Cray XE6m system, the SLURM script would require the use of Cray&#039;s compute-node,&lt;br /&gt;
job launch command &#039;aprun&#039;, as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SLURM -q production&lt;br /&gt;
#SLURM -N openMP_job&lt;br /&gt;
#SLURM -l select=1:ncpus=16:mem=32768mb&lt;br /&gt;
#SLURM -l place=free&lt;br /&gt;
#SLURM -j oe&lt;br /&gt;
#SLURM -o openMP_job.out&lt;br /&gt;
#SLURM -V&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to your working directory in SLURM&lt;br /&gt;
# The SLURM_O_WORKDIR variable is automatically filled with the path &lt;br /&gt;
# to the directory you submit your job from&lt;br /&gt;
&lt;br /&gt;
cd $SLURM_O_WORKDIR&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# The SLURM_NODEFILE file contains the compute nodes assigned&lt;br /&gt;
# to the job by SLURM.  Uncommenting the next line will show them.&lt;br /&gt;
&lt;br /&gt;
cat $SLURM_NODEFILE&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# It is possible to set the number of threads to be used in&lt;br /&gt;
# an OpenMP program using the environment variable OMP_NUM_THREADS.&lt;br /&gt;
# This setting is not used here because the number of threads (8)&lt;br /&gt;
# was fixed inside the program itself in our example code.&lt;br /&gt;
# export OMP_NUM_THREADS=8&lt;br /&gt;
&lt;br /&gt;
aprun -n 1 -d 16 ./hello_omp.exe&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here, &#039;aprun&#039; is requesting that one process be allocated to a compute node (&#039;-n 1&#039;) and that it be given&lt;br /&gt;
all 16 cores available on a single SALK compute node.  Because the production queue on SALK allows&lt;br /&gt;
no jobs requesting fewer than 16 cores, the &#039;-l select&#039; line was also changed.  The define in the original C&lt;br /&gt;
source code should also be changed to set the number of OpenMP threads to 16 so that no&lt;br /&gt;
allocated cores are wasted on the compute node, as in:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#define NPROCS 16&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== [[MPI, MPI Parallel Program Compilation, and SLURM Batch Job Submission]] ==&lt;br /&gt;
&lt;br /&gt;
The Message Passing Interface (MPI) is a hardware-independent parallel programming and communications library callable from C, C++, or Fortran.  Quoting from the MPI standard:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;MPI is a message-passing application programmer interface (API), together with protocol and semantic specifications for how its features must behave in any implementation.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
MPI has become the &#039;&#039;de facto&#039;&#039; standard approach for parallel programming in HPC.  MPI is a collection of well-defined library calls composing an Applications Program Interface (API) for transferring data (packaged as messages) between completely independent processes with independent address spaces.  These processes might be running within a single physical node, as &#039;&#039;required&#039;&#039; above with OpenMP, or distributed across nodes connected by an interconnect such as Gigabit Ethernet or InfiniBand.  MPI communication is generally two-sided, with both the sender and receiver of the data actively participating in the communication events.  Both point-to-point and collective communication (one-to-many; many-to-one; many-to-many) are supported.  MPI&#039;s goals are high performance, scalability, and portability.  MPI remains the dominant parallel programming model used in high-performance computing today, although it is sometimes criticized as difficult to program with.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;An Overview of the CUNY MPI Compilers and Batch Scheduler &#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;Sample Compilations and Production Batch Scripts&#039;&#039;&#039;&lt;br /&gt;
** Intel OpenMPI Parallel C&lt;br /&gt;
** Intel OpenMPI Parallel FORTRAN&lt;br /&gt;
** Intel OpenMPI SLURM Submit Script&lt;br /&gt;
** Portland Group OpenMPI Parallel C&lt;br /&gt;
** Portland Group OpenMPI Parallel FORTRAN&lt;br /&gt;
** Portland Group OpenMPI SLURM Submit Script&lt;br /&gt;
** Cray&#039;s Custom Gemini-based MPI Parallel C on SALK&lt;br /&gt;
** Cray MPI SLURM Submit Script&lt;br /&gt;
** GNU OpenMPI Parallel C&lt;br /&gt;
** GNU OpenMPI Parallel FORTRAN&lt;br /&gt;
** GNU OpenMPI SLURM Submit Script&lt;br /&gt;
** Other System-Local Custom Versions of the MPI Stack&lt;br /&gt;
* &#039;&#039;&#039;Setting Your Preferred MPI and Compiler Defaults &#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;Getting the Right Interconnect for High Performance MPI &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The original MPI-1 release was not designed with any special features to support traditional shared-memory or distributed, shared-memory parallel architectures, and MPI-2 provides only limited distributed, shared-memory support with some one-sided, remote direct memory access routines (RDMA).  Nonetheless, MPI programs are regularly run on shared memory computers because the MPI model is an architecture-neutral parallel programming paradigm.  Writing parallel programs using the MPI model (as opposed to shared-memory models such as OpenMP described above) requires the careful partitioning of program data among the communicating processes to minimize the communication events that can sap the performance of parallel applications, especially when they are run at larger scale (with more processors).&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center supports several versions of MPI, including proprietary versions from Intel, SGI, and Cray; however, with the exception of the Cray, CUNY HPC Center systems by default have standardized on the public domain release of MPI called OpenMPI (not to be confused with OpenMP [yes, this is confusing]).  While this version will not always perform as well as the proprietary versions mentioned above, it is a reliable version that can be run on most HPC cluster systems.  Among the systems currently running at the CUNY HPC Center, only the Cray (SALK) does not support OpenMPI.  It instead uses a custom version of MPICH2 based on Cray&#039;s Gemini interconnect communication protocol.  In the discussion below, we therefore emphasize OpenMPI (except in our treatment of MPI on the Cray) because it can be run on almost every system the CUNY HPC Center supports.  Details on how to use Intel&#039;s and SGI&#039;s proprietary MPIs, and on using MPICH, another public domain version of MPI will be added later.&lt;br /&gt;
&lt;br /&gt;
OpenMPI (completely different from and not to be confused with OpenMP described above) is a project combining technologies and resources from several previous MPI projects (FT-MPI, LA-MPI, LAM/MPI, and PACX-MPI) with the stated aim of building the best freely available MPI library. OpenMPI represents the merger between three well-known MPI implementations:&lt;br /&gt;
&lt;br /&gt;
:*FT-MPI from the University of Tennessee&lt;br /&gt;
:*LA-MPI from Los Alamos National Laboratory&lt;br /&gt;
:*LAM/MPI from Indiana University&lt;br /&gt;
&lt;br /&gt;
with contributions from the PACX-MPI team at the University of Stuttgart. These four institutions comprise the founding members of the OpenMPI development team,&lt;br /&gt;
which has grown to include many other active contributors and a very active user group.&lt;br /&gt;
&lt;br /&gt;
These MPI implementations were selected because OpenMPI developers thought that each excelled in one or more areas. The stated driving motivation behind OpenMPI is to bring the best ideas and technologies from the individual projects and create one world-class open source MPI implementation that excels in all areas. The OpenMPI project names several top-level goals:&lt;br /&gt;
&lt;br /&gt;
:*Create a free, open source software, peer-reviewed, production-quality complete MPI-2 implementation.&lt;br /&gt;
:*Provide extremely high, competitive performance (low latency or high bandwidth).&lt;br /&gt;
:*Directly involve the high-performance computing community with external development and feedback (vendors, 3rd party researchers, users, etc.).&lt;br /&gt;
:*Provide a stable platform for 3rd party research and commercial development.&lt;br /&gt;
:*Help prevent the &amp;quot;forking problem&amp;quot; common to other MPI projects.&lt;br /&gt;
:*Support a wide variety of high-performance computing platforms and environments.&lt;br /&gt;
&lt;br /&gt;
At the CUNY HPC Center, OpenMPI may be used to run jobs compiled with the Intel, PGI, or GNU compilers.  Two simple MPI programs, one written in C and another in Fortran, are shown below as examples.  For details on programming in MPI, users should consider attending the CUNY HPC MPI workshop (3 days in length), refer to the many online tutorials, or read one of the many books on the subject.  A good online tutorial on MPI can be found at LLNL here [https://computing.llnl.gov/tutorials/mpi].  A tutorial on parallel programming in general can be found here [https://computing.llnl.gov/tutorials/parallel_comp].&lt;br /&gt;
&lt;br /&gt;
Parallel implementations of the &amp;quot;Hello world!&amp;quot; program in C and Fortran are presented here to give the reader a feel for the look of MPI code.  These sample codes can be used&lt;br /&gt;
as test cases in the sections below describing parallel application compilation and job submission.  Again, refer to the tutorials mentioned above or attend the CUNY HPC&lt;br /&gt;
Center MPI workshop for details on MPI programming.&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Example 1.&#039;&#039;&#039; C Example (&#039;&#039;hello_mpi.c&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
/* include MPI specific data types and definitions */&lt;br /&gt;
#include &amp;lt;mpi.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main (int argc, char *argv[])&lt;br /&gt;
{&lt;br /&gt;
 int rank, size;&lt;br /&gt;
&lt;br /&gt;
/* set up the MPI runtime environment */&lt;br /&gt;
 MPI_Init (&amp;amp;argc, &amp;amp;argv);  &lt;br /&gt;
&lt;br /&gt;
/* get current process id */&lt;br /&gt;
 MPI_Comm_rank (MPI_COMM_WORLD, &amp;amp;rank);&lt;br /&gt;
&lt;br /&gt;
/* get number of processes */&lt;br /&gt;
 MPI_Comm_size (MPI_COMM_WORLD, &amp;amp;size);&lt;br /&gt;
&lt;br /&gt;
 printf( &amp;quot;Hello world from process %d of %d\n&amp;quot;, rank, size );&lt;br /&gt;
&lt;br /&gt;
/* break down the MPI runtime environment */&lt;br /&gt;
 MPI_Finalize();&lt;br /&gt;
&lt;br /&gt;
 return 0;&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Example 2.&#039;&#039;&#039; Fortran example (&#039;&#039;hello_mpi.f90&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
program hello&lt;br /&gt;
&lt;br /&gt;
! include MPI specific data types and definitions&lt;br /&gt;
include &#039;mpif.h&#039;&lt;br /&gt;
&lt;br /&gt;
integer rank, size, ierror, tag, status(MPI_STATUS_SIZE)&lt;br /&gt;
&lt;br /&gt;
! set up the MPI runtime environment&lt;br /&gt;
call MPI_INIT(ierror)&lt;br /&gt;
&lt;br /&gt;
! get current process id&lt;br /&gt;
call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierror)&lt;br /&gt;
&lt;br /&gt;
! get number of processes&lt;br /&gt;
call MPI_COMM_SIZE(MPI_COMM_WORLD, size, ierror)&lt;br /&gt;
&lt;br /&gt;
print*, &#039;Hello world from process &#039;, rank, &#039; of &#039;, size&lt;br /&gt;
&lt;br /&gt;
! break down the MPI runtime environment&lt;br /&gt;
call MPI_FINALIZE(ierror)&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An excellent and comprehensive tutorial on MPI with examples can be found at the Lawrence Livermore National Lab web site: https://computing.llnl.gov/tutorials/mpi&lt;br /&gt;
&lt;br /&gt;
=== An Overview of the CUNY MPI Compilers and Batch Scheduler ===&lt;br /&gt;
&lt;br /&gt;
SLURM Pro 11 is the batch scheduling and queuing system on all CUNY HPC Center systems.  It provides the collection of distributed compute resources used by all MPI (and other) jobs.  The SLURM Pro batch queues on all CUNY HPC Center systems (ANDY, SALK, and ZEUS) are largely identical in name and operation, although minimum and maximum job size and job number counts have been scaled to the size and intended job mix of each system.  For instance, on the Cray system (SALK) the queues are very similar, but not identical, and have been modified to emphasize large core-count jobs.  Still, submit scripts developed on one system should generally work on another with little or no modification.  Tuning for differences in the amount of memory per core or the number of cores per compute node can yield performance benefits or more rapid conversion from the queued (Q) to running (R) state.  The queue that production jobs should use on all these systems is the &#039;&#039;production&#039;&#039; queue (&#039;&#039;production_qdr&#039;&#039; on ANDY), which will be described later.  Development work should use the &#039;&#039;development&#039;&#039; queue, and interactive work should use the &#039;&#039;interactive&#039;&#039; queue.  The development and interactive queues have a small segment of each system&#039;s resources dedicated to them (except on ZEUS) and have a higher priority than the production queue.  For details on the SLURM Pro queues, please go to the detailed description of SLURM Pro presented below.&lt;br /&gt;
&lt;br /&gt;
Like the SLURM Pro queues, the default compiler and MPI stack are also the same on all four systems, making possible the transfer of scripts from one system to another with little or no editing (the Cray is again an exception, as mentioned above).  The default compilers are those released in February 2012 in Intel&#039;s Cluster Suite.  The default MPI currently in use is OpenMPI 1.5.5, released in the Spring of 2012 and compiled with the aforementioned Intel compilers.  Note that the default MPI was compiled with the site-default Intel compilers, but it is NOT Intel&#039;s MPI.  Intel&#039;s MPI is available if required, but is not the default.  In addition, the PGI 12.3 compiler suite (Spring 2012), which includes GPU acceleration via compiler directives, is also available, as is an OpenMPI 1.5.5 built with these PGI compilers.  A user can easily toggle from Intel to PGI and back with the help of CUNY-provided scripts (see below).  [NOTE: In the long run, the HPC Center intends to move to using the &#039;modules&#039; utility, as SALK (the Cray) already does.]  Finally, the compiler and parallel applications stack provided by the Rocks 5.3 roll used to build the CUNY HPC cluster systems is available.  This includes gcc 4.1.2 (on ANDY and SALK the &#039;gcc&#039; defaults are 4.3.4 and 4.3.2 respectively), and either the OpenMPI 1.3.3 or MPICH2 MPI stack.  In addition, on ANDY SGI provides its own high-performance MPI stack called MPT.  Users seeking maximum scalability on ANDY should consider using SGI&#039;s MPT MPI stack.&lt;br /&gt;
&lt;br /&gt;
Using the OpenMPI-derived compile and run wrapper commands (&#039;&#039;mpicc, mpif90, mpiCC, mpirun, etc.&#039;&#039;) without full Unix paths will deliver the default Intel-compiled versions of OpenMPI 1.5.5.  (NOTE: The Cray system, SALK, uses its own wrapper commands (&amp;quot;cc&amp;quot;, &amp;quot;CC&amp;quot;, &amp;quot;ftn&amp;quot;) and its &#039;aprun&#039; run command to compile and initiate MPI and other parallel jobs.  It is presented in more detail in a Cray-specific section below.)  To use the other compiler stacks, care should be taken to update the PATH, MANPATH, and LD_LIBRARY_PATH variables in the user&#039;s environment on both the head and compute nodes.  The scripts &#039;&#039;/etc/profile.d/smpi-defaults.[sh,csh]&#039;&#039; on each system can be adapted and placed in the appropriate &amp;quot;init&amp;quot; files (.bashrc, .cshrc, etc.) in the user&#039;s home directory to accomplish this.  (NOTE: Home directories on the head and compute nodes are identical, so setting new defaults on the head node should take care of this for all nodes.)  The options required by OpenMPI&#039;s &#039;&#039;mpirun&#039;&#039; should be the same regardless of the cluster used.  This should be true even on the systems with InfiniBand interconnects (ANDY) because of the way OpenMPI was built on those systems.  InfiniBand is selected automatically on the InfiniBand systems (ANDY) and Gigabit Ethernet on the Ethernet systems (ZEUS); if InfiniBand is not available, &amp;quot;mpirun&amp;quot; will report this and fall back to the Gigabit Ethernet interconnect.  Sample basic SLURM Pro batch scripts for running parallel jobs are provided here, but there is much more detail provided in the SLURM section below.&lt;br /&gt;
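For a bash user, the environment changes described above can be sketched as follows.  This is only a sketch: the installation prefix is the one quoted elsewhere on this page for the PGI-compiled OpenMPI, and the exact subdirectory layout (bin, lib, share/man) is an assumption you should verify on your system.&lt;br /&gt;
&lt;br /&gt;
```shell
# Sketch: prepend a non-default OpenMPI build (here the PGI-compiled one,
# assuming the /share/apps prefix quoted elsewhere on this page) to the
# search paths so its mpicc/mpif90/mpirun are found first.  Place these
# lines in your ~/.bashrc so they apply on both head and compute nodes.
BASE=/share/apps/openmpi-pgi/default
export PATH="$BASE/bin:$PATH"
export MANPATH="$BASE/share/man:$MANPATH"
export LD_LIBRARY_PATH="$BASE/lib:$LD_LIBRARY_PATH"

# Quick check that the wrapper directory is now first in the search order:
echo "${PATH%%:*}"    # prints: /share/apps/openmpi-pgi/default/bin
```
&lt;br /&gt;
After sourcing this, &#039;&#039;which mpicc&#039;&#039; should report the PGI-built wrapper rather than the Intel-built default.&lt;br /&gt;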
&lt;br /&gt;
===Sample Compilations and Production Batch Scripts===&lt;br /&gt;
&lt;br /&gt;
These examples could be used to compile the sample programs above and&lt;br /&gt;
should run consistently on all CUNY HPC Center systems except SALK, which&lt;br /&gt;
as mentioned has its own compiler wrappers.&lt;br /&gt;
&lt;br /&gt;
====OpenMPI (Intel compiler) Parallel C code ====&lt;br /&gt;
&lt;br /&gt;
Compilation (again, because the Intel-compiled version of OpenMPI is the default,&lt;br /&gt;
the full path shown here is NOT required):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/share/apps/openmpi-intel/default/bin/mpicc -o hello_mpi.exe ./hello_mpi.c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== OpenMPI (Intel compiler) Parallel FORTRAN code ====&lt;br /&gt;
&lt;br /&gt;
Compilation (again, because the Intel-compiled version of OpenMPI is the default,&lt;br /&gt;
the full path shown here is NOT required):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/share/apps/openmpi-intel/default/bin/mpif90 -o hello_mpi.exe ./hello_mpi.f90&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== OpenMPI (Intel compiler) SLURM Submit Script ====&lt;br /&gt;
&lt;br /&gt;
The script below (my_mpi.job) requests that SLURM schedule an 8 processor (core) job &lt;br /&gt;
and allows SLURM to freely distribute the 8 processors requested to any free nodes. &lt;br /&gt;
For details on the meaning of all the options in this script please see the full section&lt;br /&gt;
SLURM Pro section below.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SLURM -q production&lt;br /&gt;
#SLURM -N openmpi_intel&lt;br /&gt;
#SLURM -l select=8:ncpus=1&lt;br /&gt;
#SLURM -l place=free&lt;br /&gt;
#SLURM -V&lt;br /&gt;
&lt;br /&gt;
# Find out name of master execution host (compute node)&lt;br /&gt;
&lt;br /&gt;
echo -n &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; SLURM Master compute node is: &amp;quot;&lt;br /&gt;
hostname&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to your working directory in SLURM&lt;br /&gt;
# The SLURM_O_WORKDIR variable is automatically filled with the&lt;br /&gt;
# path to the directory you submit your job from&lt;br /&gt;
&lt;br /&gt;
cd $SLURM_O_WORKDIR&lt;br /&gt;
&lt;br /&gt;
# The SLURM_NODEFILE file contains the compute nodes assigned&lt;br /&gt;
# to your job by SLURM.  The next lines will display them.&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; SLURM Assigned these nodes to your job: &amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
cat $SLURM_NODEFILE&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Because OpenMPI compiled with the Intel compilers is the default,&lt;br /&gt;
# the full path here is NOT required.&lt;br /&gt;
&lt;br /&gt;
/share/apps/openmpi-intel/default/bin/mpirun -np 8 -machinefile $SLURM_NODEFILE ./hello_mpi.exe&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When submitted with &#039;qsub my_mpi.job&#039; a job ID is returned and output will be&lt;br /&gt;
written to the file called &#039;openmpi_intel.oXXXX&#039; where XXXX is the job ID.&lt;br /&gt;
Errors will be written to &#039;openmpi_intel.eXXXX&#039; where XXXX is the job ID.&lt;br /&gt;
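This file-naming convention can be illustrated with a small sketch; the job ID 1234 is purely illustrative (the real ID is the number printed at submission time):&lt;br /&gt;
&lt;br /&gt;
```shell
# Sketch of the batch output/error naming: <job name from the '-N' line>.o<ID>
# and <job name>.e<ID>.  The job ID 1234 is an illustrative stand-in.
jobname=openmpi_intel
jobid=1234
outfile="${jobname}.o${jobid}"   # standard output
errfile="${jobname}.e${jobid}"   # standard error
echo "$outfile $errfile"         # prints: openmpi_intel.o1234 openmpi_intel.e1234
```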
&lt;br /&gt;
MPI hello world output:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt;&amp;gt; SLURM Master compute node is: r1i0n6&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt;&amp;gt; SLURM Assigned these nodes to your job: &lt;br /&gt;
&lt;br /&gt;
r1i0n6&lt;br /&gt;
r1i0n7&lt;br /&gt;
r1i0n8&lt;br /&gt;
r1i0n9&lt;br /&gt;
r1i0n10&lt;br /&gt;
r1i0n14&lt;br /&gt;
r1i1n0&lt;br /&gt;
r1i1n1&lt;br /&gt;
&lt;br /&gt;
Hello world from process 0 of 8&lt;br /&gt;
Hello world from process 7 of 8&lt;br /&gt;
Hello world from process 5 of 8&lt;br /&gt;
Hello world from process 4 of 8&lt;br /&gt;
Hello world from process 6 of 8&lt;br /&gt;
Hello world from process 3 of 8&lt;br /&gt;
Hello world from process 1 of 8&lt;br /&gt;
Hello world from process 2 of 8&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== OpenMPI (Portland Group compiler) Parallel C code ====&lt;br /&gt;
&lt;br /&gt;
Compilation (because this is NOT the default compiler, the full path is shown).  The full PGI environment would&lt;br /&gt;
still need to be toggled on to ensure a clean compile and execution under the PGI environment (see below):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/share/apps/openmpi-pgi/default/bin/mpicc -o hello_mpi.exe ./hello_mpi.c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====  OpenMPI (Portland Group compiler) Parallel FORTRAN code ====&lt;br /&gt;
&lt;br /&gt;
Compilation (again, because this is NOT the default compiler, the full path is shown).  The full PGI environment&lt;br /&gt;
would still need to be toggled on to ensure a clean compile and execution under the PGI environment (see below):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/share/apps/openmpi-pgi/default/bin/mpif90 -o hello_mpi.exe ./hello_mpi.f90&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====  OpenMPI (Portland Group compiler) SLURM Submit Script ====&lt;br /&gt;
&lt;br /&gt;
The script below (my_mpi.job) requests that SLURM schedule an 8 processor (core) job &lt;br /&gt;
and allows SLURM to freely distribute the 8 processors requested to any free nodes. &lt;br /&gt;
(Note: the only real difference between this script and the Intel script above is in the&lt;br /&gt;
path to the &#039;&#039;mpirun&#039;&#039; command.)  For details on the meaning of all the options in this&lt;br /&gt;
script please see the full SLURM Pro section below.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SLURM -q production&lt;br /&gt;
#SLURM -N openmpi_pgi&lt;br /&gt;
#SLURM -l select=8:ncpus=1&lt;br /&gt;
#SLURM -l place=free&lt;br /&gt;
#SLURM -V&lt;br /&gt;
&lt;br /&gt;
# Find out name of master execution host (compute node)&lt;br /&gt;
&lt;br /&gt;
echo -n &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; SLURM Master compute node is: &amp;quot;&lt;br /&gt;
hostname&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to your working directory in SLURM&lt;br /&gt;
# The SLURM_O_WORKDIR variable is automatically filled with the &lt;br /&gt;
# path to the directory you submit your job from&lt;br /&gt;
&lt;br /&gt;
cd $SLURM_O_WORKDIR&lt;br /&gt;
&lt;br /&gt;
# The SLURM_NODEFILE file contains the compute nodes assigned&lt;br /&gt;
# to the job by SLURM.  The next lines will display them.&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; SLURM Assigned these nodes to your job: &amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
cat $SLURM_NODEFILE&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Because OpenMPI PGI is NOT the default, the full path is shown,&lt;br /&gt;
# but this does not guarantee a clean run. You must also ensure that&lt;br /&gt;
# the environment has been toggled to use PGI within your init files.&lt;br /&gt;
# (see section below).&lt;br /&gt;
&lt;br /&gt;
/share/apps/openmpi-pgi/default/bin/mpirun -np 8 -machinefile $SLURM_NODEFILE ./hello_mpi.exe&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When submitted with &#039;qsub my_mpi.job&#039; a job ID is returned and output will be&lt;br /&gt;
written to the file called &#039;openmpi_pgi.oXXXX&#039; where XXXX is the job ID.&lt;br /&gt;
Errors will be written to &#039;openmpi_pgi.eXXXX&#039; where XXXX is the job ID.&lt;br /&gt;
&lt;br /&gt;
MPI hello world output:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt;&amp;gt; SLURM Master compute node is: r1i0n12&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt;&amp;gt; SLURM Assigned these nodes to your job:&lt;br /&gt;
&lt;br /&gt;
r1i0n12&lt;br /&gt;
r1i0n7&lt;br /&gt;
r1i0n8&lt;br /&gt;
r1i0n9&lt;br /&gt;
r1i0n10&lt;br /&gt;
r1i0n14&lt;br /&gt;
r1i1n0&lt;br /&gt;
r1i1n1&lt;br /&gt;
&lt;br /&gt;
Hello world from process 0 of 8&lt;br /&gt;
Hello world from process 7 of 8&lt;br /&gt;
Hello world from process 5 of 8&lt;br /&gt;
Hello world from process 4 of 8&lt;br /&gt;
Hello world from process 6 of 8&lt;br /&gt;
Hello world from process 3 of 8&lt;br /&gt;
Hello world from process 1 of 8&lt;br /&gt;
Hello world from process 2 of 8&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Cray&#039;s Custom Gemini-based MPI Parallel C on SALK ====&lt;br /&gt;
&lt;br /&gt;
On SALK, the Cray compiler-linker wrappers (&amp;quot;cc&amp;quot;, &amp;quot;CC&amp;quot;, &amp;quot;ftn&amp;quot;) ensure that all required Cray-specific libraries are linked &lt;br /&gt;
in on the Cray in much the same way that the OpenMPI compiler-linker wrappers link in the correct OpenMPI libraries&lt;br /&gt;
on our other systems.   By default the environment is set to point to the Cray compilers and Cray programming environment&lt;br /&gt;
on SALK.  Unlike our other systems, SALK (the  Cray) uses the &#039;modules&#039; system of commands to manage, set, and reset&lt;br /&gt;
the environment.   Here is the list of the default modules that are loaded when a user logs in to SALK (as of 9-15-12):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
my.name@salk:~&amp;gt; module list&lt;br /&gt;
&lt;br /&gt;
Currently Loaded Modulefiles:&lt;br /&gt;
  1) modules/3.2.6.6&lt;br /&gt;
  2) nodestat/2.2-1.0301.23102.11.16.gem&lt;br /&gt;
  3) sdb/1.0-1.0301.25351.21.29.gem&lt;br /&gt;
  4) MySQL/5.0.64-1.0301.2899.20.4.gem&lt;br /&gt;
  5) lustre-cray_gem_s/1.8.4_2.6.27.48_0.12.1_1.0301.5737.21.1-1.0301.25792.1.99&lt;br /&gt;
  6) udreg/2.2-1.0301.2966.16.2.gem&lt;br /&gt;
  7) ugni/2.1-1.0301.2967.10.24.gem&lt;br /&gt;
  8) gni-headers/2.1-1.0301.3065.20.1.gem&lt;br /&gt;
  9) dmapp/3.0-1.0301.2968.22.24.gem&lt;br /&gt;
 10) xpmem/0.1-2.0301.25333.20.2.gem&lt;br /&gt;
 11) Base-opts/1.0.2-1.0301.24586.10.2.gem&lt;br /&gt;
 12) xtpe-network-gemini&lt;br /&gt;
 13) cce/8.0.0&lt;br /&gt;
 14) acml/4.4.0&lt;br /&gt;
 15) xt-libsci/11.0.04&lt;br /&gt;
 16) xt-mpich2/5.4.1&lt;br /&gt;
 17) pmi/3.0.0-1.0000.8661.28.2807.gem&lt;br /&gt;
 18) rca/1.0.0-2.0301.23291.6.22.gem&lt;br /&gt;
 19) xt-asyncpe/5.05&lt;br /&gt;
 20) atp/1.4.1&lt;br /&gt;
 21) PrgEnv-cray/3.1.61&lt;br /&gt;
 22) xtpe-mc8&lt;br /&gt;
 23) SLURM/11.3.0.121723&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The default path used by the &amp;quot;cc&amp;quot; wrapper on the Cray when the &#039;PrgEnv-cray&#039; programming&lt;br /&gt;
environment is loaded (item 21 above) is shown below.  Note that each compiler stack on the&lt;br /&gt;
Cray is stored in its own subdirectory in &#039;/opt&#039; on the Cray (/opt/cray, /opt/intel, /opt/pgi, /opt/gnu).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
my.name@salk:~&amp;gt; type cc&lt;br /&gt;
cc is /opt/cray/xt-asyncpe/5.05/bin/cc&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Compilation (because the Cray module is loaded, the full path is NOT required):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/opt/cray/xt-asyncpe/5.05/bin/cc -o hello_mpi.exe ./hello_mpi.c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Cray&#039;s Custom Gemini-based MPI Parallel FORTRAN ====&lt;br /&gt;
&lt;br /&gt;
On SALK, the Cray compiler-linker wrappers (&amp;quot;cc&amp;quot;, &amp;quot;CC&amp;quot;, &amp;quot;ftn&amp;quot;) ensure that all required Cray-specific libraries are linked &lt;br /&gt;
in on the Cray in much the same way that the OpenMPI compiler-linker wrappers link in the correct OpenMPI libraries&lt;br /&gt;
on our other systems.   By default the environment is set to point to the Cray compilers and Cray programming environment&lt;br /&gt;
on SALK.  Unlike our other systems, SALK (the  Cray) uses the &#039;modules&#039; system of commands to manage, set, and reset&lt;br /&gt;
the environment.  Look above in the section on SALK C compilation for a listing of the modules that are loaded by default on&lt;br /&gt;
SALK.&lt;br /&gt;
&lt;br /&gt;
The default path used by the &amp;quot;ftn&amp;quot; wrapper on the Cray when the &#039;PrgEnv-cray&#039; programming environment is loaded (item 21&lt;br /&gt;
above) is shown below.  Note that each compiler stack supported on the Cray is stored in its own subdirectory in &#039;/opt&#039; on&lt;br /&gt;
the Cray (/opt/cray, /opt/intel, /opt/pgi, /opt/gnu).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
my.name@salk:~&amp;gt; type ftn&lt;br /&gt;
ftn is /opt/cray/xt-asyncpe/5.05/bin/ftn&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Compilation (because the Cray module is loaded, the full path is NOT required):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/opt/cray/xt-asyncpe/5.05/bin/ftn -o hello_mpi.exe ./hello_mpi.f90&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Cray MPI SLURM Submit Script ====&lt;br /&gt;
&lt;br /&gt;
The script below (my_mpi.job) requests that SLURM schedule a 16 processor (core) job &lt;br /&gt;
and allows SLURM to freely distribute those 16 processors to any free compute&lt;br /&gt;
node.  This script asks for 16 cores instead of 8, because the smallest production job&lt;br /&gt;
allowed on the Cray (SALK) is 16 cores. (NOTE: smaller core-count and serial jobs can&lt;br /&gt;
only be run in the SLURM &#039;development&#039; queue on the Cray). For details on the meaning of&lt;br /&gt;
all the options in this script please see the full SLURM Pro section below.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SLURM -q production&lt;br /&gt;
#SLURM -N craympi&lt;br /&gt;
#SLURM -l select=16:ncpus=1&lt;br /&gt;
#SLURM -l place=free&lt;br /&gt;
#SLURM -j oe&lt;br /&gt;
#SLURM -o craympi.out&lt;br /&gt;
#SLURM -V&lt;br /&gt;
&lt;br /&gt;
# Find out name of master execution host (compute node)&lt;br /&gt;
&lt;br /&gt;
echo -n &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; SLURM Master compute node is: &amp;quot;&lt;br /&gt;
hostname&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to your working directory in SLURM&lt;br /&gt;
# The SLURM_O_WORKDIR variable is automatically filled with the &lt;br /&gt;
# path to the directory you submit your job from&lt;br /&gt;
&lt;br /&gt;
cd $SLURM_O_WORKDIR&lt;br /&gt;
&lt;br /&gt;
# The SLURM_NODEFILE file contains the compute nodes assigned&lt;br /&gt;
# to the job by SLURM.  The next lines will display them.&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; SLURM Assigned these nodes to your job: &amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
cat $SLURM_NODEFILE&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# On the Cray one must use Cray&#039;s aprun command to submit SLURM&lt;br /&gt;
# jobs.  There is no &#039;mpirun&#039; command.  Cray users should carefully&lt;br /&gt;
# read the &#039;aprun&#039; man page.&lt;br /&gt;
&lt;br /&gt;
aprun -n 16 -N 16  ./hello_mpi.exe&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
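The relationship between aprun&#039;s -n (total MPI ranks) and -N (ranks per compute node) flags can be sketched with a small calculation.  This is an illustration only; see &#039;man aprun&#039; for the authoritative flag descriptions.&lt;br /&gt;
&lt;br /&gt;
```shell
# Sketch: with -n total ranks and -N ranks per node, a job needs
# ceil(n / N) compute nodes.  For the script above, -n 16 -N 16
# packs all 16 ranks onto a single compute node.
n=16
N=16
nodes=$(( (n + N - 1) / N ))   # integer ceiling division
echo "$nodes"                  # prints: 1
```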
&lt;br /&gt;
Similarly to the other MPI SLURM scripts above, when submitted with &#039;qsub my_mpi.job&#039; a job ID&lt;br /&gt;
is returned, but unlike on the other systems the output and error files will be merged and&lt;br /&gt;
written to the file &#039;craympi.out&#039;.  This difference works around a SLURM problem the Cray&lt;br /&gt;
has in writing output to the default, implicitly named output and error files.&lt;br /&gt;
&lt;br /&gt;
Note the presence of the &#039;aprun&#039; command, which replaces &#039;mpirun&#039; in this Cray-specific&lt;br /&gt;
SLURM submit script.  More can be found on &#039;aprun&#039; and its command-line options below&lt;br /&gt;
and by entering the command &#039;man aprun&#039; on the Cray (SALK).  The CUNY HPC Center staff&lt;br /&gt;
highly recommends that users read the &#039;aprun&#039; man page.  SALK requires the following&lt;br /&gt;
preparation step to place SLURM and its commands into your working environment.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
module load SLURM&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This should be placed into your shell&#039;s &#039;init&#039; file to ensure that it occurs automatically.&lt;br /&gt;
&lt;br /&gt;
MPI hello world output:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Hello world from process 2 of 8&lt;br /&gt;
Hello world from process 3 of 8&lt;br /&gt;
Hello world from process 4 of 8&lt;br /&gt;
Hello world from process 1 of 8&lt;br /&gt;
Hello world from process 6 of 8&lt;br /&gt;
Hello world from process 0 of 8&lt;br /&gt;
Hello world from process 5 of 8&lt;br /&gt;
Hello world from process 7 of 8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== OpenMPI (GNU compiler) Parallel C ====&lt;br /&gt;
Coming soon.&lt;br /&gt;
&lt;br /&gt;
==== OpenMPI (GNU compiler) Parallel FORTRAN ====&lt;br /&gt;
Coming soon.&lt;br /&gt;
&lt;br /&gt;
==== OpenMPI (GNU compiler) SLURM Submit Script ====&lt;br /&gt;
&lt;br /&gt;
This script sends SLURM an 8 processor (core) job, allowing SLURM to freely distribute&lt;br /&gt;
the 8 processors to the least loaded nodes.  (Note: the only real difference between this&lt;br /&gt;
script and the Intel script above is in the path to the &#039;&#039;mpirun&#039;&#039; command.)  For details&lt;br /&gt;
on the meaning of all the options in this script please see the full SLURM Pro section below.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SLURM -q production&lt;br /&gt;
#SLURM -N openmpi_gnu&lt;br /&gt;
#SLURM -l select=8:ncpus=1&lt;br /&gt;
#SLURM -l place=free&lt;br /&gt;
#SLURM -V&lt;br /&gt;
&lt;br /&gt;
# Find out name of master execution host (compute node)&lt;br /&gt;
&lt;br /&gt;
echo -n &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; SLURM Master compute node is: &amp;quot;&lt;br /&gt;
hostname&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to your working directory in SLURM&lt;br /&gt;
# The SLURM_O_WORKDIR variable is automatically filled with the path &lt;br /&gt;
# to the directory you submit your job from&lt;br /&gt;
&lt;br /&gt;
cd $SLURM_O_WORKDIR&lt;br /&gt;
&lt;br /&gt;
# The SLURM_NODEFILE file contains the compute nodes assigned&lt;br /&gt;
# to the job by SLURM.  The next lines will display them.&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; SLURM Assigned these nodes to your job: &amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
cat $SLURM_NODEFILE&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Because OpenMPI GNU is NOT the default, the full path is shown here,&lt;br /&gt;
# but this does not guarantee a clean run. You must ensure that the&lt;br /&gt;
# environment has been toggled to GNU either in this batch script or&lt;br /&gt;
# within your init files (see section below).&lt;br /&gt;
&lt;br /&gt;
/opt/openmpi/bin/mpirun -np 8 -machinefile $SLURM_NODEFILE ./hello_mpi.exe&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When submitted with &#039;qsub my_mpi.job&#039; a job ID is returned and output will be&lt;br /&gt;
written to the file called &#039;openmpi_gnu.oXXXX&#039; where XXXX is the job ID.&lt;br /&gt;
Errors will be written to &#039;openmpi_gnu.eXXXX&#039; where XXXX is the job ID.&lt;br /&gt;
&lt;br /&gt;
MPI hello world output:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt;&amp;gt; SLURM Master compute node is: r1i0n3&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt;&amp;gt; SLURM Assigned these nodes to your job:&lt;br /&gt;
&lt;br /&gt;
r1i0n3&lt;br /&gt;
r1i0n7&lt;br /&gt;
r1i0n8&lt;br /&gt;
r1i0n9&lt;br /&gt;
r1i0n10&lt;br /&gt;
r1i0n14&lt;br /&gt;
r1i1n0&lt;br /&gt;
r1i1n1&lt;br /&gt;
&lt;br /&gt;
Hello world from process 0 of 8&lt;br /&gt;
Hello world from process 7 of 8&lt;br /&gt;
Hello world from process 5 of 8&lt;br /&gt;
Hello world from process 4 of 8&lt;br /&gt;
Hello world from process 6 of 8&lt;br /&gt;
Hello world from process 3 of 8&lt;br /&gt;
Hello world from process 1 of 8&lt;br /&gt;
Hello world from process 2 of 8&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NOTE:&#039;&#039;&#039; The paths used above for the gcc version of OpenMPI apply only to ZEUS, which&lt;br /&gt;
has a GE interconnect.  On BOB, the path to the InfiniBand version of the gcc OpenMPI commands&lt;br /&gt;
and libraries is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/usr/mpi/gcc/openmpi-1.2.8/[bin,lib]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Other System-Local Custom Versions of the MPI Stack ====&lt;br /&gt;
&lt;br /&gt;
The GNU version of MPI is NOT available on ANDY, which is a SUSE 11.2 system from SGI, not based on Rocks&lt;br /&gt;
and Red Hat.  SGI&#039;s optimized version of MPI, called MPT, is available on ANDY in addition to OpenMPI.  SGI&#039;s&lt;br /&gt;
MPT include files, libraries (needed at the link stage of a compilation), and its &#039;mpirun&#039; command are located in:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/opt/sgi/mpt/mpt-2.02&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The distribution in this directory can be used to compile and run SGI MPT-based MPI applications.&lt;br /&gt;
&lt;br /&gt;
On occasion, using MPT is required.  For instance, applications distributed pre-built as binaries using MPT&lt;br /&gt;
need to be run with MPT as their MPI.  One such HPC Center application is ADF.  Here, we provide an example&lt;br /&gt;
SLURM script to run ADF jobs that sets the environment completely to use SGI&#039;s MPT version of MPI.&lt;br /&gt;
&lt;br /&gt;
This script can be adapted to run other applications compiled and linked to SGI&#039;s MPT library.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# This script runs a 4-cpu (core) ADF job with the 4 cpus &lt;br /&gt;
# packed onto a single compute node. This script requests&lt;br /&gt;
# only one half of the resources on an ANDY compute node&lt;br /&gt;
# (4 cores, 1 half its memory). &lt;br /&gt;
#&lt;br /&gt;
# The HCN_4P.inp deck in this directory is configured to work&lt;br /&gt;
# with these resources, although this computation is really &lt;br /&gt;
# too small to make full use of them. To increase or decrease&lt;br /&gt;
# the resources SLURM requests (cpus, memory, or disk) change the &lt;br /&gt;
# &#039;-l select&#039; line below and the parameter values in the input deck.&lt;br /&gt;
#&lt;br /&gt;
#SLURM -q production_qdr&lt;br /&gt;
#SLURM -N adf_4P_job&lt;br /&gt;
#SLURM -l select=1:ncpus=4:mem=11520mb:lscratch=400gb&lt;br /&gt;
#SLURM -l place=free&lt;br /&gt;
#SLURM -V&lt;br /&gt;
&lt;br /&gt;
# Find out name of master execution host (compute node)&lt;br /&gt;
echo -n &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; SLURM Master compute node is: &amp;quot;&lt;br /&gt;
hostname&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to the working directory in SLURM&lt;br /&gt;
cd $SLURM_O_WORKDIR&lt;br /&gt;
&lt;br /&gt;
# set environment up to use SGI&#039;s MPT version of MPI&lt;br /&gt;
BASEPATH=/opt/sgi/mpt/mpt-2.02&lt;br /&gt;
&lt;br /&gt;
export PATH=${BASEPATH}/bin:${PATH}&lt;br /&gt;
export CPATH=${BASEPATH}/include:${CPATH}&lt;br /&gt;
export FPATH=${BASEPATH}/include:${FPATH}&lt;br /&gt;
export LD_LIBRARY_PATH=${BASEPATH}/lib:${LD_LIBRARY_PATH}&lt;br /&gt;
export LIBRARY_PATH=${BASEPATH}/lib:${LIBRARY_PATH}&lt;br /&gt;
export MPI_ROOT=${BASEPATH}&lt;br /&gt;
&lt;br /&gt;
# set the ADF root directory&lt;br /&gt;
export ADFROOT=/share/apps/adf&lt;br /&gt;
export ADFHOME=${ADFROOT}/2012.01&lt;br /&gt;
&lt;br /&gt;
# point ADF to the ADF license file&lt;br /&gt;
export SCMLICENSE=${ADFHOME}/license.txt&lt;br /&gt;
&lt;br /&gt;
# set up ADF scratch directory &lt;br /&gt;
export MY_SCRDIR=`whoami;date &#039;+%m.%d.%y_%H:%M:%S&#039;`&lt;br /&gt;
export MY_SCRDIR=`echo $MY_SCRDIR | sed -e &#039;s; ;_;&#039;`&lt;br /&gt;
export SCM_TMPDIR=/home/adf/adf_scr/${MY_SCRDIR}_$$&lt;br /&gt;
&lt;br /&gt;
mkdir -p $SCM_TMPDIR&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
echo &amp;quot;The ADF scratch files for this job are in: ${SCM_TMPDIR}&amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# check important paths&lt;br /&gt;
#type mpirun&lt;br /&gt;
#type adf&lt;br /&gt;
&lt;br /&gt;
# set the number processors to use in this job to 4&lt;br /&gt;
export NSCM=4&lt;br /&gt;
&lt;br /&gt;
# run the ADF job&lt;br /&gt;
echo &amp;quot;Starting ADF job ... &amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
adf -n 4 &amp;lt; HCN_4P.inp &amp;gt; HCN_4P.out &lt;br /&gt;
&lt;br /&gt;
# name output files&lt;br /&gt;
mv logfile HCN_4P.logfile&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
echo &amp;quot;ADF job finished ... &amp;quot;&lt;br /&gt;
&lt;br /&gt;
# clean up scratch directory files&lt;br /&gt;
/bin/rm -r $SCM_TMPDIR&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Compiling MPI programs for SGI&#039;s MPT does not use the MPI wrapper commands (mpicc, mpif90) that OpenMPI provides; instead,&lt;br /&gt;
compiler and linker flags are given directly to the native compilers (icc, pgcc).  For more information on using SGI&#039;s MPT,&lt;br /&gt;
please inquire with the HPC Center staff or consult the SGI documentation.&lt;br /&gt;
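&lt;br /&gt;
As a rough illustration only (the exact flags may vary with the compiler and MPT release; the source file name here is hypothetical), an MPI program can be compiled against MPT by pointing the native compiler at the MPT headers and libraries and linking with -lmpi:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Hypothetical example: compile against the MPT headers and libraries, linking with -lmpi&lt;br /&gt;
icc -I/opt/sgi/mpt/mpt-2.02/include -L/opt/sgi/mpt/mpt-2.02/lib -o hello_mpi.exe hello_mpi.c -lmpi&lt;br /&gt;
&lt;br /&gt;
# Run with MPT&#039;s own &#039;mpirun&#039;&lt;br /&gt;
mpirun -np 8 ./hello_mpi.exe&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;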
&lt;br /&gt;
=== Setting Your Preferred MPI and Compiler Defaults ===&lt;br /&gt;
&lt;br /&gt;
As mentioned above the default version of MPI on the CUNY HPC Center clusters&lt;br /&gt;
is OpenMPI 1.5.5 compiled with the Intel compilers.  This default is set by scripts&lt;br /&gt;
in the /etc/profile.d directory (i.e. smpi-defaults.[sh,csh]).   When the mpi-wrapper&lt;br /&gt;
commands (mpicc, mpif90, mpirun, etc.) are used WITHOUT full path prefixes, these&lt;br /&gt;
Intel defaults will be invoked.  To use either of the other supported MPI environments&lt;br /&gt;
(OpenMPI compiled with the PGI compilers, or OpenMPI compiled with the GNU&lt;br /&gt;
compilers) users should set their local environment either in their home directory&lt;br /&gt;
init files (i.e. .bashrc, .cshrc) or manually in their batch scripts.  The script provided&lt;br /&gt;
below can be used for this.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;WARNING: Full path references to non-default mpi-commands will NOT by themselves guarantee&#039;&#039;&#039;&lt;br /&gt;
&#039;&#039;&#039;error-free compiles and runs because of the way OpenMPI references the environment it runs in!&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
CUNY HPC Center staff recommend fully toggling the site default environment away from&lt;br /&gt;
Intel to PGI or GNU when the non-default environments are preferred.  This can be done relatively&lt;br /&gt;
easily by commenting out the default and commenting in one of the preferred alternatives&lt;br /&gt;
referenced in the script provided below.  Users may copy the script &#039;&#039;smpi-defaults.sh&#039;&#039; &lt;br /&gt;
(or &#039;&#039;smpi-defaults.csh&#039;&#039;) from /etc/profile.d.  A copy is provided here for reference.  (NOTE:&lt;br /&gt;
This discussion does NOT apply on the Cray which uses the &#039;modules&#039; system to manage&lt;br /&gt;
its default applications environment.)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# general path settings &lt;br /&gt;
#PATH=/opt/openmpi/bin:$PATH&lt;br /&gt;
#PATH=/usr/mpi/gcc/openmpi-1.2.8/bin:$PATH&lt;br /&gt;
#PATH=/share/apps/openmpi-pgi/default/bin:$PATH&lt;br /&gt;
#PATH=/share/apps/openmpi-intel/default/bin:$PATH&lt;br /&gt;
export PATH&lt;br /&gt;
&lt;br /&gt;
# man path settings &lt;br /&gt;
#MANPATH=/opt/openmpi/share/man:$MANPATH&lt;br /&gt;
#MANPATH=/usr/mpi/gcc/openmpi-1.2.8/share/man:$MANPATH&lt;br /&gt;
#MANPATH=/share/apps/openmpi-pgi/default/share/man:$MANPATH&lt;br /&gt;
#MANPATH=/share/apps/openmpi-intel/default/share/man:$MANPATH&lt;br /&gt;
export MANPATH&lt;br /&gt;
&lt;br /&gt;
# library path settings &lt;br /&gt;
#LD_LIBRARY_PATH=/opt/openmpi/lib:$LD_LIBRARY_PATH&lt;br /&gt;
#LD_LIBRARY_PATH=/usr/mpi/gcc/openmpi-1.2.8/lib:$LD_LIBRARY_PATH&lt;br /&gt;
#LD_LIBRARY_PATH=/share/apps/openmpi-pgi/default/lib:$LD_LIBRARY_PATH&lt;br /&gt;
#LD_LIBRARY_PATH=/share/apps/openmpi-intel/default/lib:$LD_LIBRARY_PATH&lt;br /&gt;
export LD_LIBRARY_PATH&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By selectively commenting in the appropriate line in each paragraph above the&lt;br /&gt;
default PATH, MANPATH, and LD_LIBRARY_PATH can be set to the MPI compilation&lt;br /&gt;
stack that the user prefers.  The right place to do this is inside the user&#039;s .bashrc&lt;br /&gt;
file (or .cshrc file in the C-shell) in the user&#039;s HOME directory.  Once done, full path&lt;br /&gt;
references in the SLURM submit scripts listed above become unnecessary, and one script&lt;br /&gt;
would work for any compilation stack.&lt;br /&gt;
&lt;br /&gt;
This approach can also be used to set the MPI environment to older, non-default versions&lt;br /&gt;
of OpenMPI still installed in /share/apps/openmpi-[intel,pgi].&lt;br /&gt;
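&lt;br /&gt;
For example, to make the gcc OpenMPI stack from the listing above the default, the relevant lines could be placed (uncommented) directly in ~/.bashrc:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Select the gcc OpenMPI stack as the per-user default&lt;br /&gt;
export PATH=/usr/mpi/gcc/openmpi-1.2.8/bin:$PATH&lt;br /&gt;
export MANPATH=/usr/mpi/gcc/openmpi-1.2.8/share/man:$MANPATH&lt;br /&gt;
export LD_LIBRARY_PATH=/usr/mpi/gcc/openmpi-1.2.8/lib:$LD_LIBRARY_PATH&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;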
&lt;br /&gt;
=== Getting the Right Interconnect for High Performance MPI ===&lt;br /&gt;
&lt;br /&gt;
A few comments should be made about interconnect control and selection under&lt;br /&gt;
OpenMPI.  First, this question applies ONLY to ANDY and HERBERT which have both InfiniBand&lt;br /&gt;
and Gigabit Ethernet interconnects.  InfiniBand provides both greater bandwidth and&lt;br /&gt;
lower latencies than Gigabit Ethernet, and it should be chosen on these systems because&lt;br /&gt;
it will deliver better performance at a given processor count and greater application&lt;br /&gt;
scalability.&lt;br /&gt;
&lt;br /&gt;
Both the Intel and Portland Group versions of OpenMPI installed on both ANDY and&lt;br /&gt;
HERBERT have been compiled to include the OpenIB  libraries.  This means that by default the&lt;br /&gt;
&#039;&#039;mpirun&#039;&#039; command will attempt to use the OpenIB libraries at runtime without any special&lt;br /&gt;
options.  If this cannot be done because no InfiniBand devices can be found, a runtime&lt;br /&gt;
error message will be reported in SLURM&#039;s error file, and &#039;&#039;mpirun&#039;&#039; will attempt to use&lt;br /&gt;
other libraries and interfaces (namely Gigabit Ethernet, which is TCP/IP based) to run the&lt;br /&gt;
job.  If successful, the job will run to completion, but perform in a sub-optimal way.  &lt;br /&gt;
&lt;br /&gt;
To avoid this, or to establish with certainty which communication libraries and devices&lt;br /&gt;
are being used by your job, there are options that can be used with &#039;&#039;mpirun&#039;&#039; to force&lt;br /&gt;
the choice of one communication device, or the other.&lt;br /&gt;
&lt;br /&gt;
To &#039;&#039;&#039;force&#039;&#039;&#039; the job to use the OpenIB interface (ib0) or fail, use:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mpirun  -mca btl openib,self -np  8 -machinefile $SLURM_NODEFILE ./hello_mpi.exe&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To &#039;&#039;&#039;force&#039;&#039;&#039; the job to use the Gigabit Ethernet interface (eth0) or fail, use:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mpirun  -mca btl tcp,self -np  8 -machinefile $SLURM_NODEFILE ./hello_mpi.exe&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
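&lt;br /&gt;
To verify which transport &#039;&#039;mpirun&#039;&#039; actually selected, OpenMPI&#039;s MCA verbosity setting can be added to either command (the exact output format varies by OpenMPI release):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Print BTL selection details to stderr while forcing InfiniBand&lt;br /&gt;
mpirun -mca btl openib,self -mca btl_base_verbose 30 -np 8 -machinefile $SLURM_NODEFILE ./hello_mpi.exe&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;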
&lt;br /&gt;
Note, this discussion does not apply on the Cray, which uses its own proprietary Gemini&lt;br /&gt;
interconnect. It is worth noting that the Cray&#039;s interconnect is not switch-based like&lt;br /&gt;
the other systems, but rather a 3D torus for which being aware of job placement&lt;br /&gt;
on the mesh can be an important consideration when tuning a job for performance&lt;br /&gt;
at scale.&lt;br /&gt;
&lt;br /&gt;
== [[GPU Parallel Program Compilation and SLURM Job Submission ]]==&lt;br /&gt;
The CUNY HPC Center supports computing with Graphics Processing Units (GPUs).  GPUs can be thought of&lt;br /&gt;
as highly parallel co-processors (or accelerators) connected to a node&#039;s CPUs via a PCI Express bus.  The&lt;br /&gt;
HPC Center provides GPU accelerators on PENZIAS, which has 144 NVIDIA Tesla K20m GPUs (two per compute node).  &lt;br /&gt;
Specifications of each GPU (as found by the &#039;deviceQuery&#039; utility) are as follows:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
./deviceQuery Starting...&lt;br /&gt;
&lt;br /&gt;
 CUDA Device Query (Runtime API) version (CUDART static linking)&lt;br /&gt;
&lt;br /&gt;
Detected 2 CUDA Capable device(s)&lt;br /&gt;
&lt;br /&gt;
Device 0: &amp;quot;Tesla K20m&amp;quot;&lt;br /&gt;
  CUDA Driver Version / Runtime Version          5.5 / 5.5&lt;br /&gt;
  CUDA Capability Major/Minor version number:    3.5&lt;br /&gt;
  Total amount of global memory:                 4800 MBytes (5032706048 bytes)&lt;br /&gt;
  (13) Multiprocessors, (192) CUDA Cores/MP:     2496 CUDA Cores&lt;br /&gt;
  GPU Clock rate:                                706 MHz (0.71 GHz)&lt;br /&gt;
  Memory Clock rate:                             2600 Mhz&lt;br /&gt;
  Memory Bus Width:                              320-bit&lt;br /&gt;
  L2 Cache Size:                                 1310720 bytes&lt;br /&gt;
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)&lt;br /&gt;
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers&lt;br /&gt;
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers&lt;br /&gt;
  Total amount of constant memory:               65536 bytes&lt;br /&gt;
  Total amount of shared memory per block:       49152 bytes&lt;br /&gt;
  Total number of registers available per block: 65536&lt;br /&gt;
  Warp size:                                     32&lt;br /&gt;
  Maximum number of threads per multiprocessor:  2048&lt;br /&gt;
  Maximum number of threads per block:           1024&lt;br /&gt;
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)&lt;br /&gt;
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)&lt;br /&gt;
  Maximum memory pitch:                          2147483647 bytes&lt;br /&gt;
  Texture alignment:                             512 bytes&lt;br /&gt;
  Concurrent copy and kernel execution:          Yes with 2 copy engine(s)&lt;br /&gt;
  Run time limit on kernels:                     No&lt;br /&gt;
  Integrated GPU sharing Host Memory:            No&lt;br /&gt;
  Support host page-locked memory mapping:       Yes&lt;br /&gt;
  Alignment requirement for Surfaces:            Yes&lt;br /&gt;
  Device has ECC support:                        Enabled&lt;br /&gt;
  Device supports Unified Addressing (UVA):      Yes&lt;br /&gt;
  Device PCI Bus ID / PCI location ID:           4 / 0&lt;br /&gt;
  Compute Mode:&lt;br /&gt;
     &amp;lt; Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) &amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each of the 144 GPU devices delivers a peak single-precision performance of 3,524 GFLOPS. The K20m cards are installed on the motherboard and connected via a PCIe 2.0 x16 interface. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039; GPU Parallel Programming with the Portland Group Compiler Directives &#039;&#039;&#039;&lt;br /&gt;
*&#039;&#039;&#039; Submitting Portland Group, GPU-Parallel Programs Using SLURM &#039;&#039;&#039;&lt;br /&gt;
*&#039;&#039;&#039; GPU Parallel Programming with NVIDIA&#039;s CUDA C or PGI&#039;s CUDA Fortran Programming Models &#039;&#039;&#039;&lt;br /&gt;
** A Sample CUDA GPU Parallel Program Written in NVIDIA&#039;s CUDA C &lt;br /&gt;
** A Sample CUDA GPU Parallel Program Written in PGI&#039;s CUDA Fortran &lt;br /&gt;
*&#039;&#039;&#039; Submitting CUDA (C or Fortran), GPU-Parallel Programs Using SLURM &#039;&#039;&#039;&lt;br /&gt;
*&#039;&#039;&#039; Submitting CUDA (C or Fortran), GPU-Parallel Programs and Functions Using MATLAB &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Two distinct parallel programming approaches for the HPC Center&#039;s GPU resources are described here.&lt;br /&gt;
The first (a compiler-directives-based extension available in the Portland Group, Inc.&#039;s (PGI) C and Fortran&lt;br /&gt;
compilers) delivers ease of use at the expense of somewhat less than highly tuned performance.  The second&lt;br /&gt;
(NVIDIA&#039;s Compute Unified Device Architecture, CUDA C or PGI&#039;s CUDA Fortran GPU programming model)&lt;br /&gt;
provides the ability within C or Fortran to more directly address the GPU hardware for better performance,&lt;br /&gt;
but at the expense of a somewhat greater programming effort.  We will introduce both approaches here,&lt;br /&gt;
and present the basic steps for GPU parallel program compilation and job submission using SLURM for both&lt;br /&gt;
as well.&lt;br /&gt;
&lt;br /&gt;
=== GPU Parallel Programming with the Portland Group Compiler Directives ===&lt;br /&gt;
The Portland Group, Inc. (PGI) has taken the lead in building a general purpose, accelerated parallel computing&lt;br /&gt;
model into its compilers.  Programmers can access this new technology at CUNY using PGI&#039;s compiler, which&lt;br /&gt;
supports the use of GPU-specific, compiler directives in standard C and Fortran programs.  Compiler directives&lt;br /&gt;
simplify the programmer&#039;s job of mapping parallel kernels onto accelerator hardware and do so without compromising&lt;br /&gt;
the portability of the user&#039;s application. Such a directives-parallelized code can be compiled and run on either the&lt;br /&gt;
CPU-GPU together, or on the CPU alone.  At this time, PGI supports the current, HPC-oriented GPU accelerator&lt;br /&gt;
products from NVIDIA, but intends to extend its compiler-directives-based approach in the future to other&lt;br /&gt;
accelerators.&lt;br /&gt;
&lt;br /&gt;
The simplicity of coding with directives is illustrated here with a sample code (&#039;vscale.c&#039;) that does a simple&lt;br /&gt;
iteration independent scaling of a vector on both the GPU and CPU in single precision and compares the results:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
        #include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
        #include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
        #include &amp;lt;assert.h&amp;gt;&lt;br /&gt;
        &lt;br /&gt;
        int main( int argc, char* argv[] )&lt;br /&gt;
        {&lt;br /&gt;
            int n;      /* size of the vector */&lt;br /&gt;
            float *restrict a;  /* the vector */&lt;br /&gt;
            float *restrict r;   /* the results */&lt;br /&gt;
            float *restrict e;  /* expected results */&lt;br /&gt;
            int i;&lt;br /&gt;
&lt;br /&gt;
            /* Set array size */&lt;br /&gt;
            if( argc &amp;gt; 1 )&lt;br /&gt;
                n = atoi( argv[1] );&lt;br /&gt;
            else&lt;br /&gt;
                n = 100000;&lt;br /&gt;
            if( n &amp;lt;= 0 ) n = 100000;&lt;br /&gt;
        &lt;br /&gt;
            /* Allocate memory for arrays */&lt;br /&gt;
            a = (float*)malloc(n*sizeof(float));&lt;br /&gt;
            r = (float*)malloc(n*sizeof(float));&lt;br /&gt;
            e = (float*)malloc(n*sizeof(float));&lt;br /&gt;
&lt;br /&gt;
            /* Initialize array */&lt;br /&gt;
            for( i = 0; i &amp;lt; n; ++i ) a[i] = (float)(i+1);&lt;br /&gt;
        &lt;br /&gt;
            /* Scale array and mark for acceleration */&lt;br /&gt;
            #pragma acc region&lt;br /&gt;
            {&lt;br /&gt;
                for( i = 0; i &amp;lt; n; ++i ) r[i] = a[i]*2.0f;&lt;br /&gt;
            }&lt;br /&gt;
&lt;br /&gt;
            /* Scale array on the host to compare */&lt;br /&gt;
                for( i = 0; i &amp;lt; n; ++i ) e[i] = a[i]*2.0f;&lt;br /&gt;
&lt;br /&gt;
            /* Check the results and print */&lt;br /&gt;
            for( i = 0; i &amp;lt; n; ++i ) assert( r[i] == e[i] );&lt;br /&gt;
&lt;br /&gt;
            printf( &amp;quot;%d iterations completed\n&amp;quot;, n );&lt;br /&gt;
&lt;br /&gt;
            return 0;&lt;br /&gt;
        }&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this simple example, the only code and instruction to the compiler required to direct this vector scaling kernel&lt;br /&gt;
to the GPU is the compiler directive:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 #pragma acc region&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
that precedes the second C &#039;for&#039; loop.   A user can build a GPU-ready executable (&#039;vscale.exe&#039; in this case)&lt;br /&gt;
for execution on &#039;&#039;ZEUS&#039;&#039; or &#039;&#039;ANDY&#039;&#039; with the following compilation statement:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
pgcc -o vscale.exe vscale.c -ta=nvidia -Minfo=accel -fast&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The option &#039;-ta=nvidia&#039; declares to the compiler what the destination hardware acceleration&lt;br /&gt;
technology is going to be (PGI&#039;s model is intended to be general, although its implementation&lt;br /&gt;
for NVIDIA&#039;s GPU accelerators is the most advanced to date), and the &#039;-Minfo=accel&#039; option&lt;br /&gt;
requests output describing what the compiler did to accelerate the code.  This output is&lt;br /&gt;
included here:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
main:&lt;br /&gt;
     29, Generating copyout(r[:n-1])&lt;br /&gt;
           Generating copyin(a[:n-1])&lt;br /&gt;
           Generating compute capability 1.0 binary&lt;br /&gt;
           Generating compute capability 2.0 binary&lt;br /&gt;
     31, Loop is parallelizable&lt;br /&gt;
           Accelerator kernel generated&lt;br /&gt;
           31, #pragma acc for parallel, vector(256) /* blockIdx.x threadIdx.x */&lt;br /&gt;
               CC 1.0 :   3 registers; 48 shared,   4 constant, 0 local memory bytes;   100% occupancy&lt;br /&gt;
               CC 2.0 : 10 registers;   4 shared, 60 constant, 0 local memory bytes;   100% occupancy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the output, the compiler explains where and what it intends to copy to (and from) CPU memory to&lt;br /&gt;
GPU accelerator memory.  It explains that the C &#039;for&#039; loop has no loop iteration dependencies and can be run&lt;br /&gt;
on the accelerator in parallel.  It also indicates the vector length (256, the block size of the work to be done&lt;br /&gt;
on the GPU). Because the array pointer &#039;a[]&#039; is declared with the &#039;restrict&#039; qualifier, it is the only pointer into its array.  This assures&lt;br /&gt;
the compiler that pointer-alias-related loop dependencies cannot occur.  &lt;br /&gt;
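&lt;br /&gt;
The effect of the qualifier can be seen in a minimal standard-C sketch of the same loop (this sketch is ours, not part of the PGI example):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/* With both pointers qualified &#039;restrict&#039;, the compiler may assume&lt;br /&gt;
   r and a never alias, so the loop iterations are provably&lt;br /&gt;
   independent and safe to parallelize or vectorize. */&lt;br /&gt;
void scale( int n, float *restrict r, const float *restrict a )&lt;br /&gt;
{&lt;br /&gt;
    int i;&lt;br /&gt;
    for( i = 0; i &amp;lt; n; ++i ) r[i] = a[i]*2.0f;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;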
&lt;br /&gt;
The Portland Group C and Fortran Programming Guides provide a complete description of PGI&#039;s accelerator compiler&lt;br /&gt;
directives programming model [http://www.pgroup.com/lit/whitepapers/pgi_accel_prog_model_1.3.pdf].  Additional&lt;br /&gt;
introductory material can be found in four PGI white paper tutorials (part1, part2, part3, part4), here:&lt;br /&gt;
[http://www.pgroup.com/lit/articles/insider/v1n1a1.htm], [http://www.pgroup.com/lit/articles/insider/v1n2a1.htm],  [http://www.pgroup.com/lit/articles/insider/v1n3a1.htm], [http://www.pgroup.com/lit/insider/pginsider_v2n1.htm].&lt;br /&gt;
&lt;br /&gt;
=== Submitting Portland Group, GPU-Parallel Programs Using SLURM ===&lt;br /&gt;
GPU job submission is very much like other batch job submission under SLURM.  Here is a SLURM example script that can be used to run the GPU-ready executable&lt;br /&gt;
created above on &#039;&#039;PENZIAS&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SLURM -q production&lt;br /&gt;
#SLURM -N pgi_gpu_job&lt;br /&gt;
#SLURM -l select=1:ncpus=1:ngpus=1&lt;br /&gt;
#SLURM -l place=free&lt;br /&gt;
#SLURM -V&lt;br /&gt;
&lt;br /&gt;
# Find out name of master execution host (compute node)&lt;br /&gt;
echo -n &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; SLURM Master compute node is: &amp;quot;&lt;br /&gt;
hostname&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to the working directory in SLURM&lt;br /&gt;
cd $SLURM_O_WORKDIR&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; Begin PGI GPU Compiler Directives-based run ...&amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
./vscale.exe&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; End   PGI GPU Compiler Directives-based run ...&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The only difference from the non-GPU submit script is in the &amp;quot;select&amp;quot; statement. By adding the &amp;quot;ngpus=1&amp;quot; directive, the user instructs SLURM to allocate 1 GPU device per chunk. Altogether, 1 CPU and 1 GPU are requested in the above script. Consider a different script:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SLURM -q production&lt;br /&gt;
#SLURM -N pgi_gpu_job&lt;br /&gt;
#SLURM -l select=4:ncpus=4:ngpus=2&lt;br /&gt;
#SLURM -l place=free&lt;br /&gt;
#SLURM -V&lt;br /&gt;
&lt;br /&gt;
# Find out name of master execution host (compute node)&lt;br /&gt;
echo -n &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; SLURM Master compute node is: &amp;quot;&lt;br /&gt;
hostname&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to the working directory in SLURM&lt;br /&gt;
cd $SLURM_O_WORKDIR&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; Begin PGI GPU Compiler Directives-based run ...&amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
./vscale.exe&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; End   PGI GPU Compiler Directives-based run ...&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here SLURM is instructed to allocate 4 chunks of resources, each chunk having 4 CPUs and 2 GPUs, for a total of 16 CPUs and 8 GPUs. &lt;br /&gt;
Note that the ngpus parameter may only take the values 0, 1, or 2: there are 2 GPUs per compute node, so a request for more than 2 GPUs per&lt;br /&gt;
chunk will cause SLURM to fail to find a matching compute node (SLURM chunks are &#039;atomic&#039; with respect to actual hardware). This is an important limitation to keep in mind when writing SLURM scripts. &lt;br /&gt;
&lt;br /&gt;
These are the essential SLURM script requirements for submitting any GPU-Device-ready executable.  This applies to&lt;br /&gt;
the directives-based executable compiled above, but the same script might also be used to run GPU-ready executable code generated&lt;br /&gt;
from native CUDA C or Fortran code as described in the next example.  In the case above, the PGI compiler-directive marked&lt;br /&gt;
loops will run in parallel on a single NVIDIA GPU after the data in array &#039;a[]&#039; is copied to it across the PCI-Express bus.&lt;br /&gt;
&lt;br /&gt;
Other variations are possible, including jobs that combine MPI or OpenMP (or even both of these) and GPU parallel&lt;br /&gt;
programming in a single GPU-SMP-MPI multi-parallel job.  There is not enough space to cover these approaches&lt;br /&gt;
here, but the HPC Center staff has created code examples that illustrate these multi-parallel programming model approaches&lt;br /&gt;
and will provide them to interested users at the HPC Center.&lt;br /&gt;
&lt;br /&gt;
=== GPU Parallel Programming with NVIDIA&#039;s CUDA C or PGI&#039;s CUDA Fortran Programming Models ===&lt;br /&gt;
The previous section described the recent advances in compiler development from PGI that make utilizing the data-&lt;br /&gt;
parallel compute power of the GPU more accessible to C and Fortran programmers.  This trend has continued with the&lt;br /&gt;
definition and adoption of the OpenACC standard by PGI, Cray, and CAPS.  OpenACC is an OpenMP-like portable&lt;br /&gt;
standard for obtaining accelerated performance on GPUs and other accelerators using compiler directives.  It is based&lt;br /&gt;
on the approaches already developed by PGI, Cray, and CAPS over the last several years.&lt;br /&gt;
&lt;br /&gt;
Yet, for over 5 years NVIDIA has offered and continued to develop its Compute Unified Device Architecture (CUDA),&lt;br /&gt;
a direct, NVIDIA-GPU-specific programming environment for C programmers.  More recently, PGI has released CUDA&lt;br /&gt;
Fortran jointly with NVIDIA offering a second language choice for programming NVIDIA GPUs using CUDA.&lt;br /&gt;
&lt;br /&gt;
In this section, the basics of compiling and running CUDA C and CUDA Fortran applications at the CUNY HPC Center&lt;br /&gt;
are covered.  The current default version of CUDA in use at the CUNY HPC Center as of 11-27-12 is CUDA release 5.0. &lt;br /&gt;
&lt;br /&gt;
CUDA is a complete programming environment that includes:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
1.  A modified version of the C or Fortran programming language for programming the GPU Device and&lt;br /&gt;
   moving data between the CPU Host and the GPU Device.&lt;br /&gt;
&lt;br /&gt;
2. A runtime environment and translator that generates and runs device-specific, CPU-GPU&lt;br /&gt;
  executables from more generic, single, mixed-instruction-set executables.&lt;br /&gt;
&lt;br /&gt;
3. A Software Development Kit (SDK), HPC application-related libraries, and documentation&lt;br /&gt;
  to support the development of CUDA applications.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
NVIDIA and PGI have put a lot of effort into making CUDA a flexible, full-featured, and high-performance&lt;br /&gt;
programming environment similar to those in use in HPC to program CPUs.  However, CUDA is still a 2-instruction-set,&lt;br /&gt;
CPU-GPU programming model that must manage two separate memory spaces linked only by the compute node&#039;s&lt;br /&gt;
PCI-Express bus.  As such, programming GPUs using CUDA is more complicated than PGI&#039;s compiler-directives-based&lt;br /&gt;
approach presented above, which hides many of these details from the programmer.  Still, CUDA&#039;s more explicit,&lt;br /&gt;
close-to-the-hardware approach offers CUDA programmers the chance to get the best possible performance from&lt;br /&gt;
the GPU for their particular application by carefully controlling SM register use and occupancy.&lt;br /&gt;
&lt;br /&gt;
Adapting a current application or writing a new one for the CUDA CPU-GPU programming model involves&lt;br /&gt;
dividing that application into those parts that are highly data-parallel and better suited for the GPU Device (the&lt;br /&gt;
so-called GPU Device code, or device kernel(s)) and those parts that have little or limited data-parallelism and&lt;br /&gt;
are better suited for execution on the CPU Host (the driver code, or the CPU Host code).  In addition, one should inventory&lt;br /&gt;
the amount of data that must be moved between the CPU Host and GPU Device relative to the amount of GPU&lt;br /&gt;
computation for each candidate data-parallel GPU kernel.  Kernels whose compute-to-communication time ratios&lt;br /&gt;
are too small should be executed on the CPU.&lt;br /&gt;
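&lt;br /&gt;
As a rough, illustrative back-of-the-envelope check (our numbers; PCIe 2.0 x16 peaks at about 8 GBytes/sec per direction):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Transfer time for a 100 MByte array:  0.1 GByte / 8 GByte/s  =  ~12.5 ms each way&lt;br /&gt;
&lt;br /&gt;
A kernel operating on that array should therefore represent well over&lt;br /&gt;
~25 ms of work for the round trip to the GPU Device to pay off.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;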
&lt;br /&gt;
With the natural GPU-CPU divisions in the application identified, what were once host kernels (usually substantial&lt;br /&gt;
looping sections in the host code) must be recoded in CUDA C or Fortran for the GPU Device.   Also, Host CPU-&lt;br /&gt;
to-GPU interface code for transferring data to and from the GPU, and for calling the GPU kernel must be written.&lt;br /&gt;
Once these steps are completed and the host driver and GPU kernel code are compiled with NVIDIA&#039;s &#039;nvcc&#039; compiler&lt;br /&gt;
driver (or PGI CUDA Fortran compiler), the result is a fully executable mixed CPU-GPU binary (single file, dual instruction set)&lt;br /&gt;
that typically does the following for each GPU kernel it calls:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
1.  Allocates memory for required CPU source and destination arrays on the CPU Host.&lt;br /&gt;
&lt;br /&gt;
2.  Allocates memory for GPU input, intermediate, and result arrays on the GPU Device.&lt;br /&gt;
&lt;br /&gt;
3.  Initializes and/or assigns values to these arrays.&lt;br /&gt;
&lt;br /&gt;
4.  Copies any required CPU Host input data to the GPU Device.&lt;br /&gt;
&lt;br /&gt;
5.  Defines the GPU Device grid, block, and thread dimensions for each GPU kernel.&lt;br /&gt;
&lt;br /&gt;
6.  Calls (executes) the GPU Device kernel code from the CPU Host driver code.&lt;br /&gt;
&lt;br /&gt;
7.  Copies the required GPU Device results back to the CPU Host.&lt;br /&gt;
&lt;br /&gt;
8.  Frees (and perhaps zeroes) memory on the CPU Host and GPU Device that is no longer needed.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The details of the actual coding process are beyond the scope of the discussion here, but are treated in depth&lt;br /&gt;
in NVIDIA&#039;s CUDA C Training Class notes, in NVIDIA&#039;s CUDA C Programming Guide, and in PGI&#039;s CUDA Fortran Programming Guide [http://developer.download.nvidia.com/compute/cuda/3_0/toolkit/docs/NVIDIA_CUDA_ProgrammingGuide.pdf],&lt;br /&gt;
[http://www.pgroup.com/doc/pgicudaforug.pdf] and in many tutorials and articles on the web &lt;br /&gt;
[http://www.pgroup.com/lit/articles/insider/v1n3a2.htm].&lt;br /&gt;
&lt;br /&gt;
==== A Sample CUDA GPU Parallel Program Written in NVIDIA&#039;s CUDA C ====&lt;br /&gt;
Here, we present a basic example of a CUDA C application that includes code for all the steps outlined above.  It fills&lt;br /&gt;
and then increments a 2D array on the GPU Device and returns the results to the CPU Host for printing.  The example&lt;br /&gt;
code is presented in two parts--the CPU Host setup or driver code, and the GPU Device or kernel code.  This&lt;br /&gt;
example comes from the suite of examples used by NVIDIA in its CUDA Training Class notes.  There are many&lt;br /&gt;
more involved and HPC-relevant examples (matrixMul, binomialOptions, simpleCUFFT, etc.) provided in NVIDIA&#039;s&lt;br /&gt;
Software Development Toolkit (SDK) which any user of CUDA may download and install in their home directory&lt;br /&gt;
on their CUNY HPC Center account. &lt;br /&gt;
&lt;br /&gt;
The basic example&#039;s CPU Host CUDA C code or driver, simple3_host.cu, is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
extern __global__ void mykernel(int *d_a, int dimx, int dimy);&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char *argv[])&lt;br /&gt;
{&lt;br /&gt;
   int dimx = 16;&lt;br /&gt;
   int dimy = 16;&lt;br /&gt;
   int num_bytes = dimx * dimy * sizeof(int);&lt;br /&gt;
&lt;br /&gt;
   /* Initialize Host and Device Pointers */&lt;br /&gt;
   int *d_a = 0, *h_a = 0;&lt;br /&gt;
&lt;br /&gt;
   /* Allocate memory on the Host and Device */&lt;br /&gt;
   h_a = (int *) malloc(num_bytes);&lt;br /&gt;
   cudaMalloc( (void**) &amp;amp;d_a, num_bytes);&lt;br /&gt;
&lt;br /&gt;
   if( 0 == h_a || 0 == d_a ) {&lt;br /&gt;
       printf(&amp;quot;couldn&#039;t allocate memory\n&amp;quot;); return 1;&lt;br /&gt;
   }&lt;br /&gt;
&lt;br /&gt;
   /* Initialize Device memory */&lt;br /&gt;
   cudaMemset(d_a, 0, num_bytes);&lt;br /&gt;
&lt;br /&gt;
   /* Define kernel grid and block size */&lt;br /&gt;
   dim3 grid, block;&lt;br /&gt;
   block.x = 4;&lt;br /&gt;
   block.y = 4;&lt;br /&gt;
   grid.x = dimx/block.x;&lt;br /&gt;
   grid.y = dimy/block.y;&lt;br /&gt;
&lt;br /&gt;
   /* Call Device kernel, asynchronously */&lt;br /&gt;
   mykernel&amp;lt;&amp;lt;&amp;lt;grid,block&amp;gt;&amp;gt;&amp;gt;(d_a, dimx, dimy);&lt;br /&gt;
&lt;br /&gt;
   /* Copy results from the Device to the Host*/&lt;br /&gt;
   cudaMemcpy(h_a,d_a,num_bytes,cudaMemcpyDeviceToHost);&lt;br /&gt;
&lt;br /&gt;
   /* Print out the results from the Host */&lt;br /&gt;
   for(int row = 0; row &amp;lt; dimy; row++) {&lt;br /&gt;
      for(int col = 0; col &amp;lt; dimx; col++) {&lt;br /&gt;
         printf(&amp;quot;%d&amp;quot;, h_a[row*dimx+col]);&lt;br /&gt;
      }&lt;br /&gt;
      printf(&amp;quot;\n&amp;quot;);&lt;br /&gt;
   }&lt;br /&gt;
&lt;br /&gt;
   /* Free the allocated memory on the Device and Host */&lt;br /&gt;
   free(h_a);&lt;br /&gt;
   cudaFree(d_a);&lt;br /&gt;
&lt;br /&gt;
   return 0;&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The GPU Device CUDA C kernel code, simple3_device.cu, is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
__global__ void mykernel(int *a, int dimx, int dimy)&lt;br /&gt;
{&lt;br /&gt;
   int ix = blockIdx.x*blockDim.x + threadIdx.x;&lt;br /&gt;
   int iy = blockIdx.y*blockDim.y + threadIdx.y;&lt;br /&gt;
   int idx = iy * dimx + ix;&lt;br /&gt;
&lt;br /&gt;
   a[idx] = a[idx] + 1;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
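&lt;br /&gt;
For comparison, the same computation without a GPU reduces to a single serial loop over the array elements.  The&lt;br /&gt;
following plain C sketch (our addition, not part of the NVIDIA example) produces the same 16 x 16 grid of ones on the&lt;br /&gt;
CPU alone and can be used to check the CUDA version&#039;s output:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(void)&lt;br /&gt;
{&lt;br /&gt;
   int dimx = 16, dimy = 16;&lt;br /&gt;
&lt;br /&gt;
   /* calloc() zeroes the array, as cudaMemset() does in the CUDA version */&lt;br /&gt;
   int *a = (int *) calloc(dimx * dimy, sizeof(int));&lt;br /&gt;
   if( 0 == a ) {&lt;br /&gt;
      printf(&amp;quot;couldn&#039;t allocate memory\n&amp;quot;); return 1;&lt;br /&gt;
   }&lt;br /&gt;
&lt;br /&gt;
   /* Serial equivalent of the mykernel launch: one pass per element */&lt;br /&gt;
   for(int idx = 0; idx &amp;lt; dimx * dimy; idx++)&lt;br /&gt;
      a[idx] = a[idx] + 1;&lt;br /&gt;
&lt;br /&gt;
   for(int row = 0; row &amp;lt; dimy; row++) {&lt;br /&gt;
      for(int col = 0; col &amp;lt; dimx; col++) {&lt;br /&gt;
         printf(&amp;quot;%d&amp;quot;, a[row*dimx+col]);&lt;br /&gt;
      }&lt;br /&gt;
      printf(&amp;quot;\n&amp;quot;);&lt;br /&gt;
   }&lt;br /&gt;
&lt;br /&gt;
   free(a);&lt;br /&gt;
   return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;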
&lt;br /&gt;
Using these simple CUDA C routines (or code that you have developed yourself), one can easily create a CPU-GPU&lt;br /&gt;
executable that is ready to run on one of the CUNY HPC Center&#039;s GPU-enabled systems (PENZIAS).&lt;br /&gt;
&lt;br /&gt;
Because of the variety of source and destination code types that the CUDA programming environment must read,&lt;br /&gt;
generate, and manage, NVIDIA has provided a master program, &#039;nvcc&#039;, called the &#039;&#039;&#039;CUDA compiler driver&#039;&#039;&#039;, to handle all&lt;br /&gt;
of these possible compilation-phase translations as well as other compiler driver options.  The detailed use of &#039;nvcc&#039; is&lt;br /&gt;
documented on &amp;quot;PENZIAS&amp;quot; by &#039;man nvcc&#039; and also in NVIDIA&#039;s Compiler Driver Manual&lt;br /&gt;
[http://www.google.com/search?hl=en&amp;amp;q=CUDA+Compiler+Driver+December&amp;amp;aq=f&amp;amp;aqi=&amp;amp;aql=&amp;amp;oq=&amp;amp;gs_rfai=].&lt;br /&gt;
NOTE: Compiling CUDA Fortran programs can be accomplished using PGI&#039;s standard release Fortran compiler making sure&lt;br /&gt;
that the CUDA Fortran code is marked with the &#039;.CUF&#039; suffix as in &#039;matmul.CUF&#039;.  More on this a bit later.&lt;br /&gt;
&lt;br /&gt;
Among the &#039;nvcc&#039; command&#039;s many groups of options are a series of options that determine what source files &lt;br /&gt;
&#039;nvcc&#039; should expect to be offered and what destination files it is expected to produce.  A sampling of these &lt;br /&gt;
&#039;&#039;&#039;compilation phase&#039;&#039;&#039; options includes:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
--compile    or -c       ::    Compile whatever input files are offered (.c, .cc, .cpp, .cu) into object files (*.o file).&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
--ptx      or -ptx     ::    Compile all .gpu or .cu input files into device-only .ptx files.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
--link     or -link     ::    Compile whatever input files are offered into an executable (the default).&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
--lib     or -lib        ::    Compile whatever input files are offered into a library file (*.a file).&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a typical compilation to an executable, the third option above (the default, selected by supplying nothing or simply&lt;br /&gt;
the string &#039;-link&#039;) is used. There are a multitude of other &#039;nvcc&#039; options that control file and path specifications&lt;br /&gt;
for libraries and include files, that control and pass options to &#039;nvcc&#039; companion compilers and linkers (this includes&lt;br /&gt;
much of the gcc stack, which must be in the user&#039;s path for &#039;nvcc&#039; to work correctly), and that control code generation,&lt;br /&gt;
among other things.  For a complete description, please see the manual referred to above or the &#039;nvcc&#039; man page.&lt;br /&gt;
All this complexity relates to the fact that with CUDA one is working in a multi-source and meta-code environment.&lt;br /&gt;
&lt;br /&gt;
Our concern here is generating an executable from the simple example files presented above that can be used (like&lt;br /&gt;
the PGI executables generated in the previous section) in a SLURM batch submission script.   First, we will produce&lt;br /&gt;
object files (*.o files), and then we will link them into a GPU-Device-ready executable.  Here are the &#039;nvcc&#039;&lt;br /&gt;
commands for generating the object files: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
nvcc -c  simple3_host.cu&lt;br /&gt;
nvcc -c  simple3_device.cu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The above commands should be familiar to C programmers and will produce 2 object files,  simple3_host.o and&lt;br /&gt;
simple3_device.o in the working directory.   Next, the GPU-Device-ready executable is created with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
nvcc -o simple3.exe *.o&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Again, this should be very familiar to C programmers.  It should be noted that these two steps can be combined&lt;br /&gt;
as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
nvcc -o simple3.exe *.cu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
No additional libraries or include files are required for this simple example, but in a more complex case like&lt;br /&gt;
those provided in the CUDA Software Development Kit (SDK),  library paths and libraries might be specified &lt;br /&gt;
using the &#039;-L&#039; and &#039;-l&#039; options, include file paths with the &#039;-I&#039; option, among others.  Again, details are provided&lt;br /&gt;
in the &#039;nvcc&#039; man page or NVIDIA Compiler Driver manual. &lt;br /&gt;
&lt;br /&gt;
We now have an executable code, &#039;simple3.exe&#039;, that can be submitted with SLURM to one of the GPU-enabled&lt;br /&gt;
compute nodes on PENZIAS and that will create and increment a 2D matrix on the GPU, return the results to&lt;br /&gt;
the CPU, and print them out.&lt;br /&gt;
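&lt;br /&gt;
If the job runs successfully, the printing loop produces a 16 x 16 block of ones, since each array element is incremented&lt;br /&gt;
exactly once on the GPU:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
1111111111111111&lt;br /&gt;
1111111111111111&lt;br /&gt;
...  (16 identical rows in all)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;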
&lt;br /&gt;
==== A Sample CUDA GPU Parallel Program Written in PGI&#039;s CUDA Fortran ====&lt;br /&gt;
As mentioned, in addition to CUDA C, PGI and NVIDIA have jointly developed a CUDA Fortran programming model&lt;br /&gt;
and CUDA Fortran compiler.  CUDA Fortran has been fully integrated into PGI&#039;s Fortran programming environment.&lt;br /&gt;
The HPC Center&#039;s version of the PGI Fortran compiler fully supports CUDA Fortran.  &lt;br /&gt;
&lt;br /&gt;
Here, the same example presented above in CUDA C has been translated by HPC Center staff into CUDA Fortran.&lt;br /&gt;
The CUDA Fortran host driver or main program that runs on the compute node host is presented first followed by the&lt;br /&gt;
CUDA Fortran device or GPU code.  The CUDA Fortran model proves to be economical and elegant because it can take&lt;br /&gt;
advantage of Fortran&#039;s array-based syntax.  For instance in CUDA Fortran moving data to and from the device does&lt;br /&gt;
NOT require calls to cudaMemcpy() or cudaMemset(), but is accomplished using Fortran&#039;s native array assignment&lt;br /&gt;
capability across a simple assignment &#039;=&#039; sign.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
   program simple3&lt;br /&gt;
!&lt;br /&gt;
   use cudafor&lt;br /&gt;
   use mykernel&lt;br /&gt;
!&lt;br /&gt;
   implicit none&lt;br /&gt;
!&lt;br /&gt;
   integer :: dimx = 16, dimy = 16&lt;br /&gt;
   integer :: row = 1, col = 1&lt;br /&gt;
   integer :: fail = 0&lt;br /&gt;
   integer :: asize = 0&lt;br /&gt;
!&lt;br /&gt;
   integer, allocatable, dimension(:) :: host_a&lt;br /&gt;
   integer, device, allocatable, dimension(:) :: dev_a&lt;br /&gt;
!&lt;br /&gt;
   type(dim3) :: grid, block&lt;br /&gt;
&lt;br /&gt;
   asize = dimx * dimy&lt;br /&gt;
&lt;br /&gt;
   allocate(host_a(asize),dev_a(asize),stat=fail)&lt;br /&gt;
&lt;br /&gt;
   if(fail /= 0) then&lt;br /&gt;
      write(*,&#039;(a)&#039;) &#039;couldn&#039;&#039;t allocate memory&#039;&lt;br /&gt;
      stop&lt;br /&gt;
   end if&lt;br /&gt;
&lt;br /&gt;
   dev_a(:) = 0&lt;br /&gt;
&lt;br /&gt;
   block = dim3(4,4,1)&lt;br /&gt;
   grid  = dim3(dimx/4,dimy/4,1)&lt;br /&gt;
&lt;br /&gt;
   call mykernel&amp;lt;&amp;lt;&amp;lt;grid,block&amp;gt;&amp;gt;&amp;gt;(dev_a,dimx,dimy)&lt;br /&gt;
&lt;br /&gt;
   host_a(:) = dev_a(:)&lt;br /&gt;
&lt;br /&gt;
   do row=1,dimy&lt;br /&gt;
      do col=1,dimx&lt;br /&gt;
         write(*,&#039;(i1)&#039;, advance=&#039;no&#039;) host_a((row-1)*dimx+col)&lt;br /&gt;
      end do&lt;br /&gt;
      write(*,&#039;(/)&#039;, advance=&#039;no&#039;)&lt;br /&gt;
   end do&lt;br /&gt;
&lt;br /&gt;
   deallocate(host_a,dev_a)&lt;br /&gt;
&lt;br /&gt;
   end program&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the CUDA Fortran device code:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module mykernel&lt;br /&gt;
!&lt;br /&gt;
   contains&lt;br /&gt;
!&lt;br /&gt;
   attributes(global) subroutine mykernel(dev_a,dimx,dimy)&lt;br /&gt;
!&lt;br /&gt;
   integer, device, dimension(:) :: dev_a&lt;br /&gt;
   integer, value  :: dimx, dimy&lt;br /&gt;
!&lt;br /&gt;
   integer :: ix, iy&lt;br /&gt;
   integer :: idx&lt;br /&gt;
&lt;br /&gt;
   ix = (blockidx%x-1)*blockdim%x + threadidx%x&lt;br /&gt;
   iy = (blockidx%y-1)*blockdim%y + (threadidx%y-1)&lt;br /&gt;
   idx = iy * dimx + ix&lt;br /&gt;
&lt;br /&gt;
   dev_a(idx) = dev_a(idx) + 1&lt;br /&gt;
&lt;br /&gt;
   end subroutine&lt;br /&gt;
&lt;br /&gt;
end module mykernel&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Compiling CUDA Fortran code is also simple, requiring nothing more than the default&lt;br /&gt;
PGI compiler.  Here is how the above code would be compiled in to a device-ready&lt;br /&gt;
executable that could be submitted in the same manner as the CUDA C original.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
pgf90 -Mcuda -fast -o simple3.exe simple3.CUF&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The primary thing to remember is to use the &#039;.CUF&#039; suffix on all CUDA Fortran source&lt;br /&gt;
files.  As mentioned above, the basics of CUDA Fortran are presented here&lt;br /&gt;
[http://www.pgroup.com/doc/pgicudaforug.pdf].&lt;br /&gt;
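&lt;br /&gt;
If the host program and the kernel module are kept in separate files (as in the CUDA C example above), standard Fortran&lt;br /&gt;
module rules apply: the file containing the &#039;&#039;mykernel&#039;&#039; module must be compiled first, or listed first on the command&lt;br /&gt;
line, so that its .mod file exists when the main program is compiled.  A hypothetical two-file compilation (the file&lt;br /&gt;
names are illustrative) would be:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
pgf90 -Mcuda -fast -o simple3.exe simple3_device.CUF simple3_host.CUF&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;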
&lt;br /&gt;
=== Submitting CUDA (C or Fortran), GPU-Parallel Programs Using SLURM ===&lt;br /&gt;
The SLURM script for submitting the &#039;simple3.exe&#039; executable generated by the &#039;nvcc&#039; compiler&lt;br /&gt;
driver to PENZIAS is very similar to the script used for the PGI executable provided above: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition=production&lt;br /&gt;
#SBATCH --job-name=CUDA_GPU_job&lt;br /&gt;
#SBATCH --nodes=1 --ntasks=1&lt;br /&gt;
#SBATCH --gres=gpu:1&lt;br /&gt;
#SBATCH --export=ALL&lt;br /&gt;
&lt;br /&gt;
# Find out name of master execution host (compute node)&lt;br /&gt;
echo -n &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; SLURM Master compute node is: &amp;quot;&lt;br /&gt;
hostname&lt;br /&gt;
&lt;br /&gt;
# Change to the directory from which the job was submitted&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
# Point to the CUDA executable to run the job&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; Begin SIMPLE CUDA C or Fortran Run ...&amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
./simple3.exe&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; End   SIMPLE CUDA C or Fortran Run ...&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This script is almost the same as the one explained in the article &amp;quot;Submitting Portland Group, GPU-Parallel Programs Using SLURM&amp;quot; above.&lt;br /&gt;
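&lt;br /&gt;
Assuming the script above has been saved to a file named &#039;cuda_gpu.job&#039; (the file name is arbitrary), it is submitted&lt;br /&gt;
to the batch system with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sbatch cuda_gpu.job&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;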
&lt;br /&gt;
These are the essential SLURM script requirements for submitting any GPU-Device-ready executable.  This applies to&lt;br /&gt;
both GPU-ready executable code generated from native CUDA C or Fortran code, and compiler-directives-based&lt;br /&gt;
GPU code. Other variations are possible, including jobs that combine MPI or OpenMP (or even both) with GPU parallel&lt;br /&gt;
programming in a single GPU-SMP-MPI multi-parallel job.  These other options are discussed in the more detailed&lt;br /&gt;
section on SLURM below.  The HPC Center staff has developed a series of sample codes showing all these multi-&lt;br /&gt;
parallel programming model combinations based on a simple Monte Carlo algorithm for calculating the price of an&lt;br /&gt;
option.  To obtain this example code suite, makefile, and submit scripts, please send a request to &#039;&#039;hpchelp@csi.cuny.edu&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
=== Submitting CUDA (C or Fortran), GPU-Parallel Programs and Functions Using MATLAB ===&lt;br /&gt;
Please refer to the details in the subsection on MATLAB GPU computing below, within the larger section on using&lt;br /&gt;
MATLAB at the CUNY HPC Center.&lt;br /&gt;
&lt;br /&gt;
== [[CoArray Fortran and Unified Parallel C (PGAS) Program Compilation and SLURM Job Submission]] ==&lt;br /&gt;
As part of its plan to offer CUNY HPC Center users a unique variety of HPC parallel programming alternatives&lt;br /&gt;
(beyond even those described above), the HPC Center supports a two-cabinet, 2,816-core Cray XE6m system&lt;br /&gt;
called SALK. This system supports two newer and similar, language-integrated and highly scalable approaches&lt;br /&gt;
to parallel programming, CoArray Fortran (CAF) and Unified Parallel C (UPC).  Both are extensions of their parent&lt;br /&gt;
languages, Fortran and C respectively, and offer a symbolically concise alternative to the &#039;&#039;de facto&#039;&#039; standard,&lt;br /&gt;
message-passing model, MPI.  CAF and UPC are so-called Partitioned Global Address Space (PGAS) parallel &lt;br /&gt;
programming models.  Unlike MPI, CAF and UPC are not based on a subroutine library call API.&lt;br /&gt;
&lt;br /&gt;
Both MPI and the PGAS approach to parallel programming rely on a Single Program Multiple Data (SPMD)&lt;br /&gt;
model.  In the SPMD parallel programming model, identical collaborating programs (with fully separate memory&lt;br /&gt;
spaces, or program images) are executed by different processors that may or may not be separated by a network.&lt;br /&gt;
Each processor-program produces different parts of the result in parallel by working on different data and taking&lt;br /&gt;
conditionally different paths through the same code. The PGAS approach differs from MPI in that it abstracts away&lt;br /&gt;
as much as possible, reducing the way that communication is expressed to minimal built-in extensions to the base&lt;br /&gt;
language, in our case C and Fortran.  In large part, CAF and UPC are free of extension-related, explicit library calls.&lt;br /&gt;
With the underlying communication layer abstracted away, PGAS languages &#039;&#039;appear&#039;&#039; to provide a single, global&lt;br /&gt;
memory space spanning their processes.&lt;br /&gt;
&lt;br /&gt;
In addition, communication among processes in a PGAS program is &#039;&#039;one-sided&#039;&#039; in the sense that any process&lt;br /&gt;
can read and/or write into the memory of any other process without informing it of its actions.  Such one-sided&lt;br /&gt;
communication has the advantage of being economical, lowering the latency (first byte delay) that is part of the cost&lt;br /&gt;
of communication among different parallel processes.  Lower latency parallel programs are generally more scalable&lt;br /&gt;
because they waste less time in communication, especially when the data to be moved are small in size, in finer-grained&lt;br /&gt;
communication patterns.&lt;br /&gt;
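&lt;br /&gt;
As a brief, hypothetical illustration (not taken from the example code below), a one-sided transfer in CAF is written as&lt;br /&gt;
a simple assignment in which a square-bracketed co-index names the target image:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
real :: x[*]                          ! one copy of x exists on every image&lt;br /&gt;
&lt;br /&gt;
if (this_image() == 2) x[1] = 3.14    ! image 2 writes directly into image 1&#039;s copy&lt;br /&gt;
sync all                              ! make the write visible before other images read x&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;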
&lt;br /&gt;
*&#039;&#039;&#039; An Example CoArray Fortran (CAF) Code &#039;&#039;&#039;&lt;br /&gt;
*&#039;&#039;&#039; Submitting CoArray Fortran Parallel Programs Using SLURM &#039;&#039;&#039;&lt;br /&gt;
*&#039;&#039;&#039; An Example Unified Parallel C (UPC) Code &#039;&#039;&#039;&lt;br /&gt;
*&#039;&#039;&#039; Submitting UPC Parallel Programs Using SLURM &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Summarizing, PGAS languages such as CAF and UPC offer the following &#039;&#039;potential&#039;&#039; advantages over MPI:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
1. Explicit communication is abstracted out of the PGAS programming model.&lt;br /&gt;
&lt;br /&gt;
2. Process memory is logically unified into a global address space.&lt;br /&gt;
&lt;br /&gt;
3. Parallel work is economically expressed through simple extensions&lt;br /&gt;
    to a base language, rather than through a library-call-based API.&lt;br /&gt;
&lt;br /&gt;
4. Parallel coding is easier and more intuitive.&lt;br /&gt;
&lt;br /&gt;
5. Performance and scalability are better because communication latency is lower.&lt;br /&gt;
&lt;br /&gt;
6. Implementation of fine-grained communication patterns is faster, easier.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The primary drawbacks of PGAS programming models include much less wide-spread support than&lt;br /&gt;
MPI on common case HPC system architectures such as traditional HPC clusters, and the need for special&lt;br /&gt;
hardware support to get best-case performance out of the PGAS model.  Here at the CUNY HPC Center,&lt;br /&gt;
the Cray XE6m system, SALK, has a custom interconnect (Gemini) that supports both UPC and CAF.  These&lt;br /&gt;
PGAS languages can be run on standard clusters, but the performance is typically not as good. The HPC Center&lt;br /&gt;
supports Berkeley UPC and Intel CAF on top of standard cluster interconnects without the advantage of PGAS&lt;br /&gt;
hardware support.&lt;br /&gt;
&lt;br /&gt;
=== An Example CoArray Fortran (CAF) Code ===&lt;br /&gt;
The following simple example program includes some of the essential features of the CoArray Fortran (CAF) &lt;br /&gt;
programming model, including multiple processor, image-spanning co-array variable declaration; one-sided&lt;br /&gt;
data transfer between CAF&#039;s memory-space-distinct images via simple assignment statements; and the use of&lt;br /&gt;
critical regions and synchronization barriers.  No attempt is made here to tutor the reader in all of the features&lt;br /&gt;
of CAF; rather, the goal is to give the reader a feel for the CAF extensions adopted in the Fortran 2008&lt;br /&gt;
programming language standard, which now includes CoArrays.   This example, which computes PI by numerical&lt;br /&gt;
integration, can be cut and pasted into a file and run on SALK.&lt;br /&gt;
&lt;br /&gt;
A tutorial on the CAF parallel programming model can be found here [http://www2.hpcl.gwu.edu/pgas09/tutorials/caf_tut.pdf],&lt;br /&gt;
a more formal description of the language specifications here [http://caf.rice.edu/documentation/John-Reid-N1824-2010-04-21.pdf],&lt;br /&gt;
and the actual CAF standard document as defined and adopted by the Fortran standard&#039;s committee&lt;br /&gt;
for Fortran 2008 here [http://caf.rice.edu/documentation/Fortran-2008-Draft-2010-04-20.pdf]. &lt;br /&gt;
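&lt;br /&gt;
The scheme used below is the midpoint rule applied to a standard integral identity for PI; with &#039;&#039;nseg&#039;&#039; segments of&lt;br /&gt;
width 1/nseg, the images together accumulate the sum:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
PI  =  Integral over [0,1] of  4/(1+x^2) dx&lt;br /&gt;
&lt;br /&gt;
   ~=  (4/nseg) * Sum(i=0..nseg-1) of  1/(1 + ((i+0.5)/nseg)^2)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;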
&amp;lt;pre&amp;gt;&lt;br /&gt;
! &lt;br /&gt;
!  Computing PI by Numerical Integration in CAF&lt;br /&gt;
!&lt;br /&gt;
&lt;br /&gt;
program int_pi&lt;br /&gt;
!&lt;br /&gt;
implicit none&lt;br /&gt;
!&lt;br /&gt;
integer :: start, end&lt;br /&gt;
integer :: my_image, tot_images&lt;br /&gt;
integer :: i = 0, rem = 0, mseg = 0, nseg = 0&lt;br /&gt;
!&lt;br /&gt;
real :: f, x&lt;br /&gt;
!&lt;br /&gt;
&lt;br /&gt;
! Declare two CAF scalar CoArrays, each with one copy per image&lt;br /&gt;
&lt;br /&gt;
real :: local_pi[*], global_pi[*]&lt;br /&gt;
&lt;br /&gt;
! Define integrand with Fortran statement function, set result&lt;br /&gt;
! accuracy through the number of segments&lt;br /&gt;
&lt;br /&gt;
f(x) = 1.0/(1.0+x*x)&lt;br /&gt;
nseg = 4096&lt;br /&gt;
&lt;br /&gt;
! Find out my image name and the total number of images&lt;br /&gt;
&lt;br /&gt;
my_image   = this_image()&lt;br /&gt;
tot_images = num_images()&lt;br /&gt;
&lt;br /&gt;
! Each image initializes its part of the CoArrays to zero&lt;br /&gt;
&lt;br /&gt;
local_pi  = 0.0&lt;br /&gt;
global_pi = 0.0&lt;br /&gt;
&lt;br /&gt;
! Partition integrand segments across CAF images (processors)&lt;br /&gt;
&lt;br /&gt;
rem = mod(nseg,tot_images)&lt;br /&gt;
&lt;br /&gt;
mseg  = nseg / tot_images&lt;br /&gt;
start = mseg * (my_image - 1)&lt;br /&gt;
end   = (mseg * my_image) - 1&lt;br /&gt;
&lt;br /&gt;
if ( my_image .eq. tot_images ) end = end + rem&lt;br /&gt;
&lt;br /&gt;
! Compute local partial sums on each CAF image (processor)&lt;br /&gt;
&lt;br /&gt;
do i = start,end&lt;br /&gt;
  local_pi = local_pi + f((.5 + i)/(nseg))&lt;br /&gt;
&lt;br /&gt;
! The above is equivalent to the following more explicit code:&lt;br /&gt;
!&lt;br /&gt;
! local_pi[my_image]= local_pi[my_image] + f((.5 + i)/(nseg))&lt;br /&gt;
!&lt;br /&gt;
&lt;br /&gt;
enddo&lt;br /&gt;
&lt;br /&gt;
local_pi = local_pi * 4.0 / nseg&lt;br /&gt;
&lt;br /&gt;
! Add local, partial sums to single global sum on image 1 only. Use&lt;br /&gt;
! critical region to prevent read-before-write race conditions. In such&lt;br /&gt;
! a region, only one image at a time may pass.&lt;br /&gt;
&lt;br /&gt;
critical&lt;br /&gt;
 global_pi[1] = global_pi[1] + local_pi&lt;br /&gt;
end critical&lt;br /&gt;
&lt;br /&gt;
! Ensure all partial sums have been added using CAF &#039;sync all&#039; barrier&lt;br /&gt;
! construct before writing out results&lt;br /&gt;
&lt;br /&gt;
sync all&lt;br /&gt;
&lt;br /&gt;
! Only CAF image 1 prints the global result&lt;br /&gt;
&lt;br /&gt;
if( this_image() == 1) write(*,&amp;quot;(&#039;PI = &#039;, f10.6)&amp;quot;) global_pi&lt;br /&gt;
&lt;br /&gt;
end program&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This sample code computes PI in parallel using a numerical integration scheme. Taking its key CAF-specific features in order, first&lt;br /&gt;
we find the declaration of two simple scalar co-arrays (local_pi and global_pi) using CAF&#039;s square-bracket&lt;br /&gt;
notation for the co-array (e.g. &#039;&#039;sname[*]&#039;&#039;, &#039;&#039;vname(1:100)[*]&#039;&#039;, or &#039;&#039;vname(1:8,1:4)[1:4,*]&#039;&#039;).  The square-bracket notation follows the&lt;br /&gt;
standard Fortran array notation rules, except that the last co-dimension is always indicated with an asterisk (&#039;&#039;&#039;*&#039;&#039;&#039;) that is expanded to&lt;br /&gt;
ensure that the number of co-array copies equals the number of images (processes) the application has launched.  &lt;br /&gt;
&lt;br /&gt;
Next,  the example uses the &#039;&#039;this_image()&#039;&#039; and &#039;&#039;num_images()&#039;&#039; intrinsic functions to determine each image&#039;s image ID (a number&lt;br /&gt;
from 1 to the number of processors requested) and the total number of images or processes requested by the job.  The values&lt;br /&gt;
returned by these functions are stored in typical, image-local, Fortran integer variables and are used later in the example to partition the work&lt;br /&gt;
among the processors and define image-specific paths through the code.  After the integral segments are partitioned among the&lt;br /&gt;
CoArray images or processes (using the &#039;&#039;start&#039;&#039; and &#039;&#039;end&#039;&#039; variables), each image computes its piece of the integral in a&lt;br /&gt;
standard Fortran &#039;&#039;do loop&#039;&#039;.  However, the variable &#039;&#039;local_pi&#039;&#039;, as noted above, is a co-array.  Two notations, one implicit and one&lt;br /&gt;
explicit (but commented out), are presented.  The implicit code, with its square-bracket notation dropped, is allowed (and encouraged&lt;br /&gt;
for optimization reasons) when only the image-local part of a co-array is referenced by a given image.  The explicit code makes it&lt;br /&gt;
clear through the square-bracket suffix &#039;&#039;[my_image]&#039;&#039; that each image is working with a local element of the &#039;&#039;local_pi&#039;&#039; co-array.&lt;br /&gt;
When the practice of dropping the []s is adopted as a notational convention, all remote co-array references (which are more time-&lt;br /&gt;
consuming operations) are immediately and visually identifiable by their square-bracket suffixes in the code.  Optimal coding&lt;br /&gt;
practice should seek to minimize the use of square-bracketed references where possible.&lt;br /&gt;
&lt;br /&gt;
With the local, partial sums computed by each image and placed in their piece of the &#039;&#039;local_pi[*]&#039;&#039; co-array, a global sum is then&lt;br /&gt;
safely computed and written out only on image 1 with the help of a CAF critical region.  Within a critical region, only one image (process)&lt;br /&gt;
may pass at a time.  This ensures that &#039;&#039;global_pi[1]&#039;&#039; is accurately summed from each &#039;&#039;local_pi[my_image]&#039;&#039;, avoiding errors that&lt;br /&gt;
could be caused by simultaneous reads of a still partially summed &#039;&#039;global_pi[1]&#039;&#039; before each image-specific increment&lt;br /&gt;
has been written.  Here, we see the variable &#039;&#039;global_pi[1]&#039;&#039; with the square-bracket notation, a reminder that each image (process)&lt;br /&gt;
is writing its result into the memory space of image 1.  This is a remote write for all images except image 1.  &lt;br /&gt;
&lt;br /&gt;
The last section of the code synchronizes (&#039;&#039;sync all&#039;&#039;) the images to ensure all partial sums have been added, and then has&lt;br /&gt;
image 1 write out the global result.  Note that, as written here, only image 1 has the global result.  For a more detailed treatment of&lt;br /&gt;
the CoArray Fortran language extension, now part of the Fortran 2008 standard, please see the web references included above.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center supports CoArray Fortran on both its Cray XE6 system, SALK, (which has custom hardware and software support&lt;br /&gt;
for the UPC and CAF PGAS languages) and on its other systems where the Intel Cluster Studio provides a beta-level implementation&lt;br /&gt;
of CoArray Fortran layered on top of Intel&#039;s MPI library, an approach that offers CAF&#039;s coding simplicity, but no performance advantage&lt;br /&gt;
over MPI.&lt;br /&gt;
&lt;br /&gt;
Here, the process of compiling a CAF program both for Cray&#039;s CAF on SALK, and for Intel&#039;s CAF on the HPC Center&#039;s other systems is&lt;br /&gt;
described.  On the Cray, compiling a CAF program, such as the example above, simply requires adding an option to the Cray Fortran&lt;br /&gt;
compiler, as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
salk:&lt;br /&gt;
salk: module load PrgEnv-cray&lt;br /&gt;
salk:&lt;br /&gt;
salk: ftn -h caf -o int_PI.exe int_PI.f90&lt;br /&gt;
salk:&lt;br /&gt;
salk: ls&lt;br /&gt;
int_PI.exe&lt;br /&gt;
salk:&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the sequence above, first the Cray programming environment is loaded using the &#039;module&#039; command; then&lt;br /&gt;
the Cray Fortran compiler is invoked with the &#039;&#039;-h caf&#039;&#039; option to include the CAF features of the Fortran compiler.&lt;br /&gt;
The result is a CAF-enabled executable that can be run with Cray&#039;s parallel job initiation command &#039;aprun&#039;.   This&lt;br /&gt;
compilation was done in dynamic mode so that any number of processors (CAF images) can be selected at run time&lt;br /&gt;
using the &#039;&#039;-n ##&#039;&#039; option to Cray&#039;s &#039;aprun&#039; command. The required form of the &#039;aprun&#039; command is shown below&lt;br /&gt;
in the section on CAF program job submission using SLURM on the Cray.  &lt;br /&gt;
&lt;br /&gt;
To compile for a fixed number of processors (a static compile) or CAF images use the &#039;&#039;-X ##&#039;&#039; option on the Cray,&lt;br /&gt;
as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
salk:&lt;br /&gt;
salk: ftn -X 32 -h caf -o int_PI_32.exe int_PI.f90&lt;br /&gt;
salk:&lt;br /&gt;
salk: ls&lt;br /&gt;
int_PI_32.exe&lt;br /&gt;
salk:&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this example, the PI example program has been compiled for 32 processors or CAF images,&lt;br /&gt;
and therefore &#039;&#039;must&#039;&#039; be invoked with that many processors on the &#039;aprun&#039; command line:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
aprun -n 32 -N 16 ./int_PI_32.exe&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On the HPC Center&#039;s other systems, compilation is conceptually similar, but uses the Intel Fortran&lt;br /&gt;
compiler &#039;ifort&#039; and requires a CAF configuration file to be defined by the user.  Here is a typical configuration&lt;br /&gt;
file to compile statically for 16 CAF images followed by the compilation command.  This compilation&lt;br /&gt;
requests a distributed mode compilation in which distinct CAF images are not expected to be on the&lt;br /&gt;
same physical node.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
andy$cat cafconf.txt&lt;br /&gt;
-rr -envall -n 16 ./int_PI.exe&lt;br /&gt;
andy$&lt;br /&gt;
andy$ifort -o int_PI.exe -coarray=distributed -coarray-config-file=cafconf.txt int_PI.f90&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Intel CAF compiler is relatively new and has had limited testing on CUNY HPC systems. It also makes&lt;br /&gt;
use of Intel&#039;s MPI rather than the CUNY HPC Center default, OpenMPI, which means that Intel CAF jobs will&lt;br /&gt;
not be properly accounted for.  As such, we recommend that the Intel CAF compiler be used for development and&lt;br /&gt;
testing only, and that production CAF codes be run on SALK using Cray&#039;s CAF compiler.  An upgrade to&lt;br /&gt;
the Intel Compiler Suite is planned for the near future, and this should improve the performance and functionality of &lt;br /&gt;
Intel&#039;s CAF compiler release.   Additional documentation on using Intel CoArray Fortran is available here.&lt;br /&gt;
&lt;br /&gt;
=== Submitting CoArray Fortran Parallel Programs Using SLURM ===&lt;br /&gt;
Finally, here are two SLURM scripts that will run the above CAF executable.  First, one for the Cray XE6 system, SALK:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SLURM -q production&lt;br /&gt;
#SLURM -N CAF_example&lt;br /&gt;
#SLURM -l select=64:ncpus=1:mem=2000mb&lt;br /&gt;
#SLURM -l place=free&lt;br /&gt;
#SLURM -o int_PI.out&lt;br /&gt;
#SLURM -e int_PI.err&lt;br /&gt;
#SLURM -V&lt;br /&gt;
&lt;br /&gt;
cd $SLURM_O_WORKDIR&lt;br /&gt;
&lt;br /&gt;
aprun -n 64 -N 16 ./int_PI.exe&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Above, the dynamically compiled executable is run on 64 Cray XE6 cores of SALK (-n 64) with 16 cores&lt;br /&gt;
packed to a physical node (-N 16).  More detail is presented below on SLURM job submission to the Cray&lt;br /&gt;
and on the use of the Cray&#039;s &#039;aprun&#039; command.  On the Cray, &#039;man aprun&#039; provides an important and&lt;br /&gt;
detailed account of the &#039;aprun&#039; command-line options and their function. One cannot fully understand&lt;br /&gt;
job control and submission on the Cray (SALK) without understanding the &#039;aprun&#039; command.&lt;br /&gt;
&lt;br /&gt;
A SLURM script for the example code compiled dynamically (or statically) for 16 processors with the Intel&lt;br /&gt;
compiler (ifort) for execution on one of the HPC Center&#039;s more traditional HPC clusters looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SLURM -q production&lt;br /&gt;
#SLURM -N CAF_example&lt;br /&gt;
#SLURM -l select=16:ncpus=1:mem=1920mb&lt;br /&gt;
#SLURM -l place=scatter&lt;br /&gt;
#SLURM -V&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
echo -n &amp;quot;The primary compute node hostname is: &amp;quot;&lt;br /&gt;
hostname&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
echo -n &amp;quot;The location of the SLURM nodefile is: &amp;quot;&lt;br /&gt;
echo $SLURM_NODEFILE&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
echo &amp;quot;The contents of the SLURM nodefile are: &amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
cat  $SLURM_NODEFILE&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
NCNT=`uniq $SLURM_NODEFILE | wc -l - | cut -d &#039; &#039; -f 1`&lt;br /&gt;
echo -n &amp;quot;The node count determined from the nodefile is: &amp;quot;&lt;br /&gt;
echo $NCNT&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Change to working directory&lt;br /&gt;
cd $SLURM_O_WORKDIR&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;You are using the following &#039;mpiexec&#039; and &#039;mpdboot&#039; commands: &amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
type mpiexec&lt;br /&gt;
type mpdboot&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Starting the Intel &#039;mpdboot&#039; daemon on $NCNT nodes ... &amp;quot;&lt;br /&gt;
mpdboot -n $NCNT --verbose --file=$SLURM_NODEFILE -r ssh&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
mpdtrace&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Starting an Intel CAF job requesting 16 cores ... &amp;quot;&lt;br /&gt;
&lt;br /&gt;
./int_PI.exe&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;CAF job finished ... &amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Making sure all mpd daemons are killed ... &amp;quot;&lt;br /&gt;
mpdallexit&lt;br /&gt;
echo &amp;quot;SLURM CAF script finished ... &amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here, the SLURM script requests 16 processors (CAF images).  It simply names the executable&lt;br /&gt;
itself to set up the Intel CAF runtime environment, engage the 16 processors, and initiate&lt;br /&gt;
execution.  This script is more elaborate because it includes the procedure for setting up &lt;br /&gt;
and tearing down the Intel MPI environment on the nodes that SLURM has selected to run&lt;br /&gt;
the job.&lt;br /&gt;
&lt;br /&gt;
=== An Example Unified Parallel C (UPC) Code ===&lt;br /&gt;
The following simple example program includes the essential features of the Unified Parallel C (UPC) &lt;br /&gt;
programming model, including shared (globally distributed) variable declaration and blocking, one-&lt;br /&gt;
sided data transfer between UPC&#039;s memory-space distinct threads via simple assignment statements, and&lt;br /&gt;
synchronization barriers.  No attempt is made here to tutor the reader in all of the features of the UPC;&lt;br /&gt;
rather the goal is to give the reader a feel for basic UPC extensions to the C programming language. &lt;br /&gt;
A tutorial on the UPC programming model can be found here [http://upc.gwu.edu/tutorials/UPC-SC05.pdf],&lt;br /&gt;
a user guide here [http://upc.gwu.edu/downloads/Manual-1.2.pdf], and a more formal description of&lt;br /&gt;
the language specifications here [http://upc.lbl.gov/docs/user/upc_spec_1.2.pdf].  Cray also has its own &lt;br /&gt;
documentation on UPC [http://docs.cray.com/books/S-2179-50/html-S-2179-50/z1035483822pvl.html]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
// &lt;br /&gt;
//  Computing PI by Numerical Integration in UPC&lt;br /&gt;
//&lt;br /&gt;
&lt;br /&gt;
// Select memory consistency model (default).&lt;br /&gt;
&lt;br /&gt;
#include&amp;lt;upc_relaxed.h&amp;gt; &lt;br /&gt;
&lt;br /&gt;
#include&amp;lt;math.h&amp;gt;&lt;br /&gt;
#include&amp;lt;stdio.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
// Define integrand with a macro and set result accuracy&lt;br /&gt;
&lt;br /&gt;
#define f(x) (1.0/(1.0+x*x))&lt;br /&gt;
#define N 4096&lt;br /&gt;
&lt;br /&gt;
// Declare UPC shared scalar, shared vector array, and UPC lock variable.&lt;br /&gt;
&lt;br /&gt;
shared float global_pi = 0.0;&lt;br /&gt;
shared [1] float local_pi[THREADS];&lt;br /&gt;
upc_lock_t *lock;&lt;br /&gt;
&lt;br /&gt;
void main(void)&lt;br /&gt;
{&lt;br /&gt;
   int i;&lt;br /&gt;
&lt;br /&gt;
   // Allocate a single, globally-shared UPC lock. This &lt;br /&gt;
   // function is collective, initial state is unlocked.&lt;br /&gt;
&lt;br /&gt;
   lock = upc_all_lock_alloc();&lt;br /&gt;
&lt;br /&gt;
   // Each UPC thread initializes its local piece of the&lt;br /&gt;
   // shared array.&lt;br /&gt;
&lt;br /&gt;
   local_pi[MYTHREAD] = 0.0;&lt;br /&gt;
&lt;br /&gt;
   // Distribute work across threads using local part of shared&lt;br /&gt;
   // array &#039;local_pi&#039; to compute PI partial sum on thread (processor)&lt;br /&gt;
&lt;br /&gt;
   for(i = 0; i &amp;lt;  N; i++) {&lt;br /&gt;
       if(MYTHREAD == i%THREADS) local_pi[MYTHREAD] += (float) f((.5 + i)/(N));&lt;br /&gt;
   } &lt;br /&gt;
&lt;br /&gt;
   local_pi[MYTHREAD] *= (float) (4.0 / N);&lt;br /&gt;
&lt;br /&gt;
   // Compile local, partial sums to single global sum.&lt;br /&gt;
   // Use locks to prevent read-before-write race conditions.&lt;br /&gt;
&lt;br /&gt;
   upc_lock(lock);&lt;br /&gt;
   global_pi += local_pi[MYTHREAD];&lt;br /&gt;
   upc_unlock(lock);&lt;br /&gt;
&lt;br /&gt;
   // Ensure all partial sums have been added with UPC barrier.&lt;br /&gt;
&lt;br /&gt;
   upc_barrier;&lt;br /&gt;
&lt;br /&gt;
   // UPC thread 0 prints the results and frees the lock.&lt;br /&gt;
&lt;br /&gt;
   if(MYTHREAD==0) printf(&amp;quot;PI = %f\n&amp;quot;,global_pi);&lt;br /&gt;
   if(MYTHREAD==0) upc_lock_free(lock);&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This sample code computes PI in parallel using a numerical integration scheme.&lt;br /&gt;
Taking the key UPC-specific features present in this example in order, first we find the&lt;br /&gt;
declaration of the memory consistency model to be used in this code.  The default choice&lt;br /&gt;
is &#039;&#039;relaxed&#039;&#039;, which is selected explicitly here.  The relaxed choice places the burden of &lt;br /&gt;
ensuring that dependent shared memory operations in the code are properly&lt;br /&gt;
ordered on the programmer, through the use of barriers, fences, and locks.  This code&lt;br /&gt;
includes explicit locks and barriers to ensure that memory operations are complete and &lt;br /&gt;
processors have been synchronized.&lt;br /&gt;
&lt;br /&gt;
Next, three declarations outside the main body of the application demonstrate the&lt;br /&gt;
use of UPC&#039;s &#039;&#039;shared&#039;&#039; type.  First, a scalar shared variable &#039;&#039;global_pi&#039;&#039; is declared.&lt;br /&gt;
This variable can be read from and written to by any of the UPC threads (processors)&lt;br /&gt;
allocated to the application by the runtime environment when it is executed. It will hold&lt;br /&gt;
the final result of the calculation of PI in this example.  Shared scalar variables are&lt;br /&gt;
singular and always reside in the shared memory of THREAD 0 in UPC.&lt;br /&gt;
&lt;br /&gt;
Next, a shared one dimensional array &#039;&#039;local_pi&#039;&#039; with a block size of one (1) and a size&lt;br /&gt;
of THREADS is declared. The THREADS macro is always set to the number of processors&lt;br /&gt;
(UPC threads) requested by the job at runtime. All elements in this shared array are accessible&lt;br /&gt;
by all THREADS allocated to the job. The block size of one means that array elements are&lt;br /&gt;
distributed, one-per-thread, across the logically Partitioned Global Address Space (PGAS)&lt;br /&gt;
of this parallel application. One is the default block size for shared arrays, but other&lt;br /&gt;
sizes are possible.&lt;br /&gt;
&lt;br /&gt;
Finally, a pointer to a special shared scalar variable to be used as a lock is declared.&lt;br /&gt;
Because UPC defines both shared and private memory spaces for each program image&lt;br /&gt;
or THREAD, it must support four classes of pointers:  private pointers to private, &lt;br /&gt;
private pointers to shared, shared pointers to private, and shared pointers to shared.&lt;br /&gt;
The pointer declared here is a shared pointer to shared which makes the lock&#039;s memory&lt;br /&gt;
location available to all threads.  In the body of the code, the lock&#039;s memory is allocated&lt;br /&gt;
and placed in the unlocked state with the call to &#039;&#039;upc_all_lock_alloc()&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Next, each thread initializes its piece of the shared array &#039;&#039;local_pi&#039;&#039; to zero with the&lt;br /&gt;
help of the MYTHREAD macro, which contains the thread identifier of the particular&lt;br /&gt;
thread that does the assignment.  In this case, each UPC thread initializes only the &lt;br /&gt;
part of the shared array that is in its portion of shared PGAS memory.  The standard C&lt;br /&gt;
for-loop that follows divides the work of integration among the different UPC threads&lt;br /&gt;
so that each thread works only on its local portion of the shared array &#039;&#039;local_pi&#039;&#039;.  UPC&lt;br /&gt;
provides a work-sharing loop construct &#039;&#039;upc_forall&#039;&#039; that accomplishes the same&lt;br /&gt;
thing implicitly. &lt;br /&gt;
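As a sketch of that implicit form (UPC code, so it requires a UPC compiler such as Cray's or Berkeley's to build), the explicit modulo test in the example's for-loop can be replaced by &#039;&#039;upc_forall&#039;&#039; with an integer affinity expression:

```
// Equivalent work-sharing loop: the fourth clause is the affinity
// expression; iteration i runs on the thread with affinity i % THREADS.
upc_forall(i = 0; i < N; i++; i)
    local_pi[MYTHREAD] += (float) f((.5 + i)/(N));
```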
&lt;br /&gt;
Processor-local (UPC thread) partial sums are then summed globally and in a memory&lt;br /&gt;
consistent fashion with the help of the UPC lock function &#039;&#039;upc_lock()&#039;&#039; and &#039;&#039;upc_unlock()&#039;&#039;.&lt;br /&gt;
Without the explicit locking code here, there would be nothing to prevent two UPC&lt;br /&gt;
threads from reading the most current value in memory before it had been updated &lt;br /&gt;
with a latest partial sum.  This would produce an incorrect under-summing of the&lt;br /&gt;
result.  Next, a &#039;&#039;upc_barrier&#039;&#039; ensures all the summing is completed before the result&lt;br /&gt;
is printed and the lock&#039;s memory is freed.  &lt;br /&gt;
&lt;br /&gt;
This example includes some of the more important UPC PGAS-parallel extensions to&lt;br /&gt;
the C programming language, but a complete review of the UPC parallel extension to&lt;br /&gt;
C is provided in the web documentation referenced above.&lt;br /&gt;
&lt;br /&gt;
As suggested above, the CUNY HPC Center supports UPC on both its Cray XE6 system,&lt;br /&gt;
SALK, (which has custom hardware and software support for the UPC and CAF PGAS&lt;br /&gt;
languages) and on its other systems where Berkeley UPC is installed and uses the&lt;br /&gt;
GASNET library to support the PGAS memory abstraction on top of a number of standard&lt;br /&gt;
underlying cluster interconnects.  At the HPC Center this would include Ethernet and/or&lt;br /&gt;
InfiniBand depending on the CUNY HPC Center cluster system being used.&lt;br /&gt;
&lt;br /&gt;
Here, the process of compiling a UPC program both for Cray&#039;s UPC on SALK, and for&lt;br /&gt;
Berkeley UPC on the HPC Center&#039;s other systems is described.  On the Cray, compiling a&lt;br /&gt;
UPC program, such as the example above, simply requires adding an option to the Cray&lt;br /&gt;
C compiler, as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
salk:&lt;br /&gt;
salk: module load PrgEnv-cray&lt;br /&gt;
salk:&lt;br /&gt;
salk: cc -h upc -o int_PI.exe int_PI.c&lt;br /&gt;
salk:&lt;br /&gt;
salk: ls&lt;br /&gt;
int_PI.exe&lt;br /&gt;
salk:&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
First, the Cray programming environment is loaded using the &#039;module&#039; command; then&lt;br /&gt;
the Cray compiler is invoked with the &#039;&#039;-h upc&#039;&#039; option to include the UPC elements of the&lt;br /&gt;
compiler.  The result is an executable that can be run with Cray&#039;s parallel job initiation &lt;br /&gt;
command &#039;aprun&#039;.   This compilation was done in dynamic mode so that any number of&lt;br /&gt;
processors (UPC threads) can be selected at run time using the &#039;&#039;-n ##&#039;&#039; option to &#039;aprun&#039;. &lt;br /&gt;
The required form of the &#039;aprun&#039; line is shown below in the section on UPC program SLURM&lt;br /&gt;
job submission.  &lt;br /&gt;
&lt;br /&gt;
To compile for a fixed number of processors, or UPC threads (a static compile), use&lt;br /&gt;
the &#039;&#039;-X ##&#039;&#039; option on the Cray, as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
salk:&lt;br /&gt;
salk: cc -X 32 -h upc -o int_PI_32.exe int_PI.c&lt;br /&gt;
salk:&lt;br /&gt;
salk: ls&lt;br /&gt;
int_PI_32.exe&lt;br /&gt;
salk:&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this example, the PI example program has been compiled for 32 processors or UPC threads,&lt;br /&gt;
and therefore must be invoked with that many processors on the &#039;aprun&#039; command line:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
aprun -n 32 -N 16 ./int_PI_32.exe&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On the HPC Center&#039;s other systems, compilation is conceptually similar, but uses the Berkeley&lt;br /&gt;
UPC compiler driver &#039;upcc&#039;.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
andy:&lt;br /&gt;
andy: upcc  -o int_PI.exe int_PI.c&lt;br /&gt;
andy:&lt;br /&gt;
andy: ls&lt;br /&gt;
int_PI.exe&lt;br /&gt;
andy:&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Similarly, the &#039;upcc&#039; compiler driver from Berkeley allows for static compilations using&lt;br /&gt;
its &#039;&#039;-T ##&#039;&#039; option:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
andy:&lt;br /&gt;
andy: upcc -T 32  -o int_PI_32.exe int_PI.c&lt;br /&gt;
andy:&lt;br /&gt;
andy: ls&lt;br /&gt;
int_PI_32.exe&lt;br /&gt;
andy:&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Berkeley UPC compiler driver has a number of other useful options that are described&lt;br /&gt;
in its &#039;man&#039; page.  In particular, the &#039;&#039;-network=&#039;&#039; option will target the executable for the&lt;br /&gt;
GASNET communication &#039;&#039;conduit&#039;&#039; of the user&#039;s choosing on systems that have multiple&lt;br /&gt;
interconnects (Ethernet and InfiniBand, for instance) or target the default version of MPI&lt;br /&gt;
as the communication layer.  Type &#039;man upcc&#039; for details. &lt;br /&gt;
&lt;br /&gt;
In general, users can expect better performance from Cray&#039;s UPC compiler on SALK, but&lt;br /&gt;
having UPC on the HPC Center&#039;s traditional cluster architectures provides another location&lt;br /&gt;
for development and supports the wider use of UPC as an alternative to MPI.  In theory,&lt;br /&gt;
well-written UPC code should perform as well as MPI on a standard cluster, while reducing&lt;br /&gt;
the number of lines of code to achieve that performance.  In practice, this is still not always&lt;br /&gt;
the case; more development and hardware support is still needed to get the best performance&lt;br /&gt;
from PGAS languages on commodity cluster environments.&lt;br /&gt;
&lt;br /&gt;
=== Submitting UPC Parallel Programs Using SLURM ===&lt;br /&gt;
&lt;br /&gt;
Finally, here are two SLURM scripts that will run the above UPC executable.  First, one for the Cray XE6&lt;br /&gt;
system, SALK:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SLURM -q production&lt;br /&gt;
#SLURM -N UPC_example&lt;br /&gt;
#SLURM -l select=64:ncpus=1:mem=2000mb&lt;br /&gt;
#SLURM -l place=free&lt;br /&gt;
#SLURM -o int_PI.out&lt;br /&gt;
#SLURM -e int_PI.err&lt;br /&gt;
#SLURM -V&lt;br /&gt;
&lt;br /&gt;
cd $SLURM_O_WORKDIR&lt;br /&gt;
&lt;br /&gt;
aprun -n 64 -N 16 ./int_PI.exe&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here the dynamically compiled executable is run on 64 Cray XE6 cores (-n 64), 16 cores&lt;br /&gt;
packed to a physical node (-N 16).  More detail is presented below on SLURM job submission on&lt;br /&gt;
the Cray and on the use of the Cray&#039;s &#039;aprun&#039; command.  On the Cray, &#039;man aprun&#039; provides&lt;br /&gt;
an important and detailed account of the &#039;aprun&#039; command-line options and their function.&lt;br /&gt;
One cannot fully understand job control on the Cray (SALK) without understanding &#039;aprun&#039;.&lt;br /&gt;
&lt;br /&gt;
A similar SLURM script for the example code compiled dynamically (or statically) for 32 processors&lt;br /&gt;
with the Berkeley UPC compiler (upcc) for execution on one of the HPC Center&#039;s more traditional&lt;br /&gt;
HPC cluster looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SLURM -q production&lt;br /&gt;
#SLURM -N UPC_example&lt;br /&gt;
#SLURM -l select=32:ncpus=1:mem=1920mb&lt;br /&gt;
#SLURM -l place=free&lt;br /&gt;
#SLURM -o int_PI.out&lt;br /&gt;
#SLURM -e int_PI.err&lt;br /&gt;
#SLURM -V&lt;br /&gt;
&lt;br /&gt;
cd $SLURM_O_WORKDIR&lt;br /&gt;
&lt;br /&gt;
upcrun -n 32 ./int_PI.exe&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here, the SLURM script requests 32 processors (UPC threads).  It uses the &#039;upcrun&#039; command to&lt;br /&gt;
setup the Berkeley UPC runtime environment, engage the 32 processors, and initiate execution.&lt;br /&gt;
Please type &#039;man upcrun&#039; for details on the &#039;upcrun&#039; command and its options.&lt;br /&gt;
&lt;br /&gt;
== [[Available Mathematical Libraries]] ==&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039; FFTW Scientific Library &#039;&#039;&#039; &lt;br /&gt;
*&#039;&#039;&#039; GNU Scientific Library &#039;&#039;&#039;&lt;br /&gt;
*&#039;&#039;&#039; MKL &#039;&#039;&#039; &lt;br /&gt;
*&#039;&#039;&#039; IMSL &#039;&#039;&#039;&lt;br /&gt;
** Fortran Example&lt;br /&gt;
** C Example&lt;br /&gt;
&lt;br /&gt;
=== FFTW Scientific Library === &lt;br /&gt;
FFTW is a C subroutine library for computing the Discrete Fourier Transform (DFT) in one or&lt;br /&gt;
more dimensions, of arbitrary input size, and of both real and complex data (as well as of&lt;br /&gt;
even/odd data, i.e. the discrete cosine/sine transforms or DCT/DST).&lt;br /&gt;
&lt;br /&gt;
The library is described in detail at the FFTW home page at http://www.fftw.org.  The CUNY&lt;br /&gt;
HPC Center has installed FFTW versions 2.1.5 (older), 3.2.2 (default), and 3.3.0 (recent release)&lt;br /&gt;
on ANDY.  All versions were built in both 32-bit and 64-bit floating point formats using the&lt;br /&gt;
latest Intel 12.0 release of their compilers.  In addition, versions 2.1.5 and 3.3.0 support an MPI-parallel&lt;br /&gt;
version of the library.  The default version at the CUNY HPC Center is version 3.2.2 (64-bit),&lt;br /&gt;
located in /share/apps/fftw/default/*.&lt;br /&gt;
&lt;br /&gt;
The reason for the extra versions is that over the course of FFTW&#039;s development some changes were&lt;br /&gt;
made to the API for the MPI-parallel library.  Version 2.1.5 supports the older MPI-parallel API,&lt;br /&gt;
and the recently released version 3.3.0 supports a newer MPI-parallel API.  NOTE: The default&lt;br /&gt;
version (3.2.2) does NOT include an MPI-parallel version, as MPI support was skipped in that&lt;br /&gt;
generation of the library.  A threads version of each library was also built.&lt;br /&gt;
&lt;br /&gt;
Please refer to the on-line documentation at the FFTW website for details on using the library&lt;br /&gt;
(whatever the version).  With the calls properly included in your code, you can link in the default&lt;br /&gt;
version at compile and link time with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
icc -o my_fftw.exe my_fftw.c -L/share/apps/fftw/default/lib -lfftw3 &lt;br /&gt;
&lt;br /&gt;
(pgcc or gcc would be used in the same way)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the non-default versions substitute the version directory for the string &#039;default&#039; above. For&lt;br /&gt;
example, for the new 3.3 release in 32-bit use:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
icc -o my_fftw.exe my_fftw.c -L/share/apps/fftw/3.3_32bit/lib -lfftw3f&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For an MPI-parallel, 64-bit version of 3.3 use:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mpicc -o my__mpi_fftw.exe my_mpi_fftw.c -L/share/apps/fftw/3.3_64bit/lib -lfftw3_mpi&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The include files for each release are in the &#039;include&#039; directory alongside the version&#039;s lib&lt;br /&gt;
directory.  The names of all available libraries for each release can be found by simply &lt;br /&gt;
listing the contents of the appropriate version&#039;s lib directory.  Do this to find the names of&lt;br /&gt;
the threads version of each library, for instance.&lt;br /&gt;
&lt;br /&gt;
=== GNU Scientific Library ===&lt;br /&gt;
The GNU Scientific Library (GSL) is a numerical library for C and C++ programmers. It is free software under the GNU General Public License.&lt;br /&gt;
&lt;br /&gt;
The library provides a wide range of mathematical routines such as random number generators, special functions and least-squares fitting. There are over 1000 functions in total with an extensive test suite.&lt;br /&gt;
&lt;br /&gt;
Here is an example of code that uses GSL routines:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;gsl/gsl_sf_bessel.h&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
int main(void)&lt;br /&gt;
{&lt;br /&gt;
  double x = 5.0;&lt;br /&gt;
  double y = gsl_sf_bessel_J0(x);&lt;br /&gt;
  printf(&amp;quot;J0(%g) = %.18e\n&amp;quot;, x, y);&lt;br /&gt;
  return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The example program has to be linked to the GSL library upon compilation:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gcc $(/share/apps/gsl/default/bin/gsl-config --cflags) test.c $(/share/apps/gsl/default/bin/gsl-config --libs)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The output is shown below, and should be correct to double-precision accuracy:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
J0(5) = -1.775967713143382642e-01&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Complete GNU Scientific Library documentation may be found on the official website of the project: http://www.gnu.org/software/gsl/&lt;br /&gt;
&lt;br /&gt;
=== MKL === &lt;br /&gt;
Documentation to be added.&lt;br /&gt;
&lt;br /&gt;
=== IMSL === &lt;br /&gt;
&lt;br /&gt;
IMSL (International Mathematics and Statistics Library) is a commercial collection of numerical analysis software libraries implemented in the C, Java, C#.NET, and Fortran programming languages by Visual Numerics. &lt;br /&gt;
&lt;br /&gt;
C and Fortran implementations of IMSL are installed on the BOB cluster under &amp;lt;pre&amp;gt;/share/apps/imsl/cnl701 &amp;lt;/pre&amp;gt; and &amp;lt;pre&amp;gt;/share/apps/imsl/fnl600&amp;lt;/pre&amp;gt; respectively. &lt;br /&gt;
&lt;br /&gt;
==== Fortran Example ====&lt;br /&gt;
Here is an example of a Fortran program that uses IMSL routines:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
! Use files&lt;br /&gt;
 &lt;br /&gt;
       use rand_gen_int&lt;br /&gt;
       use show_int&lt;br /&gt;
 &lt;br /&gt;
!  Declarations&lt;br /&gt;
 &lt;br /&gt;
       real (kind(1.e0)), parameter:: zero=0.e0&lt;br /&gt;
       real (kind(1.e0)) x(5)&lt;br /&gt;
       type (s_options) :: iopti(2)=s_options(0,zero)&lt;br /&gt;
       character VERSION*48, LICENSE*48, VERML*48&lt;br /&gt;
       external VERML&lt;br /&gt;
 &lt;br /&gt;
!  Start the random number generator with a known seed.&lt;br /&gt;
       iopti(1) = s_options(s_rand_gen_generator_seed,zero)&lt;br /&gt;
       iopti(2) = s_options(123,zero)&lt;br /&gt;
       call rand_gen(x, iopt=iopti)&lt;br /&gt;
 &lt;br /&gt;
!     Verify the version of the library we are running&lt;br /&gt;
!     by retrieving the version number via verml().&lt;br /&gt;
!     Verify correct installation of the license number&lt;br /&gt;
!     by retrieving the customer number via verml().&lt;br /&gt;
!&lt;br /&gt;
      VERSION = VERML(1)&lt;br /&gt;
      LICENSE = VERML(4)&lt;br /&gt;
      WRITE(*,*) &#039;Library version:  &#039;, VERSION&lt;br /&gt;
      WRITE(*,*) &#039;Customer number:  &#039;, LICENSE&lt;br /&gt;
&lt;br /&gt;
!  Get the random numbers&lt;br /&gt;
       call rand_gen(x)&lt;br /&gt;
 &lt;br /&gt;
!  Output the random numbers&lt;br /&gt;
       call show(x,text=&#039;                              X&#039;)&lt;br /&gt;
&lt;br /&gt;
! Generate error&lt;br /&gt;
       iopti(1) = s_options(15,zero)&lt;br /&gt;
       call rand_gen(x, iopt=iopti)&lt;br /&gt;
 &lt;br /&gt;
       end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To compile this example use&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 . /share/apps/imsl/imsl/fnl600/rdhin111e64/bin/fnlsetup.sh&lt;br /&gt;
&lt;br /&gt;
ifort -openmp -fp-model precise -I/share/apps/imsl/imsl/fnl600/rdhin111e64/include -o imslmp imslmp.f90 -L/share/apps/imsl/imsl/fnl600/rdhin111e64/lib -Bdynamic -limsl -limslsuperlu -limslscalar -limslblas -limslmpistub -limf -Xlinker -rpath -Xlinker /share/apps/imsl/imsl/fnl600/rdhin111e64/lib&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To run it in batch mode, use the standard submission procedure described in section  [[#Program Compilation and Job Submission| Program Compilation and Job Submission]]. In case of a successful run, the following output will be generated:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 Library version:  IMSL Fortran Numerical Library, Version 6.0     &lt;br /&gt;
 Customer number:  702815                                          &lt;br /&gt;
                               X&lt;br /&gt;
     1 -    5   9.320E-01  7.865E-01  5.004E-01  5.535E-01  9.672E-01&lt;br /&gt;
&lt;br /&gt;
 *** TERMINAL ERROR 526 from s_error_post.  s_/rand_gen/ derived type option&lt;br /&gt;
 ***          array &#039;iopt&#039; has undefined option (15) at entry (1).&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== C Example ====&lt;br /&gt;
A more complicated example in C: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;imsl.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(void)&lt;br /&gt;
{&lt;br /&gt;
    int         n = 3;&lt;br /&gt;
    float       *x;&lt;br /&gt;
    static float        a[] = { 1.0, 3.0, 3.0,&lt;br /&gt;
                                1.0, 3.0, 4.0,&lt;br /&gt;
                                1.0, 4.0, 3.0 };&lt;br /&gt;
    static float        b[] = { 1.0, 4.0, -1.0 };&lt;br /&gt;
&lt;br /&gt;
    /*&lt;br /&gt;
     * Verify the version of the library we are running by&lt;br /&gt;
     * retrieving the version number via imsl_version().&lt;br /&gt;
     * Verify correct installation of the error message file&lt;br /&gt;
     * by retrieving the customer number via imsl_version().&lt;br /&gt;
     */&lt;br /&gt;
    char        *library_version = imsl_version(IMSL_LIBRARY_VERSION);&lt;br /&gt;
    char        *customer_number = imsl_version(IMSL_LICENSE_NUMBER);&lt;br /&gt;
&lt;br /&gt;
    printf(&amp;quot;Library version:  %s\n&amp;quot;, library_version);&lt;br /&gt;
    printf(&amp;quot;Customer number:  %s\n&amp;quot;, customer_number);&lt;br /&gt;
&lt;br /&gt;
                                /* Solve Ax = b for x */&lt;br /&gt;
    x = imsl_f_lin_sol_gen(n, a, b, 0);&lt;br /&gt;
                                /* Print x */&lt;br /&gt;
    imsl_f_write_matrix(&amp;quot;Solution, x of Ax = b&amp;quot;, 1, n, x, 0);&lt;br /&gt;
                               /* Generate Error to access error &lt;br /&gt;
                                  message file */&lt;br /&gt;
    n =-10;&lt;br /&gt;
&lt;br /&gt;
    printf (&amp;quot;\nThe next call will generate an error \n&amp;quot;);&lt;br /&gt;
    x = imsl_f_lin_sol_gen(n, a, b, 0);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To compile this example use&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
. /share/apps/imsl/imsl/cnl701/rdhsg111e64/bin/cnlsetup.sh&lt;br /&gt;
&lt;br /&gt;
icc -ansi -I/share/apps/imsl/imsl/cnl701/rdhsg111e64/include -o cmath cmath.c -L/share/apps/imsl/imsl/cnl701/rdhsg111e64/lib -L/share/apps/intel/composerxe-2011.0.084/mkl/lib/em64t -limslcmath -limslcstat -limsllapack -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lm -lgfortran -i_dynamic -Xlinker -rpath -Xlinker /share/apps/imsl/imsl/cnl701/rdhsg111e64/lib -Xlinker -rpath -Xlinker /share/apps/intel/composerxe-2011.0.084/mkl/lib/em64t&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To run the binary in batch mode, use the standard submission procedure described in section  [[#Program Compilation and Job Submission| Program Compilation and Job Submission]]. In case of a successful run, the following output will be generated:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Library version:  IMSL C/Math/Library Version 7.0.1&lt;br /&gt;
Customer number:  702815&lt;br /&gt;
 &lt;br /&gt;
       Solution, x of Ax = b&lt;br /&gt;
         1           2           3&lt;br /&gt;
        -2          -2           3&lt;br /&gt;
&lt;br /&gt;
The next call will generate an error &lt;br /&gt;
&lt;br /&gt;
*** TERMINAL Error from imsl_f_lin_sol_gen.  The order of the matrix must be&lt;br /&gt;
***          positive while &amp;quot;n&amp;quot; = -10 is given.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Using_Modules_to_Run_Your_Applications&amp;diff=138</id>
		<title>Using Modules to Run Your Applications</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Using_Modules_to_Run_Your_Applications&amp;diff=138"/>
		<updated>2022-10-27T19:54:03Z</updated>

		<summary type="html">&lt;p&gt;James: Text replacement - &amp;quot;[pP][bB][sS]&amp;quot; to &amp;quot;SLURM&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
Modules is a software package that provides for the fast and convenient management of the components of&lt;br /&gt;
a user&#039;s environment via &#039;&#039;&#039;modulefiles&#039;&#039;&#039;.  When executed by the module command, each module file fully&lt;br /&gt;
configures the environment for its associated application or application group. The modules configuration language also allows for the management of conflicts and dependencies between application environments. The modules software allows users to load (and unload and reload) an application and/or system environment&lt;br /&gt;
that is specific to their needs and avoids the need to set and manage a large, one-size-fits-all, generic environment&lt;br /&gt;
for everyone at login. Modules is the default approach to managing the user applications environment.  The CUNY HPC Center system BOB, currently used almost entirely for Gaussian jobs,&lt;br /&gt;
will NOT be reconfigured with the modules software.  Module version 3.2.9 is the default on the CUNY HPC Center systems.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Modules, Learning by Example &#039;&#039;&#039;&lt;br /&gt;
** Example 1,  Basic Non-Cray System &lt;br /&gt;
** Example 2,  Less Basic From SALK (Cray System)&lt;br /&gt;
&lt;br /&gt;
Using the module package users can easily set a collection of environmental variables that are specific to their&lt;br /&gt;
compilation, parallel programming, and/or application requirements on the HPC Center&#039;s systems. The modules system&lt;br /&gt;
also makes it convenient to advance or regress compiler, parallel programming, or applications versions when defaults&lt;br /&gt;
are found to have bugs or performance issues.  Whatever the task, the modules package can adjust the environment&lt;br /&gt;
in an orderly way, altering or setting such environment variables as PATH, MANPATH, LD_LIBRARY_PATH, etc.,&lt;br /&gt;
and providing some basic descriptive information about the application version being loaded and the purpose of the&lt;br /&gt;
module file through the module help facility.   &lt;br /&gt;
&lt;br /&gt;
In addition to each application-specific modulefile, the module package functions through a collection of&lt;br /&gt;
sub-commands given after the initial module command itself, as in &amp;quot;module list&amp;quot; for instance.  All of these module sub-&lt;br /&gt;
commands are described in detail in the module man page (&amp;quot;man module&amp;quot;), but a list of some of the more important&lt;br /&gt;
and commonly used sub-commands is provided here:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Module sub-commands:&lt;br /&gt;
&lt;br /&gt;
list&lt;br /&gt;
load&lt;br /&gt;
unload&lt;br /&gt;
switch&lt;br /&gt;
avail&lt;br /&gt;
show&lt;br /&gt;
help&lt;br /&gt;
purge&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
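As a rough sketch of what these sub-commands do behind the scenes, loading and unloading a modulefile&lt;br /&gt;
amounts to editing environment variables such as PATH. The toy shell lines below mimic that for a single&lt;br /&gt;
hypothetical application prefix (the real modules package is Tcl-based and far richer than this):&lt;br /&gt;

```shell
# Toy sketch of what "module load" / "module unload" do to PATH.
# APP_DIR is a hypothetical install prefix, not a real path on our systems;
# a scratch variable is used here instead of the live PATH.
APP_DIR=/share/apps/demo/1.0/bin
DEMO_PATH=/usr/local/bin:/usr/bin:/bin

# "module load demo": prepend the application's bin directory
DEMO_PATH="$APP_DIR:$DEMO_PATH"
echo "$DEMO_PATH"

# "module unload demo": strip that directory back out
DEMO_PATH="${DEMO_PATH#"$APP_DIR:"}"
echo "$DEMO_PATH"
```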
&lt;br /&gt;
&lt;br /&gt;
==Modules, Learning by Example ==&lt;br /&gt;
The best way to explain how to use the modules package and its sub-commands is to consider some simple&lt;br /&gt;
examples of typical workflows that involve modules.  Here are two examples.  Again, for a more complete&lt;br /&gt;
description of the modules package please refer to &amp;quot;man module&amp;quot;.&lt;br /&gt;
&amp;lt;div class=&amp;quot; mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
=== Example 1,  Basic Non-Cray System ===&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;Consider the unmodified PATH variable right after login to one of the CUNY HPC Center systems.&lt;br /&gt;
Without any custom or local environmental path settings, it would look something like this with no&lt;br /&gt;
compiler, parallel programming model, or application-specific information in it:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
username@service0:~&amp;gt; echo $PATH | tr -s &#039;:&#039; &#039;\n&#039;&lt;br /&gt;
/scratch/username/bin&lt;br /&gt;
/usr/local/bin&lt;br /&gt;
/usr/bin&lt;br /&gt;
/bin&lt;br /&gt;
/usr/bin/X11&lt;br /&gt;
/usr/X11R6/bin&lt;br /&gt;
/usr/games&lt;br /&gt;
/opt/c3/bin&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that there appears to be no path to the application that we are interested in running, which in&lt;br /&gt;
this example is Wolfram&#039;s Mathematica.  Typing &amp;quot;which math&amp;quot; (&amp;quot;math&amp;quot; is the command-line name for Mathematica)&lt;br /&gt;
at the terminal yields:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
username@service0:~&amp;gt;  which math&lt;br /&gt;
which: no math in (/scratch/username/bin:/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/X11R6/bin:/usr/games:/opt/c3/bin)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Mathematica executable &amp;quot;math&amp;quot; is not found in the default PATH variable defined by the system at login. Modules can be&lt;br /&gt;
used to remedy this problem by adding the required path.  To check which module files (if any) are already loaded into&lt;br /&gt;
our environment, we can type the &amp;quot;module list&amp;quot; sub-command at the terminal prompt:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
username@service0:~&amp;gt; module list&lt;br /&gt;
No Modulefiles Currently Loaded.&lt;br /&gt;
username@service0:~&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
No modules are loaded, so the module file for Mathematica has not been loaded, and it is no surprise&lt;br /&gt;
that the Mathematica command-line executable &amp;quot;math&amp;quot; was not found.  The next question is whether the HPC Center&lt;br /&gt;
has installed Mathematica on this system and created a module file for it.  To find out, we use&lt;br /&gt;
the &amp;quot;module avail&amp;quot; sub-command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
username@service0:~&amp;gt; module avail&lt;br /&gt;
---------------------------- /share/apps/modules/default/modulefiles_UserApplications --------------------------------------&lt;br /&gt;
&lt;br /&gt;
adf/2012.01(default)         cesm/1.0.3                   hoomd/0.9.2(default)         ncar/5.2.0_NCL(default)      pgi/12.3(default)&lt;br /&gt;
auto3dem/4.02(default)       cesm/1.0.4(default)          intel/12.1.3.293(default)    nwchem/6.1.1(default)        phoenics/2009(default)&lt;br /&gt;
autodock/4.2.3(default)      cuda/5.0(default)            ls-dyna/6.0.0(default)       octopus/4.0.0(default)       r/2.14.1(default)&lt;br /&gt;
beagle/0.2(default)          gromacs/4.5.5_32bit          mathematica/8.0.4(default)   openmpi/1.5.5_intel(default) wrf/3.4.0(default)&lt;br /&gt;
best/2.2L(default)           gromacs/4.5.5_64bit(default) matlab/R2012a(default)       openmpi/1.5.5_pgi&lt;br /&gt;
&lt;br /&gt;
--------------------------------- /share/apps/modules/default/modulefiles_System -------------------------------------------&lt;br /&gt;
&lt;br /&gt;
module-info   modules       version/3.2.9&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The listing shows all available module files on this system, both those that are user-application related&lt;br /&gt;
and those that are more system related.  As shown in the output, these two types of module files are&lt;br /&gt;
stored in different directories. Looking through the application list, there is a module for Mathematica&lt;br /&gt;
version 8.0.4, which also happens to be the default.  On this system, the modules package has only&lt;br /&gt;
just been installed, so only one version of each application has been adapted to the module&lt;br /&gt;
system, and that version is the default.&lt;br /&gt;
&lt;br /&gt;
The module file responsible for setting up the correct environment needed to run Mathematica can&lt;br /&gt;
now be loaded:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load mathematica&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Because there is only one version and it is the default, there is no need to include the version-specific&lt;br /&gt;
extension to load it.   To explicitly load version 8.0.4 (or any other specific and non-default version)&lt;br /&gt;
one would use:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load mathematica/8.0.4&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After loading, the environmental PATH variable includes the path to Mathematica:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
username@service0:~&amp;gt; echo $PATH | tr -s &#039;:&#039; &#039;\n&#039;&lt;br /&gt;
/scratch/username/bin&lt;br /&gt;
/usr/local/bin&lt;br /&gt;
/usr/bin&lt;br /&gt;
/bin&lt;br /&gt;
/usr/bin/X11&lt;br /&gt;
/usr/X11R6/bin&lt;br /&gt;
/usr/games&lt;br /&gt;
/opt/c3/bin&lt;br /&gt;
/share/apps/mathematica/8.0.4/Executables&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This can be verified by rerunning the &amp;quot;which math&amp;quot; command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
username@service0:~&amp;gt; which math&lt;br /&gt;
/share/apps/mathematica/8.0.4/Executables/math&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once the head or login node environment variables are properly set, one can create a SLURM script&lt;br /&gt;
to run a Mathematica job on a compute node and ensure that the login environment just set is&lt;br /&gt;
passed on to the compute nodes by using the &amp;quot;#SBATCH --export=ALL&amp;quot; option inside your SLURM script:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=mmat8_serial1&lt;br /&gt;
#SBATCH --partition=production&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --cpus-per-task=1&lt;br /&gt;
#SBATCH --mem=1920M&lt;br /&gt;
#SBATCH --export=ALL&lt;br /&gt;
&lt;br /&gt;
# Find out name of master execution host (compute node)&lt;br /&gt;
echo -n &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; SLURM Master compute node is: &amp;quot;&lt;br /&gt;
hostname&lt;br /&gt;
&lt;br /&gt;
# Change to the directory the job was submitted from&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
# Just point to the serial executable to run&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; Begin Mathematica Serial Run ...&amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
math -run &amp;lt;test_run.nb &amp;gt; output&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; End   Mathematica Serial Run ...&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Since the PATH variable in the login environment now includes the location of the Mathematica&lt;br /&gt;
executable, and the &amp;quot;#SBATCH --export=ALL&amp;quot; option ensures that this environment is passed to the&lt;br /&gt;
compute node the job runs on, the last line of the SLURM script will execute without environment-related problems.&lt;br /&gt;
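The environment propagation described above can be illustrated locally: a child shell inherits exported&lt;br /&gt;
variables much as the job shell on the compute node inherits the login environment. This is only a loose&lt;br /&gt;
analogy, not the actual batch-system mechanism, and MATH_HOME is a made-up variable:&lt;br /&gt;

```shell
# Loose local analogy for environment propagation (MATH_HOME is hypothetical):
# the child bash process sees the exported variable without redefining it.
export MATH_HOME=/share/apps/mathematica/8.0.4
bash -c 'echo "child sees: $MATH_HOME"'
```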
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
=== Example 2,  Less Basic From SALK (Cray System) ===&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;As on all of the systems at the CUNY HPC Center, the Cray SALK hosts multiple compilers, parallel programming&lt;br /&gt;
models, libraries, and applications.  In addition, SALK uses a custom high-performance interconnect with its&lt;br /&gt;
own libraries and has its own compiler suite and compilation system, along with many other custom libraries.  Setting up&lt;br /&gt;
and/or tearing down a given environment that makes all this work correctly is more complicated than it is on&lt;br /&gt;
the other systems at the HPC Center.  Modules simplifies this process tremendously for the user.&lt;br /&gt;
&lt;br /&gt;
Here is an example of how to swap out the default Cray compiler environment on SALK and swap in the &lt;br /&gt;
compiler suite from the Portland Group including all the right MPI libraries from Cray.  In this case, we swap in&lt;br /&gt;
a new release of the Portland Group compilers, not the current default on the Cray, and the version of the&lt;br /&gt;
NETCDF library that has been compiled with the Portland Group compilers.&lt;br /&gt;
&lt;br /&gt;
Having logged into SALK, we determine which modules have been loaded by default with &amp;quot;module list&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
user@salk:~&amp;gt; module list&lt;br /&gt;
Currently Loaded Modulefiles:&lt;br /&gt;
  1) modules/3.2.6.6&lt;br /&gt;
  2) nodestat/2.2-1.0400.31264.2.5.gem&lt;br /&gt;
  3) sdb/1.0-1.0400.32124.7.19.gem&lt;br /&gt;
  4) MySQL/5.0.64-1.0000.5053.22.1&lt;br /&gt;
  5) lustre-cray_gem_s/1.8.6_2.6.32.45_0.3.2_1.0400.6453.5.1-1.0400.32127.1.90&lt;br /&gt;
  6) udreg/2.3.1-1.0400.4264.3.1.gem&lt;br /&gt;
  7) ugni/2.3-1.0400.4374.4.88.gem&lt;br /&gt;
  8) gni-headers/2.1-1.0400.4351.3.1.gem&lt;br /&gt;
  9) dmapp/3.2.1-1.0400.4255.2.159.gem&lt;br /&gt;
 10) xpmem/0.1-2.0400.31280.3.1.gem&lt;br /&gt;
 11) hss-llm/6.0.0&lt;br /&gt;
 12) Base-opts/1.0.2-1.0400.31284.2.2.gem&lt;br /&gt;
 13) xtpe-network-gemini&lt;br /&gt;
 14) cce/8.0.7&lt;br /&gt;
 15) acml/5.1.0&lt;br /&gt;
 16) xt-libsci/11.1.00&lt;br /&gt;
 17) pmi/3.0.0-1.0000.8661.28.2807.gem&lt;br /&gt;
 18) rca/1.0.0-2.0400.31553.3.58.gem&lt;br /&gt;
 19) xt-asyncpe/5.13&lt;br /&gt;
 20) atp/1.5.1&lt;br /&gt;
 21) PrgEnv-cray/4.0.46&lt;br /&gt;
 22) xtpe-mc8&lt;br /&gt;
 23) cray-mpich2/5.5.3&lt;br /&gt;
 24) SLURM/11.3.0.121723&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From the list, we see that the Cray Programming Environment (&amp;quot;PrgEnv-cray/4.0.46&amp;quot;) and the Cray Compiler&lt;br /&gt;
environment are loaded (&amp;quot;cce/8.0.7&amp;quot;) by default among other things (SLURM, MPICH, etc.).  To unload these&lt;br /&gt;
Cray modules and load in the Portland Group (PGI) equivalents we need to know the names of the PGI &lt;br /&gt;
modules.   The &amp;quot;module avail&amp;quot; command will tell us this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
user@salk:~&amp;gt; module avail&lt;br /&gt;
.&lt;br /&gt;
.&lt;br /&gt;
(several sections of output removed)&lt;br /&gt;
.&lt;br /&gt;
.&lt;br /&gt;
------------------------------------------------ /opt/modulefiles -----------------------------------------------------&lt;br /&gt;
Base-opts/1.0.2-1.0400.31284.2.2.gem(default)     gcc/4.1.2                                         SLURM/11.2.0.113417&lt;br /&gt;
PrgEnv-cray/3.1.61                                gcc/4.2.4                                         SLURM/11.3.0.121723(default)&lt;br /&gt;
PrgEnv-cray/4.0.46(default)                       gcc/4.4.2                                         petsc/3.1.08&lt;br /&gt;
PrgEnv-gnu/3.1.61                                 gcc/4.4.4                                         petsc/3.1.09&lt;br /&gt;
PrgEnv-gnu/4.0.46(default)                        gcc/4.5.1                                         petsc-complex/3.1.08&lt;br /&gt;
PrgEnv-intel/3.1.61                               gcc/4.5.2                                         petsc-complex/3.1.09&lt;br /&gt;
PrgEnv-intel/4.0.46(default)                      gcc/4.5.3                                         pgi/12.10&lt;br /&gt;
PrgEnv-pathscale/3.1.61                           gcc/4.6.1                                         pgi/12.3&lt;br /&gt;
PrgEnv-pathscale/4.0.46(default)                  gcc/4.7.1(default)                                pgi/12.6(default)&lt;br /&gt;
PrgEnv-pgi/3.1.61                                 hss-llm/6.0.0(default)                            pgi/12.8&lt;br /&gt;
PrgEnv-pgi/4.0.46(default)                        intel/12.1.1.256                                  wrf/3.3.0&lt;br /&gt;
acml/4.4.0                                        intel/12.1.4.319(default)                         wrf/3.4.0(default)&lt;br /&gt;
acml/5.1.0(default)                               intel/12.1.5.339                                  xt-asyncpe/5.01&lt;br /&gt;
admin-modules/1.0.2-1.0400.31284.2.2.gem(default) java/jdk1.6.0_24                                  xt-asyncpe/5.05&lt;br /&gt;
amber/12(default)                                 java/jdk1.7.0_03(default)                         xt-asyncpe/5.13(default)&lt;br /&gt;
cce/8.0.7(default)                                mazama/6.0.0(default)                             xt-libsci/11.0.00&lt;br /&gt;
chapel/1.4.0                                      modules/3.2.6.6(default)                          xt-libsci/11.0.04&lt;br /&gt;
chapel/1.5.0(default)                             mrnet/3.0.0(default)                              xt-libsci/11.1.00(default)&lt;br /&gt;
fftw/2.1.5.3                                      pathscale/4.0.12.1(default)                       xt-papi/4.2.0&lt;br /&gt;
fftw/3.2.2.1(default)                             pathscale/4.0.9                                   xt-papi/4.3.0(default)&lt;br /&gt;
fftw/3.3.0.1                                      SLURM/11.1.0.111761&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There are several versions of the PGI compilers and two versions of the PGI Programming Environment for&lt;br /&gt;
the Cray (SALK).   We are interested in loading PGI&#039;s 12.10 release (not the default, which is &amp;quot;pgi/12.6&amp;quot;) and the&lt;br /&gt;
most current release of the PGI programming environment (&amp;quot;PrgEnv-pgi/4.0.46&amp;quot;), which is the default.  The &lt;br /&gt;
PGI programming environment for the Cray provides all the environmental settings required to use the &lt;br /&gt;
PGI compilers on the Cray which includes a number of custom libraries.  &lt;br /&gt;
&lt;br /&gt;
Here is a series of module commands to unload the Cray defaults, load the PGI modules mentioned,&lt;br /&gt;
and load version 4.2.0 of NETCDF compiled with the PGI compilers.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
user@salk:~&amp;gt; module unload PrgEnv-cray&lt;br /&gt;
user@salk:~&amp;gt; module load PrgEnv-pgi&lt;br /&gt;
user@salk:~&amp;gt; module unload pgi&lt;br /&gt;
user@salk:~&amp;gt; module load pgi/12.10&lt;br /&gt;
user@salk:~&amp;gt; &lt;br /&gt;
user@salk:~&amp;gt; module load netcdf/4.2.0&lt;br /&gt;
user@salk:~&amp;gt;&lt;br /&gt;
user@salk;~&amp;gt; cc -V&lt;br /&gt;
/opt/cray/xt-asyncpe/5.13/bin/cc: INFO: Compiling with CRAYPE_COMPILE_TARGET=native.&lt;br /&gt;
&lt;br /&gt;
pgcc 12.10-0 64-bit target on x86-64 Linux &lt;br /&gt;
Copyright 1989-2000, The Portland Group, Inc.  All Rights Reserved.&lt;br /&gt;
Copyright 2000-2012, STMicroelectronics, Inc.  All Rights Reserved.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Several comments about this series of commands are perhaps useful.  First, the first three commands&lt;br /&gt;
do not include version numbers and will therefore load or unload the current default versions.  In&lt;br /&gt;
the third line, we unload the default version of the PGI compiler (version 12.6) which is loaded with&lt;br /&gt;
the rest of the PGI Programming Environment in the second line.  We then load the non-default&lt;br /&gt;
and more recent release from PGI, version 12.10 in the fourth line.   Later, we load NETCDF version&lt;br /&gt;
4.2.0 which, because we have already loaded the PGI Programming Environment, will load the version&lt;br /&gt;
of NETCDF 4.2.0 compiled with the PGI compilers.  Finally, we check to see which compiler the Cray &amp;quot;cc&amp;quot;&lt;br /&gt;
compiler wrapper actually invokes after this sequence of module commands.  We see that indeed &amp;quot;pgcc&amp;quot;&lt;br /&gt;
version 12.10 is being used.&lt;br /&gt;
&lt;br /&gt;
We can confirm all this by again entering &amp;quot;module list&amp;quot;.   Notice that the Cray-related compiler modules&lt;br /&gt;
have been replaced by those from PGI and that NETCDF version 4.2.0 has been loaded.  We are ready&lt;br /&gt;
to use the new PGI-based compiler environment.  It is left as an exercise for the reader to figure out&lt;br /&gt;
how the series of commands listed above could have been shortened by using the &amp;quot;module swap&amp;quot; sub-&lt;br /&gt;
command.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
user@salk:~&amp;gt; module list&lt;br /&gt;
Currently Loaded Modulefiles:&lt;br /&gt;
  1) modules/3.2.6.6&lt;br /&gt;
  2) nodestat/2.2-1.0400.31264.2.5.gem&lt;br /&gt;
  3) sdb/1.0-1.0400.32124.7.19.gem&lt;br /&gt;
  4) MySQL/5.0.64-1.0000.5053.22.1&lt;br /&gt;
  5) lustre-cray_gem_s/1.8.6_2.6.32.45_0.3.2_1.0400.6453.5.1-1.0400.32127.1.90&lt;br /&gt;
  6) udreg/2.3.1-1.0400.4264.3.1.gem&lt;br /&gt;
  7) ugni/2.3-1.0400.4374.4.88.gem&lt;br /&gt;
  8) gni-headers/2.1-1.0400.4351.3.1.gem&lt;br /&gt;
  9) dmapp/3.2.1-1.0400.4255.2.159.gem&lt;br /&gt;
 10) xpmem/0.1-2.0400.31280.3.1.gem&lt;br /&gt;
 11) hss-llm/6.0.0&lt;br /&gt;
 12) Base-opts/1.0.2-1.0400.31284.2.2.gem&lt;br /&gt;
 13) xtpe-network-gemini&lt;br /&gt;
 14) xtpe-mc8&lt;br /&gt;
 15) cray-mpich2/5.5.3&lt;br /&gt;
 16) SLURM/11.3.0.121723&lt;br /&gt;
 17) xt-libsci/11.1.00&lt;br /&gt;
 18) pmi/3.0.0-1.0000.8661.28.2807.gem&lt;br /&gt;
 19) xt-asyncpe/5.13&lt;br /&gt;
 20) atp/1.5.1&lt;br /&gt;
 21) PrgEnv-pgi/4.0.46&lt;br /&gt;
 22) pgi/12.10&lt;br /&gt;
 23) hdf5/1.8.8&lt;br /&gt;
 24) netcdf/4.2.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=CoArray_Fortran_and_Unified_Parallel_C_(PGAS)_Program_Compilation_and_SLURM_Job_Submission&amp;diff=137</id>
		<title>CoArray Fortran and Unified Parallel C (PGAS) Program Compilation and SLURM Job Submission</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=CoArray_Fortran_and_Unified_Parallel_C_(PGAS)_Program_Compilation_and_SLURM_Job_Submission&amp;diff=137"/>
		<updated>2022-10-27T19:54:02Z</updated>

		<summary type="html">&lt;p&gt;James: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== CoArray Fortran and Unified Parallel C (PGAS) Program Compilation and SLURM Job Submission ==&lt;br /&gt;
As part of its plan to offer CUNY HPC Center users a unique variety of HPC parallel programming alternatives&lt;br /&gt;
(beyond even those described above), the HPC Center supports a two-cabinet, 2816-core Cray XE6m system&lt;br /&gt;
called SALK. This system supports two newer, similar, language-integrated, and highly scalable approaches&lt;br /&gt;
to parallel programming, CoArray Fortran (CAF) and Unified Parallel C (UPC).  Both are extensions of their parent&lt;br /&gt;
languages, Fortran and C respectively, and offer a symbolically concise alternative to the &#039;&#039;de facto&#039;&#039; standard,&lt;br /&gt;
message-passing model, MPI.  CAF and UPC are so-called Partitioned Global Address Space (PGAS) parallel &lt;br /&gt;
programming models.  Unlike MPI, CAF and UPC are not based on a subroutine library call API.&lt;br /&gt;
&lt;br /&gt;
Both MPI and the PGAS approach to parallel programming rely on a Single Program Multiple Data (SPMD)&lt;br /&gt;
model.  In the SPMD parallel programming model, identical collaborating programs (with fully separate memory&lt;br /&gt;
spaces, or program images) are executed by different processors that may or may not be separated by a network.&lt;br /&gt;
Each processor-program produces different parts of the result in parallel by working on different data and taking&lt;br /&gt;
conditionally different paths through the same code. The PGAS approach differs from MPI in that it abstracts away&lt;br /&gt;
as much as possible, reducing the way that communication is expressed to minimal built-in extensions to the base&lt;br /&gt;
language, in our case C and Fortran.  In large part, CAF and UPC are free of extension-related, explicit library calls.&lt;br /&gt;
With the underlying communication layer abstracted away, PGAS languages &#039;&#039;appear&#039;&#039; to provide a singular, global&lt;br /&gt;
memory space spanning its processes.&lt;br /&gt;
&lt;br /&gt;
In addition, communication among processes in a PGAS program is &#039;&#039;one-sided&#039;&#039; in the sense that any process&lt;br /&gt;
can read and/or write into the memory of any other process without informing it of its actions.  Such one-sided&lt;br /&gt;
communication has the advantage of being economical, lowering the latency (first byte delay) that is part of the cost&lt;br /&gt;
of communication among different parallel processes.  Lower latency parallel programs are generally more scalable&lt;br /&gt;
because they waste less time in communication, especially when the data to be moved are small in size, in finer-grained&lt;br /&gt;
communication patterns.&lt;br /&gt;
&lt;br /&gt;
Summarizing, PGAS languages such as CAF and UPC offer the following &#039;&#039;potential&#039;&#039; advantages over MPI:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
1. Explicit communication is abstracted out of the PGAS programming model.&lt;br /&gt;
&lt;br /&gt;
2. Process memory is logically unified into a global address space.&lt;br /&gt;
&lt;br /&gt;
3. Parallel work is economically expressed through simple extensions&lt;br /&gt;
    to a base language, rather than through a library-call-based API.&lt;br /&gt;
&lt;br /&gt;
4. Parallel coding is easier and more intuitive.&lt;br /&gt;
&lt;br /&gt;
5. Performance and scalability are better because communication latency is lower.&lt;br /&gt;
&lt;br /&gt;
6. Implementation of fine-grained communication patterns is faster, easier.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The primary drawbacks of PGAS programming models are their much less widespread support than&lt;br /&gt;
MPI on common HPC system architectures such as traditional clusters, and their need for special&lt;br /&gt;
hardware support to achieve best-case performance.  Here at the CUNY HPC Center,&lt;br /&gt;
the Cray XE6m system, SALK, has a custom interconnect (Gemini) that supports both UPC and CAF.  These&lt;br /&gt;
PGAS languages can be run on standard clusters, but performance is typically not as good. The HPC Center&lt;br /&gt;
supports Berkeley UPC and Intel CAF on top of standard cluster interconnects, without the advantage of PGAS&lt;br /&gt;
hardware support.&lt;br /&gt;
&lt;br /&gt;
=== An Example CoArray Fortran (CAF) Code ===&lt;br /&gt;
The following simple example program includes some of the essential features of the CoArray Fortran (CAF) &lt;br /&gt;
programming model, including multiple processor, image-spanning co-array variable declaration; one-sided&lt;br /&gt;
data transfer between CAF&#039;s memory-space-distinct images via simple assignment statements; and the use of&lt;br /&gt;
critical regions and synchronization barriers.  No attempt is made here to tutor the reader in all of the features&lt;br /&gt;
of CAF; rather, the goal is to give the reader a feel for the CAF extensions adopted in the Fortran 2008&lt;br /&gt;
programming language standard, which now includes CoArrays.   This example, which computes PI by numerical&lt;br /&gt;
integration, can be cut and pasted into a file and run on SALK.&lt;br /&gt;
&lt;br /&gt;
A tutorial on the CAF parallel programming model can be found here [http://www2.hpcl.gwu.edu/pgas09/tutorials/caf_tut.pdf],&lt;br /&gt;
a more formal description of the language specifications here [http://caf.rice.edu/documentation/John-Reid-N1824-2010-04-21.pdf],&lt;br /&gt;
and the actual CAF standard document as defined and adopted by the Fortran standard&#039;s committee&lt;br /&gt;
for Fortran 2008 here [http://caf.rice.edu/documentation/Fortran-2008-Draft-2010-04-20.pdf]. &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
! &lt;br /&gt;
!  Computing PI by Numerical Integration in CAF&lt;br /&gt;
!&lt;br /&gt;
&lt;br /&gt;
program int_pi&lt;br /&gt;
!&lt;br /&gt;
implicit none&lt;br /&gt;
!&lt;br /&gt;
integer :: start, end&lt;br /&gt;
integer :: my_image, tot_images&lt;br /&gt;
integer :: i = 0, rem = 0, mseg = 0, nseg = 0&lt;br /&gt;
!&lt;br /&gt;
real :: f, x&lt;br /&gt;
!&lt;br /&gt;
&lt;br /&gt;
! Declare two CAF scalar CoArrays, each with one copy per image&lt;br /&gt;
&lt;br /&gt;
real :: local_pi[*], global_pi[*]&lt;br /&gt;
&lt;br /&gt;
! Define integrand with Fortran statement function, set result&lt;br /&gt;
! accuracy through the number of segments&lt;br /&gt;
&lt;br /&gt;
f(x) = 1.0/(1.0+x*x)&lt;br /&gt;
nseg = 4096&lt;br /&gt;
&lt;br /&gt;
! Find out my image name and the total number of images&lt;br /&gt;
&lt;br /&gt;
my_image   = this_image()&lt;br /&gt;
tot_images = num_images()&lt;br /&gt;
&lt;br /&gt;
! Each image initializes its part of the CoArrays to zero&lt;br /&gt;
&lt;br /&gt;
local_pi  = 0.0&lt;br /&gt;
global_pi = 0.0&lt;br /&gt;
&lt;br /&gt;
! Partition integrand segments across CAF images (processors)&lt;br /&gt;
&lt;br /&gt;
rem = mod(nseg,tot_images)&lt;br /&gt;
&lt;br /&gt;
mseg  = nseg / tot_images&lt;br /&gt;
start = mseg * (my_image - 1)&lt;br /&gt;
end   = (mseg * my_image) - 1&lt;br /&gt;
&lt;br /&gt;
if ( my_image .eq. tot_images ) end = end + rem&lt;br /&gt;
&lt;br /&gt;
! Compute local partial sums on each CAF image (processor)&lt;br /&gt;
&lt;br /&gt;
do i = start,end&lt;br /&gt;
  local_pi = local_pi + f((.5 + i)/(nseg))&lt;br /&gt;
&lt;br /&gt;
! The above is equivalent to the following more explicit code:&lt;br /&gt;
!&lt;br /&gt;
! local_pi[my_image]= local_pi[my_image] + f((.5 + i)/(nseg))&lt;br /&gt;
!&lt;br /&gt;
&lt;br /&gt;
enddo&lt;br /&gt;
&lt;br /&gt;
local_pi = local_pi * 4.0 / nseg&lt;br /&gt;
&lt;br /&gt;
! Add local, partial sums to single global sum on image 1 only. Use&lt;br /&gt;
! critical region to prevent read-before-write race conditions. In such&lt;br /&gt;
! a region, only one image at a time may pass.&lt;br /&gt;
&lt;br /&gt;
critical&lt;br /&gt;
 global_pi[1] = global_pi[1] + local_pi&lt;br /&gt;
end critical&lt;br /&gt;
&lt;br /&gt;
! Ensure all partial sums have been added using CAF &#039;sync all&#039; barrier&lt;br /&gt;
! construct before writing out results&lt;br /&gt;
&lt;br /&gt;
sync all&lt;br /&gt;
&lt;br /&gt;
! Only CAF image 1 prints the global result&lt;br /&gt;
&lt;br /&gt;
if( this_image() == 1) write(*,&amp;quot;(&#039;PI = &#039;, f10.6)&amp;quot;) global_pi&lt;br /&gt;
&lt;br /&gt;
end program&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This sample code computes PI in parallel using a numerical integration scheme. Taking its key CAF-specific features in order, first&lt;br /&gt;
we find the declaration of two simple scalar co-arrays (local_pi and global_pi) using CAF&#039;s square-bracket&lt;br /&gt;
notation for the co-array (e.g. &#039;&#039;sname[*]&#039;&#039;, &#039;&#039;vname(1:100)[*]&#039;&#039;, or &#039;&#039;vname(1:8,1:4)[1:4,*]&#039;&#039;).  The square-bracket notation follows the&lt;br /&gt;
standard Fortran array notation rules, except that the last dimension is always indicated with an asterisk (&#039;&#039;&#039;*&#039;&#039;&#039;) that is expanded to&lt;br /&gt;
ensure that the number of co-array copies is equal to the number of images (processes) the application has launched.  &lt;br /&gt;
&lt;br /&gt;
Next, the example uses the &#039;&#039;this_image()&#039;&#039; and &#039;&#039;num_images()&#039;&#039; intrinsic functions to determine each image&#039;s ID (a number&lt;br /&gt;
from 1 to the number of processors requested) and the total number of images (processes) requested by the job.  These functions&#039;&lt;br /&gt;
return values are stored in ordinary, image-local Fortran integer variables and are used later in the example to partition the work&lt;br /&gt;
among the processors and to define image-specific paths through the code.  After the integral segments are partitioned among the&lt;br /&gt;
CoArray images (using the &#039;&#039;start&#039;&#039; and &#039;&#039;end&#039;&#039; variables), each image computes its piece of the integral in a standard&lt;br /&gt;
Fortran &#039;&#039;do loop&#039;&#039;.  However, the variable &#039;&#039;local_pi&#039;&#039;, as noted above, is a co-array.  Two notations, one implicit and one&lt;br /&gt;
explicit (but commented out), are presented.  The implicit form, with its square-bracket notation dropped, is allowed (and encouraged&lt;br /&gt;
for optimization reasons) when only the image-local part of a co-array is referenced by a given image.  The explicit form makes it&lt;br /&gt;
clear through the square-bracket suffix &#039;&#039;[my_image]&#039;&#039; that each image is working with a local element of the &#039;&#039;local_pi&#039;&#039; co-array.&lt;br /&gt;
When the practice of dropping the square brackets is adopted as a notational convention, all remote co-array references (which are&lt;br /&gt;
more time-consuming operations) are immediately, visually identifiable by their square-bracket suffixes in the code.  Optimal coding&lt;br /&gt;
practice should seek to minimize the use of square-bracketed references where possible.&lt;br /&gt;
&lt;br /&gt;
With the local, partial sums computed by each image and placed in their piece of the &#039;&#039;local_pi[*]&#039;&#039; co-array, a global sum is then&lt;br /&gt;
safely computed and written out only on image 1 with the help of a CAF critical region.  Within a critical region, only one image (process)&lt;br /&gt;
may pass at a time.  This ensures that &#039;&#039;global_pi[1]&#039;&#039; is accurately summed from each &#039;&#039;local_pi[my_image]&#039;&#039;, avoiding the errors&lt;br /&gt;
that could be caused by simultaneous reads of a still partially summed &#039;&#039;global_pi[1]&#039;&#039; before each image-specific increment&lt;br /&gt;
has been written.  Here, the variable &#039;&#039;global_pi[1]&#039;&#039; appears with the square-bracket notation, a reminder that each image (process)&lt;br /&gt;
is writing its result into the memory space of image 1.  This is a remote write for every image except image 1.  &lt;br /&gt;
&lt;br /&gt;
The last section of the code synchronizes (&#039;&#039;sync all&#039;&#039;) the images to ensure all partial sums have been added, and then has&lt;br /&gt;
image 1 write out the global result.  Note that, as written here, only image 1 holds the global result.  For a more detailed treatment of&lt;br /&gt;
the CoArray Fortran language extension, now part of the Fortran 2008 standard, please see the web references included above.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center supports CoArray Fortran on both its Cray XE6 system, SALK, (which has custom hardware and software support&lt;br /&gt;
for the UPC and CAF PGAS languages) and on its other systems where the Intel Cluster Studio provides a beta-level implementation&lt;br /&gt;
of CoArray Fortran layered on top of Intel&#039;s MPI library, an approach that offers CAF&#039;s coding simplicity, but no performance advantage&lt;br /&gt;
over MPI.&lt;br /&gt;
&lt;br /&gt;
Here, the process of compiling a CAF program both for Cray&#039;s CAF on SALK, and for Intel&#039;s CAF on the HPC Center&#039;s other systems is&lt;br /&gt;
described.  On the Cray, compiling a CAF program, such as the example above, simply requires adding an option to the Cray Fortran&lt;br /&gt;
compiler, as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
salk:&lt;br /&gt;
salk: module load PrgEnv-cray&lt;br /&gt;
salk:&lt;br /&gt;
salk: ftn -h caf -o int_PI.exe int_PI.f90&lt;br /&gt;
salk:&lt;br /&gt;
salk: ls&lt;br /&gt;
int_PI.exe&lt;br /&gt;
salk:&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the sequence above, first the Cray programming environment is loaded using the &#039;module&#039; command; then&lt;br /&gt;
the Cray Fortran compiler is invoked with the &#039;&#039;-h caf&#039;&#039; option to include the CAF features of the Fortran compiler.&lt;br /&gt;
The result is a CAF-enabled executable that can be run with Cray&#039;s parallel job initiation command &#039;aprun&#039;.   This&lt;br /&gt;
compilation was done in dynamic mode so that any number of processors (CAF images) can be selected at run time&lt;br /&gt;
using the &#039;&#039;-n ##&#039;&#039; option to Cray&#039;s &#039;aprun&#039; command. The required form of the &#039;aprun&#039; command is shown below&lt;br /&gt;
in the section on CAF program job submission using SLURM on the Cray.  &lt;br /&gt;
&lt;br /&gt;
To compile for a fixed number of processors (a static compile) or CAF images use the &#039;&#039;-X ##&#039;&#039; option on the Cray,&lt;br /&gt;
as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
salk:&lt;br /&gt;
salk: ftn -X 32 -h caf -o int_PI_32.exe int_PI.f90&lt;br /&gt;
salk:&lt;br /&gt;
salk: ls&lt;br /&gt;
int_PI_32.exe&lt;br /&gt;
salk:&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this example, the PI example program has been compiled for 32 processors or CAF images,&lt;br /&gt;
and therefore &#039;&#039;must&#039;&#039; be invoked with that many processors on the &#039;aprun&#039; command line:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
aprun -n 32 -N 16 ./int_PI_32.exe&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On the HPC Center&#039;s other systems, compilation is conceptually similar, but uses the Intel Fortran&lt;br /&gt;
compiler &#039;ifort&#039; and requires a CAF configuration file to be defined by the user.  Here is a typical configuration&lt;br /&gt;
file to compile statically for 16 CAF images followed by the compilation command.  This compilation&lt;br /&gt;
requests a distributed mode compilation in which distinct CAF images are not expected to be on the&lt;br /&gt;
same physical node.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
andy$cat cafconf.txt&lt;br /&gt;
-rr -envall -n 16 ./int_PI.exe&lt;br /&gt;
andy$&lt;br /&gt;
andy$ifort -o int_PI.exe -coarray=distributed -coarray-config-file=cafconf.txt int_PI.f90&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Intel CAF compiler is relatively new and has had limited testing on CUNY HPC systems. It also makes&lt;br /&gt;
use of Intel&#039;s MPI rather than the CUNY HPC Center default, OpenMPI, which means that Intel CAF jobs will&lt;br /&gt;
not be properly accounted for.  As such, we recommend that the Intel CAF compiler be used for development and&lt;br /&gt;
testing only, and that production CAF codes be run on SALK using Cray&#039;s CAF compiler.  An upgrade is planned for&lt;br /&gt;
the Intel Compiler Suite in the near future, and this should improve the performance and functionality of &lt;br /&gt;
Intel&#039;s CAF compiler release.   Additional documentation on using Intel CoArray Fortran is available here.&lt;br /&gt;
&lt;br /&gt;
=== Submitting CoArray Fortran Parallel Programs Using SLURM ===&lt;br /&gt;
Finally, two SLURM scripts that will run the above CAF executable.  First, one for the Cray XE6 system, SALK:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name CAF_example&lt;br /&gt;
#SBATCH --ntasks 64&lt;br /&gt;
#SBATCH --mem-per-cpu 2000M&lt;br /&gt;
#SBATCH --output int_PI.out&lt;br /&gt;
#SBATCH --error int_PI.err&lt;br /&gt;
&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
aprun -n 64 -N 16 ./int_PI.exe&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Above, the dynamically compiled executable is run on 64 SALK, Cray XE6 cores (-n 64) with 16 cores&lt;br /&gt;
packed to a physical node (-N 16).  More detail is presented below on SLURM job submission to the Cray&lt;br /&gt;
and on the use of the Cray&#039;s &#039;aprun&#039; command.  On the Cray, &#039;man aprun&#039; provides an important and&lt;br /&gt;
detailed account of the &#039;aprun&#039; command-line options and their function. One cannot fully understand&lt;br /&gt;
job control and submission on the Cray (SALK) without understanding the &#039;aprun&#039; command.&lt;br /&gt;
&lt;br /&gt;
A SLURM script for the example code compiled dynamically (or statically) for 16 processors with the Intel&lt;br /&gt;
compiler (ifort) for execution on one of the HPC Center&#039;s more traditional HPC clusters looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name CAF_example&lt;br /&gt;
#SBATCH --ntasks 16&lt;br /&gt;
#SBATCH --mem-per-cpu 1920M&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
echo -n &amp;quot;The primary compute node hostname is: &amp;quot;&lt;br /&gt;
hostname&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
# Expand the compressed SLURM host list into a one-host-per-line nodefile&lt;br /&gt;
NODEFILE=$(mktemp)&lt;br /&gt;
scontrol show hostnames $SLURM_JOB_NODELIST &amp;gt; $NODEFILE&lt;br /&gt;
echo &amp;quot;The contents of the nodefile are: &amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
cat $NODEFILE&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
NCNT=$(wc -l &amp;lt; $NODEFILE)&lt;br /&gt;
echo -n &amp;quot;The node count determined from the nodefile is: &amp;quot;&lt;br /&gt;
echo $NCNT&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Change to the submission directory&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;You are using the following &#039;mpiexec&#039; and &#039;mpdboot&#039; commands: &amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
type mpiexec&lt;br /&gt;
type mpdboot&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Starting the Intel &#039;mpdboot&#039; daemon on $NCNT nodes ... &amp;quot;&lt;br /&gt;
mpdboot -n $NCNT --verbose --file=$NODEFILE -r ssh&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
mpdtrace&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Starting an Intel CAF job requesting 16 cores ... &amp;quot;&lt;br /&gt;
&lt;br /&gt;
./int_PI.exe&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;CAF job finished ... &amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Making sure all mpd daemons are killed ... &amp;quot;&lt;br /&gt;
mpdallexit&lt;br /&gt;
echo &amp;quot;SLURM CAF script finished ... &amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here, the SLURM script requests 16 processors (CAF images).  The executable is simply invoked by name; the Intel CAF&lt;br /&gt;
runtime environment then engages the 16 processors and initiates execution.  This script is more elaborate because&lt;br /&gt;
it includes the procedure for setting up and tearing down the Intel MPI environment on the nodes that SLURM has&lt;br /&gt;
selected to run the job.&lt;br /&gt;
&lt;br /&gt;
=== An Example Unified Parallel C (UPC) Code ===&lt;br /&gt;
The following simple example program includes the essential features of the Unified Parallel C (UPC) &lt;br /&gt;
programming model, including shared (globally distributed) variable declaration and blocking, one-&lt;br /&gt;
sided data transfer between UPC&#039;s memory-space distinct threads via simple assignment statements, and&lt;br /&gt;
synchronization barriers.  No attempt is made here to tutor the reader in all of the features of the UPC;&lt;br /&gt;
rather the goal is to give the reader a feel for basic UPC extensions to the C programming language. &lt;br /&gt;
A tutorial on the UPC programming model can be found here [http://upc.gwu.edu/tutorials/UPC-SC05.pdf],&lt;br /&gt;
a user guide here [http://upc.gwu.edu/downloads/Manual-1.2.pdf], and a more formal description of&lt;br /&gt;
the language specifications here [http://upc.lbl.gov/docs/user/upc_spec_1.2.pdf].  Cray also has its own &lt;br /&gt;
documentation on UPC [http://docs.cray.com/books/S-2179-50/html-S-2179-50/z1035483822pvl.html]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
// &lt;br /&gt;
//  Computing PI by Numerical Integration in UPC&lt;br /&gt;
//&lt;br /&gt;
&lt;br /&gt;
// Select memory consistency model (default).&lt;br /&gt;
&lt;br /&gt;
#include&amp;lt;upc_relaxed.h&amp;gt; &lt;br /&gt;
&lt;br /&gt;
#include&amp;lt;math.h&amp;gt;&lt;br /&gt;
#include&amp;lt;stdio.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
// Define integrand with a macro and set result accuracy&lt;br /&gt;
&lt;br /&gt;
#define f(x) (1.0/(1.0+x*x))&lt;br /&gt;
#define N 4096&lt;br /&gt;
&lt;br /&gt;
// Declare UPC shared scalar, shared vector array, and UPC lock variable.&lt;br /&gt;
&lt;br /&gt;
shared float global_pi = 0.0;&lt;br /&gt;
shared [1] float local_pi[THREADS];&lt;br /&gt;
upc_lock_t *lock;&lt;br /&gt;
&lt;br /&gt;
int main(void)&lt;br /&gt;
{&lt;br /&gt;
   int i;&lt;br /&gt;
&lt;br /&gt;
   // Allocate a single, globally-shared UPC lock. This &lt;br /&gt;
   // function is collective; the initial state is unlocked.&lt;br /&gt;
&lt;br /&gt;
   lock = upc_all_lock_alloc();&lt;br /&gt;
&lt;br /&gt;
   // Each UPC thread initializes its local piece of the&lt;br /&gt;
   // shared array.&lt;br /&gt;
&lt;br /&gt;
   local_pi[MYTHREAD] = 0.0;&lt;br /&gt;
&lt;br /&gt;
   // Distribute work across threads using local part of shared&lt;br /&gt;
   // array &#039;local_pi&#039; to compute PI partial sum on thread (processor)&lt;br /&gt;
&lt;br /&gt;
   for(i = 0; i &amp;lt;  N; i++) {&lt;br /&gt;
       if(MYTHREAD == i%THREADS) local_pi[MYTHREAD] += (float) f((.5 + i)/(N));&lt;br /&gt;
   } &lt;br /&gt;
&lt;br /&gt;
   local_pi[MYTHREAD] *= (float) (4.0 / N);&lt;br /&gt;
&lt;br /&gt;
   // Compile local, partial sums to single global sum.&lt;br /&gt;
   // Use locks to prevent read-before-write race conditions.&lt;br /&gt;
&lt;br /&gt;
   upc_lock(lock);&lt;br /&gt;
   global_pi += local_pi[MYTHREAD];&lt;br /&gt;
   upc_unlock(lock);&lt;br /&gt;
&lt;br /&gt;
   // Ensure all partial sums have been added with UPC barrier.&lt;br /&gt;
&lt;br /&gt;
   upc_barrier;&lt;br /&gt;
&lt;br /&gt;
   // UPC thread 0 prints the results and frees the lock.&lt;br /&gt;
&lt;br /&gt;
   if(MYTHREAD==0) printf(&amp;quot;PI = %f\n&amp;quot;,global_pi);&lt;br /&gt;
   if(MYTHREAD==0) upc_lock_free(lock);&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This sample code computes PI in parallel using a numerical integration scheme.&lt;br /&gt;
Taking the key UPC-specific features present in this example in order, first we find the&lt;br /&gt;
declaration of the memory consistency model to be used in this code.  The default choice&lt;br /&gt;
is &#039;&#039;relaxed&#039;&#039;, which is selected explicitly here.  The relaxed choice places on the programmer the burden of &lt;br /&gt;
ensuring that dependent shared-memory operations in the code are properly&lt;br /&gt;
ordered, through the use of barriers, fences, and locks.  This code&lt;br /&gt;
includes explicit locks and barriers to ensure memory operations are complete and &lt;br /&gt;
the processors have been synchronized.&lt;br /&gt;
&lt;br /&gt;
Next, three declarations outside the main body of the application demonstrate the&lt;br /&gt;
use of UPC&#039;s &#039;&#039;shared&#039;&#039; type.  First, a scalar shared variable &#039;&#039;global_pi&#039;&#039; is declared.&lt;br /&gt;
This variable can be read from and written to by any of the UPC threads (processors)&lt;br /&gt;
allocated by the runtime environment to the application when it is executed. It will hold&lt;br /&gt;
the final result of the calculation of PI in this example.  Shared scalar variables are&lt;br /&gt;
singular and always reside in the shared memory of THREAD 0 in UPC.&lt;br /&gt;
&lt;br /&gt;
Next, a shared one dimensional array &#039;&#039;local_pi&#039;&#039; with a block size of one (1) and a size&lt;br /&gt;
of THREADS is declared. The THREADS macro is always set to the number of processors&lt;br /&gt;
(UPC threads) requested by the job at runtime. All elements in this shared array are accessible&lt;br /&gt;
by all THREADS allocated to the job. The block size of one means that array elements are&lt;br /&gt;
distributed, one-per-thread, across the logically Partitioned Global Address Space (PGAS)&lt;br /&gt;
of this parallel application. One is the default block size for shared arrays, but other&lt;br /&gt;
sizes are possible.&lt;br /&gt;
&lt;br /&gt;
Finally, a pointer to a special shared scalar variable to be used as a lock is declared.&lt;br /&gt;
Because UPC defines both shared and private memory spaces for each program image&lt;br /&gt;
or THREAD, it must support four classes of pointers:  private pointers to private, &lt;br /&gt;
private pointers to shared, shared pointers to private, and shared pointers to shared.&lt;br /&gt;
The pointer declared here is a shared pointer to shared which makes the lock&#039;s memory&lt;br /&gt;
location available to all threads.  In the body of the code, the lock&#039;s memory is allocated&lt;br /&gt;
and placed in the unlocked state with the call to &#039;&#039;upc_all_lock_alloc()&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Next, each thread initializes its piece of the shared array &#039;&#039;local_pi&#039;&#039; to zero with the&lt;br /&gt;
help of the MYTHREAD macro, which contains the thread identifier of the particular&lt;br /&gt;
thread that does the assignment.  In this case, each UPC thread initializes only the &lt;br /&gt;
part of the shared array that is in its portion of shared PGAS memory.  The standard C&lt;br /&gt;
for-loop that follows divides the work of integration among the different UPC threads&lt;br /&gt;
so that each thread works only on its local portion of the shared array &#039;&#039;local_pi&#039;&#039;.  UPC&lt;br /&gt;
provides a work-sharing loop construct &#039;&#039;upc_forall&#039;&#039; that accomplishes the same&lt;br /&gt;
thing implicitly. &lt;br /&gt;
&lt;br /&gt;
Processor-local (UPC thread) partial sums are then summed globally and in a memory&lt;br /&gt;
consistent fashion with the help of the UPC lock function &#039;&#039;upc_lock()&#039;&#039; and &#039;&#039;upc_unlock()&#039;&#039;.&lt;br /&gt;
Without the explicit locking code here, there would be nothing to prevent two UPC&lt;br /&gt;
threads from reading the most current value in memory before it had been updated &lt;br /&gt;
with a latest partial sum.  This would produce an incorrect under-summing of the&lt;br /&gt;
result.  Next, a &#039;&#039;upc_barrier&#039;&#039; ensures all the summing is completed before the result&lt;br /&gt;
is printed and the lock&#039;s memory is freed.  &lt;br /&gt;
&lt;br /&gt;
This example includes some of the more important UPC PGAS-parallel extensions to&lt;br /&gt;
the C programming language, but a complete review of the UPC parallel extension to&lt;br /&gt;
C is provided in the web documentation referenced above.&lt;br /&gt;
&lt;br /&gt;
As suggested above, the CUNY HPC Center supports UPC on both its Cray XE6 system,&lt;br /&gt;
SALK, (which has custom hardware and software support for the UPC and CAF PGAS&lt;br /&gt;
languages) and on its other systems where Berkeley UPC is installed and uses the&lt;br /&gt;
GASNET library to support the PGAS memory abstraction on top of a number of standard&lt;br /&gt;
underlying cluster interconnects.  At the HPC Center this would include Ethernet and/or&lt;br /&gt;
InfiniBand depending on the CUNY HPC Center cluster system being used.&lt;br /&gt;
&lt;br /&gt;
Here, the process of compiling a UPC program both for Cray&#039;s UPC on SALK, and for&lt;br /&gt;
Berkeley UPC on the HPC Center&#039;s other systems is described.  On the Cray, compiling a&lt;br /&gt;
UPC program, such as the example above, simply requires adding an option to the Cray&lt;br /&gt;
C compiler, as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
salk:&lt;br /&gt;
salk: module load PrgEnv-cray&lt;br /&gt;
salk:&lt;br /&gt;
salk: cc -h upc -o int_PI.exe int_PI.c&lt;br /&gt;
salk:&lt;br /&gt;
salk: ls&lt;br /&gt;
int_PI.exe&lt;br /&gt;
salk:&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
First, the Cray programming environment is loaded using the &#039;module&#039; command; then&lt;br /&gt;
the Cray compiler is invoked with the &#039;&#039;-h upc&#039;&#039; option to include the UPC elements of the&lt;br /&gt;
compiler.  The result is an executable that can be run with Cray&#039;s parallel job initiation &lt;br /&gt;
command &#039;aprun&#039;.   This compilation was done in dynamic mode so that any number of&lt;br /&gt;
processors (UPC threads) can be selected at run time using the &#039;&#039;-n ##&#039;&#039; option to &#039;aprun&#039;. &lt;br /&gt;
The required form of the &#039;aprun&#039; line is shown below in the section on UPC program SLURM&lt;br /&gt;
job submission.  &lt;br /&gt;
&lt;br /&gt;
To compile for a fixed number of processors (a static compile) or UPC threads use&lt;br /&gt;
the &#039;&#039;-X ##&#039;&#039; option on the Cray, as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
salk:&lt;br /&gt;
salk: cc -X 32 -h upc -o int_PI_32.exe int_PI.c&lt;br /&gt;
salk:&lt;br /&gt;
salk: ls&lt;br /&gt;
int_PI_32.exe&lt;br /&gt;
salk:&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this example, the PI example program has been compiled for 32 processors or UPC threads,&lt;br /&gt;
and therefore must be invoked with that many processors on the &#039;aprun&#039; command line:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
aprun -n 32 -N 16 ./int_PI_32.exe&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On the HPC Center&#039;s other systems, compilation is conceptually similar, but uses the Berkeley&lt;br /&gt;
UPC compiler driver &#039;upcc&#039;.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
andy:&lt;br /&gt;
andy: upcc  -o int_PI.exe int_PI.c&lt;br /&gt;
andy:&lt;br /&gt;
andy: ls&lt;br /&gt;
int_PI.exe&lt;br /&gt;
andy:&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Similarly, the &#039;upcc&#039; compiler driver from Berkeley allows for static compilations using&lt;br /&gt;
its &#039;&#039;-T ##&#039;&#039; option:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
andy:&lt;br /&gt;
andy: upcc -T 32  -o int_PI_32.exe int_PI.c&lt;br /&gt;
andy:&lt;br /&gt;
andy: ls&lt;br /&gt;
int_PI_32.exe&lt;br /&gt;
andy:&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Berkeley UPC compiler driver has a number of other useful options that are described&lt;br /&gt;
in its &#039;man&#039; page.  In particular, the &#039;&#039;-network=&#039;&#039; option will target the executable for the&lt;br /&gt;
GASNET communication &#039;&#039;conduit&#039;&#039; of the user&#039;s choosing on systems that have multiple&lt;br /&gt;
interconnects (Ethernet and InfiniBand, for instance) or target the default version of MPI&lt;br /&gt;
as the communication layer.  Type &#039;man upcc&#039; for details. &lt;br /&gt;
&lt;br /&gt;
In general, users can expect better performance from Cray&#039;s UPC compiler on SALK, but&lt;br /&gt;
having UPC on the HPC Center&#039;s traditional cluster architectures provides another location&lt;br /&gt;
for development and supports the wider use of UPC and an alternative to MPI.  In theory,&lt;br /&gt;
well-written UPC code should perform as well as MPI on a standard cluster, while reducing&lt;br /&gt;
the number of lines of code to achieve that performance.  In practice, this is still not always&lt;br /&gt;
the case; more development and hardware support is still needed to get the best performance&lt;br /&gt;
from PGAS languages on commodity cluster environments.&lt;br /&gt;
&lt;br /&gt;
=== Submitting UPC Parallel Programs Using SLURM ===&lt;br /&gt;
&lt;br /&gt;
Finally, two SLURM scripts that will run the above UPC executable.  First, one for the Cray XE6&lt;br /&gt;
system, SALK:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name UPC_example&lt;br /&gt;
#SBATCH --ntasks 64&lt;br /&gt;
#SBATCH --mem-per-cpu 2000M&lt;br /&gt;
#SBATCH --output int_PI.out&lt;br /&gt;
#SBATCH --error int_PI.err&lt;br /&gt;
&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
aprun -n 64 -N 16 ./int_PI.exe&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here the dynamically compiled executable is run on 64 Cray XE6 cores (-n 64), 16 cores&lt;br /&gt;
packed to a physical node (-N 16).  More detail is presented below on SLURM job submission on&lt;br /&gt;
the Cray and the use of the Cray&#039;s &#039;aprun&#039; command.  On the Cray, &#039;man aprun&#039; provides&lt;br /&gt;
an important and detailed account of the &#039;aprun&#039; command-line options and their function.&lt;br /&gt;
One cannot fully understand job control on the Cray (SALK) without understanding &#039;aprun&#039;.&lt;br /&gt;
&lt;br /&gt;
A similar SLURM script for the example code compiled dynamically (or statically) for 32 processors&lt;br /&gt;
with the Berkeley UPC compiler (upcc) for execution on one of the HPC Center&#039;s more traditional&lt;br /&gt;
HPC cluster looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name UPC_example&lt;br /&gt;
#SBATCH --ntasks 32&lt;br /&gt;
#SBATCH --mem-per-cpu 1920M&lt;br /&gt;
#SBATCH --output int_PI.out&lt;br /&gt;
#SBATCH --error int_PI.err&lt;br /&gt;
&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
upcrun -n 32 ./int_PI.exe&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here, the SLURM script requests 32 processors (UPC threads).  It uses the &#039;upcrun&#039; command to&lt;br /&gt;
set up the Berkeley UPC runtime environment, engage the 32 processors, and initiate execution.&lt;br /&gt;
Please type &#039;man upcrun&#039; for details on the &#039;upcrun&#039; command and its options.&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=BEST&amp;diff=136</id>
		<title>BEST</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=BEST&amp;diff=136"/>
		<updated>2022-10-27T19:49:20Z</updated>

		<summary type="html">&lt;p&gt;James: Text replacement - &amp;quot;[pP][bB][sS]&amp;quot; to &amp;quot;SLURM&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Currently, BEST is available on ANDY at the CUNY HPC Center.&lt;br /&gt;
&lt;br /&gt;
To run BEST, first a NEXUS-formatted, DNA sequence comparison input file (e.g. a &#039;.nex&#039; file)&lt;br /&gt;
must be created using MrBayes.  See the section on MrBayes below for this.  Here is an&lt;br /&gt;
example NEXUS input file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#NEXUS&lt;br /&gt;
&lt;br /&gt;
begin data;&lt;br /&gt;
   dimensions ntax=17 nchar=432;&lt;br /&gt;
   format datatype=dna missing=?;&lt;br /&gt;
   matrix&lt;br /&gt;
   human       ctgactcctgaggagaagtctgccgttactgccctgtggggcaaggtgaacgtggatgaagttggtggtgaggccctgggcaggctgctggtggtctacccttggacccagaggttctttgagtcctttggggatctgtccactcctgatgctgttatgggcaaccctaaggtgaaggctcatggcaagaaagtgctcggtgcctttagtgatggcctggctcacctggacaacctcaagggcacctttgccacactgagtgagctgcactgtgacaagctgcacgtggatcctgagaacttcaggctcctgggcaacgtgctggtctgtgtgctggcccatcactttggcaaagaattcaccccaccagtgcaggctgcctatcagaaagtggtggctggtgtggctaatgccctggcccacaagtatcac&lt;br /&gt;
   tarsier     ctgactgctgaagagaaggccgccgtcactgccctgtggggcaaggtagacgtggaagatgttggtggtgaggccctgggcaggctgctggtcgtctacccatggacccagaggttctttgactcctttggggacctgtccactcctgccgctgttatgagcaatgctaaggtcaaggcccatggcaaaaaggtgctgaacgcctttagtgacggcatggctcatctggacaacctcaagggcacctttgctaagctgagtgagctgcactgtgacaaattgcacgtggatcctgagaatttcaggctcttgggcaatgtgctggtgtgtgtgctggcccaccactttggcaaagaattcaccccgcaggttcaggctgcctatcagaaggtggtggctggtgtggctactgccttggctcacaagtaccac&lt;br /&gt;
   bushbaby    ctgactcctgatgagaagaatgccgtttgtgccctgtggggcaaggtgaatgtggaagaagttggtggtgaggccctgggcaggctgctggttgtctacccatggacccagaggttctttgactcctttggggacctgtcctctccttctgctgttatgggcaaccctaaagtgaaggcccacggcaagaaggtgctgagtgcctttagcgagggcctgaatcacctggacaacctcaagggcacctttgctaagctgagtgagctgcattgtgacaagctgcacgtggaccctgagaacttcaggctcctgggcaacgtgctggtggttgtcctggctcaccactttggcaaggatttcaccccacaggtgcaggctgcctatcagaaggtggtggctggtgtggctactgccctggctcacaaataccac&lt;br /&gt;
   hare        ctgtccggtgaggagaagtctgcggtcactgccctgtggggcaaggtgaatgtggaagaagttggtggtgagaccctgggcaggctgctggttgtctacccatggacccagaggttcttcgagtcctttggggacctgtccactgcttctgctgttatgggcaaccctaaggtgaaggctcatggcaagaaggtgctggctgccttcagtgagggtctgagtcacctggacaacctcaaaggcaccttcgctaagctgagtgaactgcattgtgacaagctgcacgtggatcctgagaacttcaggctcctgggcaacgtgctggttattgtgctgtctcatcactttggcaaagaattcactcctcaggtgcaggctgcctatcagaaggtggtggctggtgtggccaatgccctggctcacaaataccac&lt;br /&gt;
   rabbit      ctgtccagtgaggagaagtctgcggtcactgccctgtggggcaaggtgaatgtggaagaagttggtggtgaggccctgggcaggctgctggttgtctacccatggacccagaggttcttcgagtcctttggggacctgtcctctgcaaatgctgttatgaacaatcctaaggtgaaggctcatggcaagaaggtgctggctgccttcagtgagggtctgagtcacctggacaacctcaaaggcacctttgctaagctgagtgaactgcactgtgacaagctgcacgtggatcctgagaacttcaggctcctgggcaacgtgctggttattgtgctgtctcatcattttggcaaagaattcactcctcaggtgcaggctgcctatcagaaggtggtggctggtgtggccaatgccctggctcacaaataccac&lt;br /&gt;
   cow         ctgactgctgaggagaaggctgccgtcaccgccttttggggcaaggtgaaagtggatgaagttggtggtgaggccctgggcaggctgctggttgtctacccctggactcagaggttctttgagtcctttggggacttgtccactgctgatgctgttatgaacaaccctaaggtgaaggcccatggcaagaaggtgctagattcctttagtaatggcatgaagcatctcgatgacctcaagggcacctttgctgcgctgagtgagctgcactgtgataagctgcatgtggatcctgagaacttcaagctcctgggcaacgtgctagtggttgtgctggctcgcaattttggcaaggaattcaccccggtgctgcaggctgactttcagaaggtggtggctggtgtggccaatgccctggcccacagatatcat&lt;br /&gt;
   sheep       ctgactgctgaggagaaggctgccgtcaccggcttctggggcaaggtgaaagtggatgaagttggtgctgaggccctgggcaggctgctggttgtctacccctggactcagaggttctttgagcactttggggacttgtccaatgctgatgctgttatgaacaaccctaaggtgaaggcccatggcaagaaggtgctagactcctttagtaacggcatgaagcatctcgatgacctcaagggcacctttgctcagctgagtgagctgcactgtgataagctgcacgtggatcctgagaacttcaggctcctgggcaacgtgctggtggttgtgctggctcgccaccatggcaatgaattcaccccggtgctgcaggctgactttcagaaggtggtggctggtgttgccaatgccctggcccacaaatatcac&lt;br /&gt;
   pig         ctgtctgctgaggagaaggaggccgtcctcggcctgtggggcaaagtgaatgtggacgaagttggtggtgaggccctgggcaggctgctggttgtctacccctggactcagaggttcttcgagtcctttggggacctgtccaatgccgatgccgtcatgggcaatcccaaggtgaaggcccacggcaagaaggtgctccagtccttcagtgacggcctgaaacatctcgacaacctcaagggcacctttgctaagctgagcgagctgcactgtgaccagctgcacgtggatcctgagaacttcaggctcctgggcaacgtgatagtggttgttctggctcgccgccttggccatgacttcaacccgaatgtgcaggctgcttttcagaaggtggtggctggtgttgctaatgccctggcccacaagtaccac&lt;br /&gt;
   elephseal   ttgacggcggaggagaagtctgccgtcacctccctgtggggcaaagtgaaggtggatgaagttggtggtgaagccctgggcaggctgctggttgtctacccctggactcagaggttctttgactcctttggggacctgtcctctcctaatgctattatgagcaaccccaaggtcaaggcccatggcaagaaggtgctgaattcctttagtgatggcctgaagaatctggacaacctcaagggcacctttgctaagctcagtgagctgcactgtgaccagctgcatgtggatcccgagaacttcaagctcctgggcaatgtgctggtgtgtgtgctggcccgccactttggcaaggaattcaccccacagatgcagggtgcctttcagaaggtggtagctggtgtggccaatgccctcgcccacaaatatcac&lt;br /&gt;
   rat         ctaactgatgctgagaaggctgctgttaatgccctgtggggaaaggtgaaccctgatgatgttggtggcgaggccctgggcaggctgctggttgtctacccttggacccagaggtactttgatagctttggggacctgtcctctgcctctgctatcatgggtaaccctaaggtgaaggcccatggcaagaaggtgataaacgccttcaatgatggcctgaaacacttggacaacctcaagggcacctttgctcatctgagtgaactccactgtgacaagctgcatgtggatcctgagaacttcaggctcctgggcaatatgattgtgattgtgttgggccaccacctgggcaaggaattcaccccctgtgcacaggctgccttccagaaggtggtggctggagtggccagtgccctggctcacaagtaccac&lt;br /&gt;
   mouse       ctgactgatgctgagaagtctgctgtctcttgcctgtgggcaaaggtgaaccccgatgaagttggtggtgaggccctgggcaggctgctggttgtctacccttggacccagcggtactttgatagctttggagacctatcctctgcctctgctatcatgggtaatcccaaggtgaaggcccatggcaaaaaggtgataactgcctttaacgagggcctgaaaaacctggacaacctcaagggcacctttgccagcctcagtgagctccactgtgacaagctgcatgtggatcctgagaacttcaggctcctaggcaatgcgatcgtgattgtgctgggccaccacctgggcaaggatttcacccctgctgcacaggctgccttccagaaggtggtggctggagtggccactgccctggctcacaagtaccac&lt;br /&gt;
   hamster     ctgactgatgctgagaaggcccttgtcactggcctgtggggaaaggtgaacgccgatgcagttggcgctgaggccctgggcaggttgctggttgtctacccttggacccagaggttctttgaacactttggagacctgtctctgccagttgctgtcatgaataacccccaggtgaaggcccatggcaagaaggtgatccactccttcgctgatggcctgaaacacctggacaacctgaagggcgccttttccagcctgagtgagctccactgtgacaagctgcacgtggatcctgagaacttcaagctcctgggcaatatgatcatcattgtgctgatccacgacctgggcaaggacttcactcccagtgcacagtctgcctttcataaggtggtggctggtgtggccaatgccctggctcacaagtaccac&lt;br /&gt;
   marsupial   ttgacttctgaggagaagaactgcatcactaccatctggtctaaggtgcaggttgaccagactggtggtgaggcccttggcaggatgctcgttgtctacccctggaccaccaggttttttgggagctttggtgatctgtcctctcctggcgctgtcatgtcaaattctaaggttcaagcccatggtgctaaggtgttgacctccttcggtgaagcagtcaagcatttggacaacctgaagggtacttatgccaagttgagtgagctccactgtgacaagctgcatgtggaccctgagaacttcaagatgctggggaatatcattgtgatctgcctggctgagcactttggcaaggattttactcctgaatgtcaggttgcttggcagaagctcgtggctggagttgcccatgccctggcccacaagtaccac&lt;br /&gt;
   duck        tggacagccgaggagaagcagctcatcaccggcctctggggcaaggtcaatgtggccgactgtggagctgaggccctggccaggctgctgatcgtctacccctggacccagaggttcttcgcctccttcgggaacctgtccagccccactgccatccttggcaaccccatggtccgtgcccatggcaagaaagtgctcacctccttcggagatgctgtgaagaacctggacaacatcaagaacaccttcgcccagctgtccgagctgcactgcgacaagctgcacgtggaccctgagaacttcaggctcctgggtgacatcctcatcatcgtcctggccgcccacttcaccaaggatttcactcctgactgccaggccgcctggcagaagctggtccgcgtggtggcccacgctctggcccgcaagtaccac&lt;br /&gt;
   chicken     tggactgctgaggagaagcagctcatcaccggcctctggggcaaggtcaatgtggccgaatgtggggccgaagccctggccaggctgctgatcgtctacccctggacccagaggttctttgcgtcctttgggaacctctccagccccactgccatccttggcaaccccatggtccgcgcccacggcaagaaagtgctcacctcctttggggatgctgtgaagaacctggacaacatcaagaacaccttctcccaactgtccgaactgcattgtgacaagctgcatgtggaccccgagaacttcaggctcctgggtgacatcctcatcattgtcctggccgcccacttcagcaaggacttcactcctgaatgccaggctgcctggcagaagctggtccgcgtggtggcccatgccctggctcgcaagtaccac&lt;br /&gt;
   xenlaev     tggacagctgaagagaaggccgccatcacttctgtatggcagaaggtcaatgtagaacatgatggccatgatgccctgggcaggctgctgattgtgtacccctggacccagagatacttcagtaactttggaaacctctccaattcagctgctgttgctggaaatgccaaggttcaagcccatggcaagaaggttctttcagctgttggcaatgccattagccatattgacagtgtgaagtcctctctccaacaactcagtaagatccatgccactgaactgtttgtggaccctgagaactttaagcgttttggtggagttctggtcattgtcttgggtgccaaactgggaactgccttcactcctaaagttcaggctgcttgggagaaattcattgcagttttggttgatggtcttagccagggctataac&lt;br /&gt;
   xentrop     tggacagctgaagaaaaagcaaccattgcttctgtgtgggggaaagtcgacattgaacaggatggccatgatgcattatccaggctgctggttgtttatccctggactcagaggtacttcagcagttttggaaacctctccaatgtctccgctgtctctggaaatgtcaaggttaaagcccatggaaataaagtcctgtcagctgttggcagtgcaatccagcatctggatgatgtgaagagccaccttaaaggtcttagcaagagccatgctgaggatcttcatgtggatcccgaaaacttcaagcgccttgcggatgttctggtgatcgttctggctgccaaacttggatctgccttcactccccaagtccaagctgtctgggagaagctcaatgcaactctggtggctgctcttagccatggctacttc&lt;br /&gt;
   ;&lt;br /&gt;
end;&lt;br /&gt;
&lt;br /&gt;
begin mrbayes;&lt;br /&gt;
   charset non_coding = 1-90 358-432;&lt;br /&gt;
   charset coding     = 91-357;&lt;br /&gt;
   partition region = 2:non_coding,coding;&lt;br /&gt;
   set partition = region;&lt;br /&gt;
   lset applyto=(2) nucmodel=codon;&lt;br /&gt;
   prset ratepr=variable;&lt;br /&gt;
   mcmc ngen=5000 nchains=1 samplefreq=10;&lt;br /&gt;
end;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Next, a SLURM batch script must be created to run your job.  The first script below shows an MPI parallel script&lt;br /&gt;
for the above &#039;.nex&#039; input file.  Note that the number of processors the job can use is limited to&lt;br /&gt;
the number of chains in the input file.  Here, we have just 2 chains and therefore can request at most 2 processors.&lt;br /&gt;
If you request more processors than there are chains in the input file, you will get the following error&lt;br /&gt;
message:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
      The number of chains must be at least as great&lt;br /&gt;
      as the number of processors (in this case 4)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
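The chains-vs-processors constraint can also be checked before submitting.  The sketch below is a hypothetical pre-flight check, not part of BEST; the stub file and variable names are illustrative:

```shell
# Write a stub containing the 'mcmc' line from the mrbayes block above
printf 'mcmc ngen=5000 nchains=1 samplefreq=10;\n' > mcmc_stub.nex

# Parse the chain count and compare it with the rank count we intend
# to pass to 'mpirun -np'; BEST aborts when ranks exceed chains
NCHAINS=$(grep -o 'nchains=[0-9]*' mcmc_stub.nex | cut -d= -f2)
NPROCS=4
if [ "$NPROCS" -gt "$NCHAINS" ]; then
    echo "refusing: $NPROCS ranks exceed $NCHAINS chains"
fi
```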
&lt;br /&gt;
Also, to set all required environment variables and the path to the BEST executable, run the module load&lt;br /&gt;
command (the modules utility is discussed in detail above):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load best&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the MPI parallel SLURM batch script for BEST that requests 2 processors, one for each chain in the input file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name BEST_parallel&lt;br /&gt;
#SBATCH --ntasks=2&lt;br /&gt;
#SBATCH --cpus-per-task=1&lt;br /&gt;
#SBATCH --mem=2880&lt;br /&gt;
&lt;br /&gt;
# Find out name of master execution host (compute node)&lt;br /&gt;
echo -n &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; SLURM Master compute node is: &amp;quot;&lt;br /&gt;
hostname&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to the working directory in SLURM&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
# Use &#039;mpirun&#039; and point to the MPI parallel executable to run&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; Begin BEST Parallel Run ...&amp;quot;&lt;br /&gt;
mpirun -np 2 mbbest ./bglobin.nex &amp;gt; best_mpi.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; End   BEST Parallel Run ...&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
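The &#039;cd $SLURM_SUBMIT_DIR&#039; step in the script matters because SLURM starts batch jobs in a fresh shell, not in the directory you submitted from.  Its effect can be mimicked outside SLURM; the temporary directory below stands in for a real submit directory:

```shell
# Stand in for the directory SLURM records at submission time
export SLURM_SUBMIT_DIR="$(mktemp -d)"

# Change to it explicitly, exactly as the batch script above does;
# relative paths such as './bglobin.nex' then resolve there
cd "$SLURM_SUBMIT_DIR"
```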
&lt;br /&gt;
This script can be dropped into a file (say &#039;best_mpi.job&#039;) on ANDY and run with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sbatch best_mpi.job&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It should take less than five minutes to run and will produce SLURM output and error files beginning&lt;br /&gt;
with the job name &#039;BEST_parallel&#039;.  The primary BEST application results will be written into&lt;br /&gt;
the user-specified file at the end of the BEST command line after the greater-than sign. Here&lt;br /&gt;
it is named &#039;best_mpi.out&#039;.  The expression &#039;2&amp;gt;&amp;amp;1&#039; combines Unix standard output from the&lt;br /&gt;
program with Unix standard error.  Users should always explicitly specify the name of the&lt;br /&gt;
application&#039;s output file in this way to ensure that it is written directly into the user&#039;s working&lt;br /&gt;
directory, which has much more disk space than the SLURM spool directory on /var.&lt;br /&gt;
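The effect of the &#039;2&gt;&amp;1&#039; idiom can be seen with any command that writes to both streams; the helper function below is purely illustrative:

```shell
# A stand-in command that writes one line to stdout and one to stderr
emit() { echo "result line"; echo "error line" >&2; }

# Without '2>&1' only stdout reaches the file; stderr escapes
emit > only_stdout.txt 2>/dev/null

# With '2>&1' stderr is redirected to wherever stdout already points,
# so both lines land in the same file
emit > combined.txt 2>&1
```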
&lt;br /&gt;
Details on the meaning of the SLURM script are covered below in the SLURM section.  The most important&lt;br /&gt;
lines are the &#039;#SBATCH&#039; resource requests, which ask SLURM for one processor (core) per MrBayes&lt;br /&gt;
chain and 2,880 MBs of memory for the job.  SLURM places the job wherever the least used&lt;br /&gt;
resources are found.  The master compute node that it finally selects to run your job&lt;br /&gt;
will be printed in the SLURM output file by the &#039;hostname&#039; command.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center also provides a serial version of BEST.  A SLURM batch script for running the &lt;br /&gt;
serial version of BEST follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name BEST_serial&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --mem=2880&lt;br /&gt;
&lt;br /&gt;
# Find out name of master execution host (compute node)&lt;br /&gt;
echo -n &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; SLURM Master compute node is: &amp;quot;&lt;br /&gt;
hostname&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to the working directory in SLURM&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
# Just point to the serial executable to run&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; Begin BEST Serial Run ...&amp;quot;&lt;br /&gt;
mbbest_serial ./bglobin.nex &amp;gt; best_ser.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; End   BEST Serial Run ...&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Applications_Index&amp;diff=135</id>
		<title>Applications Index</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Applications_Index&amp;diff=135"/>
		<updated>2022-10-27T19:49:18Z</updated>

		<summary type="html">&lt;p&gt;James: Text replacement - &amp;quot;[pP][bB][sS]&amp;quot; to &amp;quot;SLURM&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div class=&amp;quot;noautonum&amp;quot;&amp;gt;__TOC__&amp;lt;/div&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;The purpose of this page is to provide an all-inclusive, alphabetical list of available applications, tools, and libraries.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
For information about using modules to run your applications go to [[Using Modules To Run Your Applications]]. For a list of applications sorted by academic relevance go to [http://wiki.csi.cuny.edu/cunyhpc/index.php?title=Applications_Index/Academic_relevance Academic Relevance].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== A == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ADCIRC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ADCIRC is a system of programs for solving time-dependent, free-surface, circulation and transport problems in two and three dimensions.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;  These programs utilize the finite element method in space allowing the use of highly flexible, unstructured grids. The ADCIRC distribution includes and integrates the METIS tool for unstructured grid generation. In addition, ADCIRC includes a distribution of SWAN to which it can be coupled to add a shore wave simulation model. Typical ADCIRC applications have included: (i) modeling tides and wind driven circulation, (ii) analysis of hurricane storm surge and flooding, (iii) dredging feasibility and material disposal studies, (iv) larval transport studies, (v) near shore marine operations. For more detail on using ADCIRC, please visit the ADCIRC website here [http://adcirc.org/index.html] and read the ADCIRC manual [http://adcirc.org/documentv49/ADCIRC_title_page.html]. Details on using SWAN with ADCIRC can be found here [http://www.caseydietrich.com/swanadcirc] and at the SWAN web site [http://swanmodel.sourceforge.net]. More information about use and set-up can be found here [[ADCIRC]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;AMBER (Assisted Model Building with Energy Refinement)&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
Amber is the collective name for a suite of programs for classical bio-molecular simulations. &lt;br /&gt;
The name &amp;quot;Amber&amp;quot; also denotes the family of potentials (force fields) used with Amber &lt;br /&gt;
software. Here we discuss only simulation packages, but not the force fields or free tools&lt;br /&gt;
available via AmberTools package. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/amber&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ANVIO&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
Anvio is an analysis and visualization platform for genomics data. Anvio allows various types of workflows to be &lt;br /&gt;
established. [[ANVIO]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;AUGUSTUS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
AUGUSTUS is a program that predicts genes in eukaryotic genomic sequences. Augustus is gene-finding software based on Hidden Markov Models (HMMs), &lt;br /&gt;
described in papers by Stanke and Waack (2003), Stanke et al. (2006, 2006b), and Stanke et al. (2008). The local version of the program is installed on &lt;br /&gt;
Penzias. More information can be found here: [[AUGUSTUS]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;AUTODOCK&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
AutoDock is a suite of automated docking tools.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt; It is designed to predict how small molecules, such as substrates or drug candidates, bind to a receptor of known 3D structure.  AutoDock actually consists of two main programs: &#039;&#039;autodock&#039;&#039; itself performs the docking of the ligand to a set of grids describing the target protein; &#039;&#039;autogrid&#039;&#039; pre-calculates these grids. More information about the software may be found at the autodock web-page [http://autodock.scripps.edu/]. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/autodock&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== B == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BAMOVA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
Bamova is a package used to do genetic analysis of a wide range of organisms on the basis of &lt;br /&gt;
next-generation sequence data. The software implements Bayesian Analysis of Molecular Variance and &lt;br /&gt;
different likelihood models for three different types of molecular data &lt;br /&gt;
(including two models for high throughput sequence data). For more detail on BAMOVA please visit the BAMOVA web site [http://www.uwyo.edu/buerkle/software/bamova] and manual &lt;br /&gt;
here [http://www.uwyo.edu/buerkle/software/bamova/bamova_manual_1.0.pdf]. Further information can also be found here [[BAMOVA]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BAYESCAN&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BAYESCAN is a population genomics software package.  It identifies outlier loci and is applicable &lt;br /&gt;
to both dominant and codominant data. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;This program, BayeScan, aims at identifying candidate loci under natural selection from genetic data, using differences in allele frequencies between populations.  BayeScan is based on the multinomial-Dirichlet model.  One of the scenarios covered consists of an island model in which subpopulation allele frequencies are correlated through a common migrant gene pool from which they differ in varying degrees.  The difference in allele frequency between this common gene pool and each subpopulation is measured by a subpopulation-specific&lt;br /&gt;
FST coefficient.  Therefore, this formulation can consider realistic ecological scenarios where the effective size and the immigration rate may differ among subpopulations.&lt;br /&gt;
More detailed information on Bayescan can be found at the web site here [http://cmpg.unibe.ch/software/bayescan/index.html] and in the manual here [http://cmpg.unibe.ch/software/bayescan/files/BayeScan2.1_manual.pdf]. More information about our installation can be found here [[BAYESCAN]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BEAST&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BEAST is a powerful and flexible evolutionary analysis package for molecular sequence variation. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The package implements a family of Markov chain Monte Carlo (MCMC) algorithms for Bayesian phylogenetic inference, divergence time dating, coalescent analysis, phylogeography and related molecular evolutionary analyses. It is a cross-platform Java program for Bayesian MCMC analysis of molecular sequences. It is entirely orientated towards rooted, time-measured phylogenies inferred using strict or relaxed molecular clock models. It can be used as a method of reconstructing phylogenies, but is also a framework for testing evolutionary hypotheses without conditioning on a single tree topology.  BEAST uses MCMC to average over tree space, so that each tree is weighted proportional to its posterior probability. The distribution includes a simple to use user-interface program called &#039;BEAUti&#039; for setting up standard analyses and a suite of programs for analysing the results. For more detail on BEAST (and BEAUTi) please visit the BEAST web site [http://beast.bio.ed.ac.uk/Main_Page]. More information about our installation can be found here [http://wiki.csi.cuny.edu/cunyhpc/index.php/Template:BEAST BEAST].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BEST&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BEST is an application aimed to estimate gene trees and the species tree from multilocus sequences.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The program uses information from multiple gene trees and performs a Bayesian analysis to estimate the &lt;br /&gt;
topology of the species tree, divergence times and population sizes.  &lt;br /&gt;
&lt;br /&gt;
It provides a new approach for estimating the mutation-rate-based&lt;br /&gt;
phylogenetic relationships among species.  Its method accounts for deep coalescence,&lt;br /&gt;
but not for other complicating issues such as horizontal transfer or gene duplication. The&lt;br /&gt;
program works in conjunction with the popular Bayesian phylogenetics package, MrBayes&lt;br /&gt;
(Ronquist and Huelsenbeck, Bioinformatics, 2003).  BEST&#039;s parameters are defined using&lt;br /&gt;
the &#039;prset&#039; command from MrBayes.  Details on BEST&#039;s capabilities and options are available&lt;br /&gt;
at the BEST web site here [http://www.stat.osu.edu/~dkp/BEST/introduction]. More information&lt;br /&gt;
about our installation is available here [[BEST]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BOWTIE2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BOWTIE2 is an ultrafast and memory-efficient tool for aligning sequencing reads to long reference sequences. It is particularly good at aligning reads of about 50 up to 100s or 1,000s of characters, and particularly good at aligning to relatively long (e.g. mammalian) genomes.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
BOWTIE2 indexes the genome with an FM Index to keep its memory&lt;br /&gt;
footprint small: for the human genome, its memory footprint is typically around 3.2 GB. BOWTIE2 supports gapped,&lt;br /&gt;
local, and paired-end alignment modes. BOWTIE2 is part of a sequence alignment and analysis tool chain developed&lt;br /&gt;
at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics&lt;br /&gt;
and Computational Biology.  The other tools in this collection, CUFFLINKS, SAMTOOLS, and TOPHAT are also installed at&lt;br /&gt;
the CUNY HPC Center. Additional information can be found at the BOWTIE2 home page here [http://bowtie-bio.sourceforge.net/bowtie2/index.shtml].&lt;br /&gt;
Information about our installation can be found here [[BOWTIE2]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BPP2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BPP2 uses a Bayesian modeling approach to generate the posterior probabilities of species assignments taking into account uncertainties due to unknown gene trees and the ancestral coalescent process. For tractability, it relies on a user-specified guide tree to avoid integrating over all possible species delimitations.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Additional information can be found at the download site here [http://abacus.gene.ucl.ac.uk/software.html]. More information about our installation can be found here [[BPP2]].&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BROWNIE&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
BROWNIE is a program for analyzing rates of continuous character evolution and looking for substantial rate differences in different parts of a tree using likelihood&lt;br /&gt;
ratio tests and Akaike Information Criterion (AIC) statistics. It now also implements many other methods for examining trait evolution and methods for doing species&lt;br /&gt;
delimitation. More information about our installation can be found here [[BROWNIE]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== C == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;CGAL&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The Computational Geometry Algorithms Library (CGAL) offers data structures and algorithms.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt; &lt;br /&gt;
Examples of these are triangulations (2D constrained triangulations, and Delaunay triangulations and periodic triangulations in &lt;br /&gt;
2D and 3D), Voronoi diagrams (for 2D and 3D points, 2D additively weighted Voronoi diagrams, and segment Voronoi diagrams), polygons &lt;br /&gt;
(Boolean operations, offsets, straight skeleton), polyhedra (Boolean operations), arrangements of curves and their applications &lt;br /&gt;
(2D and 3D envelopes, Minkowski sums), mesh generation (2D Delaunay mesh generation and 3D surface and volume mesh &lt;br /&gt;
generation, skin surfaces), geometry processing (surface mesh simplification, subdivision and parameterization, as well as &lt;br /&gt;
estimation of local differential properties, and approximation of ridges and umbilics), alpha shapes, convex hull &lt;br /&gt;
algorithms (in 2D, 3D and dD), search structures (kd trees for nearest neighbor search, and range and segment trees), &lt;br /&gt;
interpolation (natural neighbor interpolation and placement of streamlines), shape analysis, fitting, and distances &lt;br /&gt;
(smallest enclosing sphere of points or spheres, smallest enclosing ellipsoid of points, principal component analysis), and &lt;br /&gt;
kinetic data structures.&lt;br /&gt;
&lt;br /&gt;
The library is installed on PENZIAS.&lt;br /&gt;
&lt;br /&gt;
More information can be found here http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/CGAL. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;CONSED&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CONSED is a DNA sequence analysis finishing tool that provides sequence viewing, editing, alignment, and&lt;br /&gt;
assembly capabilities from an X Window System graphical user interface (GUI).  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It makes extensive use of other non-graphical&lt;br /&gt;
and underlying sequence analysis tools including PHRED, PHRAP, and CROSSMATCH that may also be used separately&lt;br /&gt;
and are described elsewhere in this document.  It also includes a viewer called BAMVIEW.  The CONSED tool chain is&lt;br /&gt;
developed and maintained at the University of Washington and is described&lt;br /&gt;
more completely here [http://bozeman.mbt.washington.edu/consed/consed.html]&lt;br /&gt;
CONSED is provided at the CUNY HPC Center under an academic license that allows use, but not the copying or out&lt;br /&gt;
bound transfer of any of the executables or files distributed under this academic license.  The license is not &lt;br /&gt;
transferable in any way and users wishing to run the application at their own site must acquire a license directly&lt;br /&gt;
from the authors.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center supports CONSED version 23.0 for interactive use on KARLE.  CONSED 23.0 and the tool&lt;br /&gt;
chain described above are also installed on ANDY to allow for the batch use of the underlying support tools mentioned above&lt;br /&gt;
and described in detail below.  In general, running GUI-based applications on ANDY&#039;s login node is discouraged.  There&lt;br /&gt;
should be little need to do this: KARLE sits on the periphery of the CUNY HPC network, making login there direct, and&lt;br /&gt;
KARLE shares its HOME directory file system with ANDY, so files created on either system are immediately available on&lt;br /&gt;
the other.&lt;br /&gt;
&lt;br /&gt;
Rather than rewrite portions of the CONSED manual here, users are directed to the manual&#039;s &amp;quot;Quick Tour&amp;quot; section&lt;br /&gt;
here [http://bozeman.mbt.washington.edu/consed/distributions/README.23.0.txt] and asked to walk through some&lt;br /&gt;
of the exercises after logging into KARLE.  If problems or questions come up, please post them to &amp;quot;hpchelp@csi.cuny.edu&amp;quot;.&lt;br /&gt;
The CONSED 23.0 distribution is installed on KARLE in the following directory:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/share/apps/consed/default&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
All the files in the distribution can be found there.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;CP2K&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CP2K is a program to perform atomistic and molecular simulations of solid state, liquid, molecular, and biological&lt;br /&gt;
systems.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It provides a general framework for different methods, such as density functional theory (DFT) using&lt;br /&gt;
a mixed Gaussian and plane waves approach (GPW) and classical pair and many-body potentials. CP2K provides&lt;br /&gt;
state-of-the-art methods for efficient and accurate atomistic simulations. More information about our installation &lt;br /&gt;
can be found here [[CP2K]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;CUFFLINKS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CUFFLINKS assembles transcripts, estimates their abundances, and tests for differential expression and regulation in&lt;br /&gt;
RNA-Seq samples.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It accepts aligned RNA-Seq reads and assembles the alignments into a parsimonious set of transcripts.&lt;br /&gt;
CUFFLINKS then estimates the relative abundances of these transcripts based on how many reads support each one, taking&lt;br /&gt;
into account biases in library preparation protocols.  CUFFLINKS is part of a sequence alignment and analysis tool chain developed&lt;br /&gt;
at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics&lt;br /&gt;
and Computational Biology.  The other tools in this collection, BOWTIE, SAMTOOLS, and TOPHAT are also installed at&lt;br /&gt;
the CUNY HPC Center.  Additional information can be found at the CUFFLINKS home page here [http://abacus.gene.ucl.ac.uk/software.html].&lt;br /&gt;
More information about our installation can be found here [[CUFFLINKS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== D == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;DL_POLY&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
DL_POLY is a general purpose molecular dynamics simulation package developed at Daresbury Laboratory by W. Smith, T.R. Forester and I.T. Todorov. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Both serial and parallel versions are available. The original package was developed by the Molecular Simulation Group (now part of the Computational Chemistry Group, MSG) at Daresbury Laboratory under the auspices of the Engineering and Physical Sciences Research Council (EPSRC) for the EPSRC&#039;s Collaborative Computational Project for the Computer Simulation of Condensed Phases ( CCP5). Later developments were also supported by the Natural Environment Research Council through the eMinerals project. The package is the property of the Central Laboratory of the Research Councils, UK. More information about our installation and use can be found here [[DL_POLY]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== E == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ExaML&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ExaML stands for Exascale Maximum Likelihood (ExaML) code for phylogenetic inference using MPI. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The code is installed only on Penzias and implements the popular RAxML search algorithm for maximum likelihood based inference of phylogenetic trees. &lt;br /&gt;
&lt;br /&gt;
It uses a radically new MPI parallelization approach that yields improved parallel efficiency, in particular on partitioned multi-gene or whole-genome datasets.&lt;br /&gt;
&lt;br /&gt;
When using ExaML please cite the following paper:&lt;br /&gt;
&lt;br /&gt;
Alexey M. Kozlov, Andre J. Aberer, Alexandros Stamatakis: &amp;quot;ExaML Version 3: A Tool for Phylogenomic Analyses on Supercomputers.&amp;quot; Bioinformatics (2015) 31 (15): 2577-2579.&lt;br /&gt;
&lt;br /&gt;
It is up to 4 times faster than RAxML-Light [1].&lt;br /&gt;
&lt;br /&gt;
Like RAxML-Light, ExaML also implements checkpointing, SSE3 and AVX vectorization, and memory-saving techniques.&lt;br /&gt;
&lt;br /&gt;
[1] A. Stamatakis, A.J. Aberer, C. Goll, S.A. Smith, S.A. Berger, F. Izquierdo-Carrasco: &amp;quot;RAxML-Light: A Tool for computing TeraByte Phylogenies&amp;quot;, Bioinformatics 2012; doi: 10.1093/bioinformatics/bts309.&lt;br /&gt;
&lt;br /&gt;
The run script for a parallel job is analogous to the one used for running RAxML on Penzias and Andy.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ExaBayes&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ExaBayes is a software package for Bayesian tree inference. It is particularly suitable for large-scale analyses on computer clusters. It is installed on the PENZIAS server at the HPC Center. &lt;br /&gt;
The installed package is the MPI-parallel version. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Availability:&#039;&#039;&#039; PENZIAS&lt;br /&gt;
&#039;&#039;&#039;Module file:&#039;&#039;&#039; exabayes&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Citation&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
Fredrik Ronquist, Maxim Teslenko, Paul van der Mark, Daniel L Ayres, Aaron Darling, Sebastian Höhna, Bret Larget, Liang Liu, Marc a Suchard, and John P Huelsenbeck. MrBayes 3.2: efficient Bayesian phylogenetic inference and model choice across a large model space. Systematic biology, 61(3):539--42, May 2012.&lt;br /&gt;
&lt;br /&gt;
Alexei J Drummond, Marc a Suchard, Dong Xie, and Andrew Rambaut. Bayesian phylogenetics with BEAUti and the BEAST 1.7. Molecular biology and evolution, 29(8):1969--73, August 2012. &lt;br /&gt;
&lt;br /&gt;
Clemens Lakner, Paul van der Mark, John P Huelsenbeck, Bret Larget, and Fredrik Ronquist. Efficiency of Markov chain Monte Carlo tree proposals in Bayesian phylogenetics. Systematic biology, 57(1):86--103, February 2008. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Use:&#039;&#039;&#039; An example SLURM script to run ExaBayes on PENZIAS is given below&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition=production&lt;br /&gt;
#SBATCH --job-name=&amp;lt;name_of_job&amp;gt;&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=2&lt;br /&gt;
&lt;br /&gt;
# SLURM starts the job in the directory it was submitted from;&lt;br /&gt;
# changing there explicitly keeps the script self-documenting&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
mpirun -np 2 exabayes &amp;lt;input_file&amp;gt; &amp;gt; output_file&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
More information about the application, along with sample workflows, is available on the ExaBayes web site:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://sco.h-its.org/exelixis/web/software/exabayes/manual/index.html#sec-11&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== F == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;FDPPDIV&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
FDPPDiv is a program for estimating divergence times on a fixed, rooted tree topology. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
FDPPDiv offers two alternative approaches to divergence time estimation. &lt;br /&gt;
The DPPDiv part refers to the Dirichlet Process Prior (DPP) model for divergence &lt;br /&gt;
time estimation, and the F prefix (for Fossil) refers to the new Fossil Birth-Death approach. &lt;br /&gt;
More information about our installation can be found here [[FDPPDIV]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== G == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GAMESS-US&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GAMESS is a program for ab initio molecular quantum chemistry.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Briefly, GAMESS can compute SCF wavefunctions including RHF, ROHF, UHF, GVB, and MCSCF. Correlation corrections to these SCF wavefunctions include Configuration Interaction, second-order perturbation theory, and Coupled-Cluster approaches, as well as the Density Functional Theory approximation. Excited states can be computed by CI, EOM, or TD-DFT procedures. Nuclear gradients are available for automatic geometry optimization, transition state searches, or reaction path following. Computation of the energy hessian permits prediction of vibrational frequencies, with IR or Raman intensities. Solvent effects may be modeled by the discrete Effective Fragment potentials, or continuum models such as the Polarizable Continuum Model. Numerous relativistic computations are available, including infinite-order two-component scalar corrections, with various spin-orbit coupling options. The Fragment Molecular Orbital method permits many of these sophisticated treatments to be used on very large systems by dividing the computation into small fragments. Nuclear wavefunctions can also be computed, in VSCF, or with explicit treatment of nuclear orbitals by the NEO code. More information, including code, can be found here [[GAMESS-US]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GARLI&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GARLI is a program that performs phylogenetic inference using the maximum-likelihood criterion.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Several sequence types are supported, including nucleotide, amino acid and codon. Version 2.0 adds support for&lt;br /&gt;
partitioned models and morphology-like data types. It is usable on all operating systems, and is written and&lt;br /&gt;
maintained by Derrick Zwickl at the University of Texas at Austin.  Additional information can be found&lt;br /&gt;
on the GARLI Wiki here [https://www.nescent.org/wg_garli/Main_Page]. More information about our installation &lt;br /&gt;
can be found here [[GARLI]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GAUSS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
An easy-to-use data analysis, mathematical and statistical environment based on the powerful, fast and efficient GAUSS Matrix Programming Language.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GAUSS is used to solve real world problems and data analysis problems of exceptionally large &lt;br /&gt;
scale. GAUSS is currently available on ANDY. At the CUNY HPC Center&lt;br /&gt;
GAUSS is typically run in serial mode. (Note:  GAUSS should not be confused with the&lt;br /&gt;
computational chemistry application Gaussian.) More information about our installation can &lt;br /&gt;
be found here [[GAUSS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Gaussian09&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Gaussian09 is third-party, commercially licensed software from Gaussian, Inc. It is a set of programs for calculating electronic structure.  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Gaussian09 is available for general use only on ANDY. The Gaussian User Guide can be found here [http://www.gaussian.com]. More information about our installation can be found here [[GAUSSIAN09]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GMP&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
GMP is a library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and &lt;br /&gt;
floating-point numbers. There is no practical limit to the precision except the ones implied by the &lt;br /&gt;
available memory in the machine GMP runs on. GMP has a rich set of functions, and the functions have a &lt;br /&gt;
regular interface. The library is installed on PENZIAS.&lt;br /&gt;
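As a quick illustration of why arbitrary precision matters: the value 2^128 overflows any 64-bit machine integer, yet an arbitrary-precision facility computes it exactly. The sketch below uses Python&#039;s built-in big integers from the shell purely as a stand-in for what GMP&#039;s mpz functions provide to C programs:&lt;br /&gt;
&lt;br /&gt;
```shell
# 2**128 does not fit in 64 bits; arbitrary-precision arithmetic
# (Python's built-in integers here, GMP's mpz_* functions in C)
# yields the exact 39-digit result.
python3 -c 'print(2**128)'
# 340282366920938463463374607431768211456
```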
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Gnuplot&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Gnuplot is a portable command-line driven graphing utility. It is installed on the following systems:&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
:* Karle under /usr/bin/gnuplot&lt;br /&gt;
:* Andy under /share/apps/gnuplot/default/bin/gnuplot&lt;br /&gt;
&lt;br /&gt;
Extensive documentation of gnuplot is available at the [http://www.gnuplot.info/  gnuplot&#039;s homepage].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GenomePop2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GenomePop2 is a newer and specialized version of the older program GenomePop. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GenomePop2 is designed to manage SNPs under more flexible and useful settings that are controlled by the user.  &lt;br /&gt;
If you need models with more than 2 alleles you should use the older GenomePop version of the program.  &lt;br /&gt;
&lt;br /&gt;
GenomePop2 allows the forward simulation of sequences of biallelic positions. As in the previous version, a number of evolutionary&lt;br /&gt;
and demographic settings are allowed. Several populations under any migration model can be implemented. Each population consists&lt;br /&gt;
of a number N of individuals.  Each individual is represented by one (haploids) or two (diploids) chromosomes with constant or variable&lt;br /&gt;
(hotspots) recombination between binary sites. The fitness model is multiplicative, with each derived allele having a multiplicative effect&lt;br /&gt;
of (1-s * h-E) on the global fitness value. By default E=0 and h=0.5 in diploids, but 1 in homozygotes or in haploids. Selective nucleotide&lt;br /&gt;
sites undergoing directional selection (positive or negative) in different populations can be defined. In addition, bottlenecks and/or&lt;br /&gt;
population expansion scenarios can be set up by the user for a desired number of generations. Several runs can be executed and&lt;br /&gt;
a sample of user-defined size is obtained for each run and population.  For more detail on how to use GenomePop2, please visit the&lt;br /&gt;
web site here [http://webs.uvigo.es/acraaj/GenomePop2.htm]. More information about our installation can be found here [[GENOMEPOP2]].&lt;br /&gt;
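The multiplicative fitness model described above can be sketched numerically. The values below (s=0.1, h=0.5, E=0, two heterozygous derived alleles) are purely illustrative, not GenomePop2 defaults for any real dataset:&lt;br /&gt;
&lt;br /&gt;
```shell
# Global fitness under the multiplicative model: the product over
# derived alleles of their per-allele effects. With s=0.1, h=0.5, E=0,
# each heterozygous derived allele contributes (1 - s*h) = 0.95.
awk 'BEGIN { w = (1 - 0.1*0.5) * (1 - 0.1*0.5); printf "%.4f\n", w }'
# 0.9025
```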
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GROMACS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GROMACS (Groningen Machine for Chemical Simulations)&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GROMACS is a full-featured suite of free software, licensed under the GNU&lt;br /&gt;
General Public License to perform molecular dynamics simulations -- in other words, to simulate the behavior of molecular&lt;br /&gt;
systems with hundreds to millions of particles using Newton&#039;s equations of motion.  It is primarily used for research on&lt;br /&gt;
proteins, lipids, and polymers, but can be applied to a wide variety of chemical and biological research questions.&lt;br /&gt;
&lt;br /&gt;
Details and submission scripts for production runs can be found at:&lt;br /&gt;
http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/gromacs&lt;br /&gt;
Please note that preparing a molecular system for simulation with the GROMACS tools cannot be done on the login node. Instead, users must either use their own workstation or use the interactive or development queues.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GPAW&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It uses real-space uniform grids and multigrid methods, atom-centered basis-functions or&lt;br /&gt;
plane-waves. GPAW calculations are controlled through scripts written in the programming language &lt;br /&gt;
Python. GPAW relies on the Atomic Simulation Environment (ASE), which is a Python package&lt;br /&gt;
that helps to describe atoms. The ASE package also handles molecular dynamics, analysis, &lt;br /&gt;
visualization, geometry optimization and more. More information about our installation can &lt;br /&gt;
be found here [[GPAW]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== H ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Hapsembler&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Hapsembler is a haplotype-specific genome assembly toolkit that is designed for genomes that are rich in SNPs and other types of polymorphism. Hapsembler can be used to assemble reads from a variety of platforms including Illumina and Roche/454.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;  Hapsembler is currently installed on the Appel system. In order to access the Hapsembler binaries, load the hapsembler module with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load hapsembler&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;HOOMD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Performs general purpose particle dynamics simulations, taking advantage of NVIDIA GPUs to attain a level of performance&lt;br /&gt;
equivalent to many processor cores on a fast cluster.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Unlike some other applications in the particle and molecular dynamics space, HOOMD developers have worked to implement &lt;br /&gt;
all of the code&#039;s computationally intensive kernels on the GPU, although currently only single node, single-GPU or &lt;br /&gt;
OpenMP-GPU runs are possible. There is no MPI-GPU or distributed parallel GPU version available at this time.&lt;br /&gt;
&lt;br /&gt;
HOOMD&#039;s object-oriented design patterns make it both versatile and expandable. Various types of potentials, integration methods&lt;br /&gt;
and file formats are currently supported, and more are added with each release. The code is available and open source, so anyone&lt;br /&gt;
can write a plugin or change the source to add additional functionality.  Simulations are configured and run using simple python&lt;br /&gt;
scripts, allowing complete control over the force field choice, integrator, all parameters, how many time steps are run, etc.&lt;br /&gt;
The scripting system is designed to be as simple as possible to the non-programmer.&lt;br /&gt;
&lt;br /&gt;
The HOOMD development effort is led by the Glotzer group at the University of Michigan, but many groups from different universities&lt;br /&gt;
have contributed code that is now part of the HOOMD main package, see the credits page for the full list. The HOOMD website and&lt;br /&gt;
documentation are available here [http://codeblue.umich.edu/hoomd-blue/about.html]. More information about our installation can be&lt;br /&gt;
found here [[HOOMD]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;HOPSPACK&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
HOPSPACK stands for Hybrid Optimization Parallel Search Package, designed to help users solve a wide range of derivative-free optimization problems.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;The latter can be noisy, non-convex, or non-smooth. The basic optimization problem addressed is to minimize an objective function f(x) of n unknowns subject to the constraints: AI x ≥ bI, AE x = bE, cI(x) ≥ 0, cE(x) = 0, and l ≤ x ≤ u.&lt;br /&gt;
The first two constraints specify linear inequalities and equalities with coefficient matrices AI and AE. The next two constraints describe nonlinear inequalities and equalities captured in the functions cI(x) and cE(x). The final constraints denote lower and upper bounds on the variables. HOPSPACK allows variables to be continuous or integer-valued and has provisions for multi-objective optimization problems. In general, the functions f(x), cI(x), and cE(x) can be noisy and nonsmooth, although most algorithms perform best on deterministic functions with continuous derivatives.&lt;br /&gt;
&lt;br /&gt;
Users may design and implement their own solvers, either by writing their own code or by building on existing solvers already in the framework. Because all solvers (called citizens) are members of the same global class, they can share assigned resources.&lt;br /&gt;
The main features of the package are:&lt;br /&gt;
&lt;br /&gt;
-	Only function values are required for the optimization.&lt;br /&gt;
-	The user must provide a separate program that can evaluate the objective and nonlinear constraint functions at a given point. &lt;br /&gt;
-	A robust implementation of the Generating Set Search (GSS) solver is supplied, including the capability to handle linear constraints. &lt;br /&gt;
-	Multiple solvers can run simultaneously and are easily configured to share information.&lt;br /&gt;
-	Solvers may share a cache of computed function and constraint evaluations to eliminate duplicate work.&lt;br /&gt;
-	Solvers can initiate and control sub-problems&lt;br /&gt;
Continuation -&amp;gt; [[HOPSPACK]].&lt;br /&gt;
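Since HOPSPACK only needs function values, the user-supplied evaluator can be any standalone program that returns f(x) at a requested point. The shell sketch below is a hypothetical stand-in (the function name and objective are invented for illustration; HOPSPACK&#039;s actual evaluator input/output file protocol is described in its manual):&lt;br /&gt;
&lt;br /&gt;
```shell
# Hypothetical black-box objective f(x1,x2) = (x1-3)^2 + (x2+1)^2.
# A derivative-free solver such as GSS only ever calls it at points
# and reads back the value; no gradients are required.
f() {
  awk -v a="$1" -v b="$2" 'BEGIN { print (a-3)^2 + (b+1)^2 }'
}
f 3 -1   # at the minimizer: prints 0
f 0 0    # prints 10
```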
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;HONDO PLUS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Hondo Plus is a versatile electronic structure code that combines work from&lt;br /&gt;
the original Hondo application developed by Harry King in the lab of Michel Dupuis&lt;br /&gt;
and John Rys, and that of numerous subsequent contributors. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It is currently distributed from the research lab of Dr. Donald Truhlar at the University &lt;br /&gt;
of Minnesota.  Part of the advantage of Hondo Plus is the availability of source&lt;br /&gt;
implementations of a wide variety of model chemistries developed over its lifetime&lt;br /&gt;
that researchers can adapt to their particular needs.  The license to use the code requires&lt;br /&gt;
a literature citation which is documented in the Hondo Plus 5.1 manual found&lt;br /&gt;
at:&lt;br /&gt;
&lt;br /&gt;
http://comp.chem.umn.edu/hondoplus/HONDOPLUS_Manual_v5.1.2007.2.17.pdf &lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[HONDO PLUS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;HUMAnN2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
HUMAnN is a pipeline for efficiently and accurately profiling the presence/absence and abundance of microbial pathways in a community from metagenomic or metatranscriptomic sequencing data (typically millions of short DNA/RNA reads). HUMAnN2 is the next generation of HUMAnN (HMP Unified Metabolic Analysis Network). Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/humann2&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== I ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;IMa2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The IMa2 application performs basic ‘Isolation with Migration’ calculations using Bayesian inference and Markov &lt;br /&gt;
chain Monte Carlo methods. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The only major conceptual addition to IMa2 that makes it different from the&lt;br /&gt;
original IMa  program is that it can handle data from multiple populations. This requires that the user &lt;br /&gt;
specify a phylogenetic tree. Importantly, the tree must be rooted, and the sequence in time of internal&lt;br /&gt;
nodes must be known and specified. More information on the IMa2 and IMa can be found in the user&lt;br /&gt;
manual here [http://lifesci.rutgers.edu/%7Eheylab/ProgramsandData/Programs/IMa2/Using_IMa2_8_24_2011.pdf].&lt;br /&gt;
Information about our installation can be found here [[IMA2]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;I-TASSER&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
I-TASSER is a platform for protein structure and function predictions. 3D models are built based on multiple-threading alignments by LOMETS and iterative template fragment assembly simulations; function insights are derived by matching the 3D models with the BioLiP protein function database. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/itasser&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== J ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;JULIA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
Julia is a high-level, high-performance dynamic programming language for technical computing, with syntax that is familiar to users of other technical computing environments. Julia is installed on Penzias.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== L ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;LAMARC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
LAMARC is a program which estimates population-genetic parameters such as population size, population growth rate,&lt;br /&gt;
recombination rate, and migration rates.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It approximates a summation over all possible genealogies that could explain&lt;br /&gt;
the observed sample, which may be sequence, SNP, microsatellite, or electrophoretic data.  LAMARC and its sister program&lt;br /&gt;
MIGRATE are successor programs to the older programs Coalesce, Fluctuate, and Recombine, which are no longer being&lt;br /&gt;
supported.  These programs are memory-intensive, but can run effectively on workstations. They are supported on a variety&lt;br /&gt;
of operating systems.  For more detail on LAMARC please visit the website here [http://evolution.genetics.washington.edu/lamarc/index.html],&lt;br /&gt;
read this paper [http://evolution.genetics.washington.edu/lamarc/download/bioinformatics2006-lamarc2.0.pdf], and look&lt;br /&gt;
at the documentation here [http://evolution.genetics.washington.edu/lamarc/documentation/index.html]. More information&lt;br /&gt;
about our installation can be found here [[LAMARC]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;LAMMPS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
LAMMPS is a classical molecular dynamics code that models an ensemble of particles in a liquid, solid, or gaseous state.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It can model atomic, polymeric, biological, metallic, granular, and coarse-grained systems using a variety of force fields and boundary conditions.  &lt;br /&gt;
LAMMPS runs efficiently on single-processor desktop or laptop machines, but is also designed for parallel computers, including clusters with and without GPUs. &lt;br /&gt;
It will run on any parallel machine that compiles C++ and supports the MPI message-passing library. This includes distributed- or shared-memory parallel &lt;br /&gt;
machines and Beowulf-style clusters. LAMMPS can model systems with only a few particles up to millions or billions. LAMMPS is a freely-available open-source &lt;br /&gt;
code, distributed under the terms of the GNU Public License, which means you can use or modify the code however you wish.  LAMMPS is designed to be easy to &lt;br /&gt;
modify or extend with new capabilities, such as new force fields, atom types, boundary conditions, or diagnostics. A complete description of LAMMPS can be found &lt;br /&gt;
in its on-line manual here [http://lammps.sandia.gov/doc/Manual.html] or from the full PDF manual here [http://lammps.sandia.gov/doc/Manual.pdf]. Information&lt;br /&gt;
about our installation can be found here [[LAMMPS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;LS-DYNA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From its early development in the 1970s, LS-DYNA has evolved into a general purpose material&lt;br /&gt;
stress, collision, and crash analysis program with many built-in material and structural element&lt;br /&gt;
models. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In recent years, the code has also been adapted for both OpenMP and MPI parallel execution&lt;br /&gt;
on a variety of platforms.  The most recent version, LS-DYNA 7.1.2, is installed on &lt;br /&gt;
ANDY at the CUNY HPC Center under an academic license held by the City College of New York.&lt;br /&gt;
The use of this license to do work that is commercial in any way is prohibited.&lt;br /&gt;
&lt;br /&gt;
Details on LS-DYNA&#039;s use, input deck construction, and execution options can be found in the LS-DYNA&lt;br /&gt;
manual here [http://ftp.lstc.com/user/manuals/ls-dyna_971_manual_k_rev1.pdf]. All files related&lt;br /&gt;
to the HPC Center installation of version 971 (executables and example inputs) are located in:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
/share/apps/lsdyna/default/[bin,examples]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[LSDYNA]].&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== M ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MAGMA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
MAGMA is a library similar to LAPACK but for hybrid architectures. MAGMA provides implementations for CUDA, Intel Xeon Phi, and OpenCL. &lt;br /&gt;
On CUNY HPC Center systems, MAGMA is installed in its CUDA variant only, on Penzias.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MATHEMATICA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
“Mathematica” is a fully integrated technical computing system that combines fast, high-precision numerical and symbolic computation with data visualization and programming capabilities. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Mathematica is currently installed on the CUNY HPC Center&#039;s ANDY cluster (andy.csi.cuny.edu) and KARLE standalone server (karle.csi.cuny.edu). The basics of running Mathematica on CUNY HPC systems are presented here.  Additional information on how to use Mathematica can be found at http://www.wolfram.com/learningcenter/&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[MATHEMATICA]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MATLAB&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The MATLAB high-performance language for technical computing&lt;br /&gt;
integrates computation, visualization, and programming in an&lt;br /&gt;
easy-to-use environment where problems and solutions are expressed in&lt;br /&gt;
familiar mathematical notation.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Typical uses include:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Math and computation&lt;br /&gt;
&lt;br /&gt;
Algorithm development&lt;br /&gt;
&lt;br /&gt;
Data acquisition&lt;br /&gt;
&lt;br /&gt;
Modeling, simulation, and prototyping&lt;br /&gt;
&lt;br /&gt;
Data analysis, exploration, and visualization&lt;br /&gt;
&lt;br /&gt;
Scientific and engineering graphics&lt;br /&gt;
&lt;br /&gt;
Application development, including graphical user interface building&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[MATLAB]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MET (Model Evaluation Tools)&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MET was developed by the National Center for Atmospheric Research (NCAR) Developmental Testbed Center (DTC) through the generous support of the U.S. Air Force Weather Agency (AFWA) and the National Oceanic and Atmospheric Administration (NOAA).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;MET is designed to be a highly-configurable, state-of-the-art suite of verification tools. It was developed using output from the Weather Research and Forecasting (WRF) modeling system but may be applied to the output of other modeling systems as well.&lt;br /&gt;
&lt;br /&gt;
MET provides a variety of verification techniques, including:&lt;br /&gt;
&lt;br /&gt;
*Standard verification scores comparing gridded model data to point-based observations&lt;br /&gt;
*Standard verification scores comparing gridded model data to gridded observations&lt;br /&gt;
*Spatial verification methods comparing gridded model data to gridded observations using neighborhood, object-based, and intensity-scale decomposition approaches&lt;br /&gt;
*Probabilistic verification methods comparing gridded model data to point-based or gridded observations&lt;br /&gt;
&lt;br /&gt;
More information about use and set-up can be found here [[MET]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Migrate&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Migrate estimates population parameters such as effective population sizes and migration rates&lt;br /&gt;
of n populations, using genetic data.  It uses a coalescent theory approach taking into&lt;br /&gt;
account the history of mutations and the uncertainty of the genealogy.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;Parameter estimates are obtained by either a maximum likelihood (ML) approach or Bayesian&lt;br /&gt;
inference (BI).  Migrate&#039;s output is presented in a text file and in a PDF file. The PDF file&lt;br /&gt;
will eventually contain all possible analyses, including histograms of posterior distributions.&lt;br /&gt;
More information about our installation can be found here [[MIGRATE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MPFR&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The MPFR library is a C library for multiple-precision floating-point computations with correct rounding. MPFR has been continuously supported by &lt;br /&gt;
INRIA, and the current main authors come from the Caramel and AriC project-teams at Loria (Nancy, France) and LIP (Lyon, France), respectively; see &lt;br /&gt;
more on the credit page.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
MPFR is based on the GMP multiple-precision library. The main goal of MPFR is to provide a library for multiple-precision &lt;br /&gt;
floating-point computation which is both efficient and has well-defined semantics. It copies the good ideas from the ANSI/IEEE-754 standard for &lt;br /&gt;
double-precision floating-point arithmetic (53-bit significand). The library is installed on PENZIAS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MRBAYES&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MrBayes is a program for the Bayesian estimation of phylogeny.  Bayesian inference of&lt;br /&gt;
phylogeny is based upon a quantity called the posterior probability distribution of trees,&lt;br /&gt;
which is the probability of a tree conditioned on certain observations.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The conditioning is&lt;br /&gt;
accomplished using Bayes&#039;s theorem. The posterior probability distribution of trees is&lt;br /&gt;
impossible to calculate analytically; instead, MrBayes uses a simulation technique called&lt;br /&gt;
Markov chain Monte Carlo (or MCMC) to approximate the posterior probabilities of trees.&lt;br /&gt;
More information about our installation can be found here [[MRBAYES]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;msABC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
msABC is a program for simulating various neutral evolutionary demographic scenarios&lt;br /&gt;
based on the software ms (Hudson 2002). msABC extends ms, calculating a multitude of&lt;br /&gt;
summary statistics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Therefore, msABC is suitable for performing the sampling step of an&lt;br /&gt;
Approximate Bayesian Computation (ABC) analysis under various neutral demographic&lt;br /&gt;
models. The main advantages of msABC are (i) use of various prior distributions, such as&lt;br /&gt;
uniform, Gaussian, log-normal, and gamma, (ii) implementation of a multitude of summary statistics&lt;br /&gt;
for one or more populations, (iii) efficient implementation, which allows the analysis of&lt;br /&gt;
hundreds of loci and chromosomes even on a single computer, and (iv) extended flexibility, such&lt;br /&gt;
as simulation of loci of variable size and simulation of missing data.&lt;br /&gt;
More information about our installation can be found here [[msABC]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MSMS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MSMS is a tool to generate sequence samples under both neutral models and single locus selection models.&lt;br /&gt;
MSMS permits  the full range of demographic models provided by its relative MS (Hudson, 2002).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
In particular, it allows for multiple demes with arbitrary migration patterns, population growth and decay in each deme, and&lt;br /&gt;
for population splits and mergers. Selection (including dominance) can depend on the deme and also change&lt;br /&gt;
with time.&lt;br /&gt;
More information about our installation can be found here [[MSMS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== N ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;NAMD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NAMD is a parallel molecular dynamics code designed for high-performance simulation&lt;br /&gt;
of large biomolecular systems. [http://www.ks.uiuc.edu/Research/namd].&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The main server for molecular dynamics calculations is PENZIAS, which supports both GPU and non-GPU versions of NAMD.&lt;br /&gt;
However, the MPI-only (no GPU support) parallel versions of NAMD are also installed on SALK and ANDY. &lt;br /&gt;
More information about our installation can be found here [[NAMD]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Network Simulator-2 (NS2)&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NS2 is a discrete event simulator targeted at networking research. NS2 provides&lt;br /&gt;
substantial support for simulation of TCP, routing, and multicast protocols over&lt;br /&gt;
wired and wireless (local and satellite) networks.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It is installed on BOB at the CUNY HPC Center. For more detailed information, look [http://www.isi.edu/nsnam/ns/ here].&lt;br /&gt;
More information about our installation can be found here [[NS2]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;NWChem&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NWChem is an ab initio computational chemistry software package which also includes molecular mechanics and molecular dynamics (MM, MD) functionality as well as coupled quantum mechanics/molecular dynamics (QM-MD).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
NWChem has been developed by the Molecular Sciences Software group at the Department of Energy&#039;s EMSL. The software is available on PENZIAS and ANDY.&lt;br /&gt;
More information about our installation can be found here [[NWChem]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== O == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Octopus&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Octopus is a pseudopotential real-space package aimed at the simulation of the electron-ion dynamics of one-, two-, and three-dimensional finite systems subject to time-dependent electromagnetic fields.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The program is based on time-dependent density-functional theory (TDDFT) in the Kohn-Sham scheme. All quantities are expanded in a regular mesh in real space, and the simulations are performed in real time. The program has been successfully used to calculate linear and non-linear absorption spectra, harmonic spectra, laser induced fragmentation, etc. of a variety of systems.&lt;br /&gt;
More information about our installation can be found here [[OCTOPUS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;OpenMM&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenMM is both a library and a stand-alone application which provides tools for modern molecular modeling simulation. As a library it can be hooked into any code, allowing that code to do molecular modeling with minimal extra coding.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Moreover, OpenMM has a strong emphasis on hardware acceleration via GPUs, thus providing not just a consistent API but also much greater performance than most other available codes. OpenMM was developed as part of the Physics-Based Simulation project, led by Prof. Pande.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;OpenFOAM&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenFOAM is first and foremost a library which users may incorporate in their own code(s). OpenFOAM is installed on PENZIAS.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
More information about our installation can be found here [[OpenFOAM]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;OpenSees&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenSees, the Open System for Earthquake Engineering Simulation, is an object-oriented, open source software framework.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It allows users to create both serial and parallel finite element computer applications for simulating the response of structural and geotechnical systems subjected to earthquakes and other hazards. OpenSees is primarily written in C++ and uses several Fortran and C numerical libraries for linear equation solving, and material and element routines. The software is installed on PENZIAS.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ORCA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ORCA is an electronic structure program capable of carrying out geometry optimizations and predicting a large number of spectroscopic parameters at different levels of theory.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Besides the use of Hartree-Fock theory, density functional theory (DFT), and semiempirical methods, high-level ab initio quantum chemical methods, based on the configuration interaction and coupled cluster methods, are included in ORCA to an increasing degree.&lt;br /&gt;
More information about our installation can be found here [[ORCA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== P == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ParGAP&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ParGAP is built on top of the GAP system. The latter is a system for computational discrete algebra, with particular emphasis on Computational Group Theory. GAP provides a programming language, a library of thousands of functions implementing algebraic algorithms written in the GAP language, as well as large data libraries of algebraic objects.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The ParGAP (Parallel GAP) package itself provides a way of writing parallel programs using the GAP language. Former names of the package were ParGAP/MPI and GAP/MPI; the word MPI refers to Message Passing Interface, a well-known standard for parallelism. ParGAP is based on the MPI standard, and this distribution includes a subset implementation of MPI, to provide a portable layer with a high level interface to BSD sockets.&lt;br /&gt;
More information about our installation can be found here [[ParGAP]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;POPABC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PopABC is a computer package to estimate historical demographic parameters of closely related species/populations (e.g. population size, migration rate, mutation rate, recombination rate, splitting events) within an isolation-with-migration model.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The software performs coalescent simulation in the framework of approximate Bayesian computation (ABC; Beaumont et al., 2002). PopABC can also be used to perform Bayesian model choice to discriminate between different demographic scenarios. The program can be used either for research or for education and teaching purposes. Further details and a manual can be found at the POPABC website here [http://code.google.com/p/popabc]&lt;br /&gt;
More information about our installation can be found here [[POPABC]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;PHOENICS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PHOENICS is an integrated Computational Fluid Dynamics (CFD) package for the preparation, simulation, and visualization of&lt;br /&gt;
processes involving fluid flow, heat or mass transfer, chemical reaction, and/or combustion in engineering equipment, building&lt;br /&gt;
design, and the environment.  More detail is available at the CHAM website, here http://www.cham.co.uk. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Although we expect most users to pre- and post-process their jobs on office-local clients, the CUNY HPC Center has installed&lt;br /&gt;
the Unix version of the &#039;&#039;entire&#039;&#039; PHOENICS package on ANDY.   PHOENICS is installed in /share/apps/phoenics/default where all&lt;br /&gt;
the standard PHOENICS directories are located (d_allpro, d_earth, d_enviro, d_photo, d_priv1, d_satell, etc.).  Of particular interest&lt;br /&gt;
on ANDY is the MPI parallel version of the &#039;earth&#039; executable &#039;parexe&#039; which makes full use of the parallel processing power of the &lt;br /&gt;
ANDY cluster for larger individual jobs.  While the parallel scaling properties of PHOENICS jobs will vary depending on the job size,&lt;br /&gt;
processor type, and the cluster interconnect, larger workloads will generally scale and run efficiently on 8 to 32 processors,&lt;br /&gt;
while smaller problems will scale efficiently only up to about 4 processors.  More detail on parallel PHOENICS is available at&lt;br /&gt;
http://www.cham.co.uk/products/parallel.php.   Aside from the tightly coupled MPI parallelism of &#039;parexe&#039;, users can run multiple&lt;br /&gt;
instances of the non-parallel modules on ANDY (including the serial &#039;earexe&#039; module) when a parametric approach can be used&lt;br /&gt;
to solve their problems.&lt;br /&gt;
More information about our installation can be found here [[PHOENICS]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;PHRAP-PHRED&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PHRAP and PHRED are part of the DNA sequence analysis tool set that also includes the programs&lt;br /&gt;
CROSSMATCH and SWAT.  These tools are described in detail here [http://www.phrap.org/phredphrapconsed.html],&lt;br /&gt;
but a brief description of both, extracted from their manuals, follows.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
PHRED and PHRAP (along with CONSED) can be used for both small sequence assemblies and larger shotgun analyses. This makes the&lt;br /&gt;
tools a perhaps under-utilized set for smaller non-genomic groups.  Some variables may need to be adjusted,&lt;br /&gt;
particularly in CONSED, but researchers who have multiple sequences from a small locus can use the &lt;br /&gt;
suite, starting from their chromatogram files.  &lt;br /&gt;
More information about our installation can be found here [[PHRAP-PHRED]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;PyRAD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Reduced-representation genomic sequence data (e.g., RADseq, GBS, ddRAD) are commonly used to study population-level research questions and consequently most software packages for assembling or analyzing such data are designed for sequences with little variation across samples.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Phylogenetic analyses typically include species with deeper divergence times (more variable loci across samples) and thus a different approach to clustering and identifying orthologs will perform better. pyRAD is intended for use with any type of restriction-site associated DNA. It currently supports RAD, ddRAD, PE-ddRAD, GBS, PE-GBS, EzRAD, PE-EzRAD, 2B-RAD, nextRAD, and can be extended to other types.&lt;br /&gt;
More information about our installation can be found here [[PyRAD]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Python&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Python is a programming language that lets you work more quickly and integrate your systems more effectively. You can learn to use Python and see almost immediate gains in productivity and lower maintenance costs. [http://www.python.org/]&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
There are two supported versions installed on the Andy system: &lt;br /&gt;
&lt;br /&gt;
* Python 3.1.3 located under /share/apps/python/3.1.3/bin&lt;br /&gt;
* Python 2.7.3 located under /share/apps/epd/7.3-2/bin&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[PYTHON]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Installing Python packages&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Users may install Python packages/modules in their own space.  Many packages available in Python repositories can be installed easily with the pip package manager, which is available in any of the Anaconda and Miniconda builds.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Users must remember that running pip without first loading a Python module will install packages against the system Python, which matches the login node only. The Python interpreters available on all nodes (after loading a module) are installed under the /share/usr/compilers/python space. Thus, when installing packages in user space it is very important to follow the procedure outlined below. The examples below demonstrate how users can install the package &amp;quot;guppy&amp;quot; in their own space:&lt;br /&gt;
&lt;br /&gt;
For Python 2.7.13 in Anaconda build:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load python/2.7.13_anaconda&lt;br /&gt;
pip install guppy --user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For Python 3.6.0 in Anaconda build&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load python/3.6.0_anaconda&lt;br /&gt;
pip install guppy --user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For Python 2.7.13 in Miniconda&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load python/miniconda2&lt;br /&gt;
pip install guppy --user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For Python 3.6.0 in Miniconda 3&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load python/miniconda3&lt;br /&gt;
pip install guppy --user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check if the package is properly installed type:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
pip list | grep guppy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
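Alternatively, one can verify from inside Python itself whether a package resolves in the currently active interpreter. The sketch below uses only the standard library; &amp;quot;guppy&amp;quot; is simply the example package from above, and any package name can be substituted:&lt;br /&gt;
&lt;br /&gt;
```python
# Check whether a named package can be imported by the active
# interpreter, without actually importing (executing) it.
import importlib.util

def is_installed(name):
    """Return True if `name` resolves to an importable module or package."""
    return importlib.util.find_spec(name) is not None

print(is_installed("math"))            # standard-library module: True
print(is_installed("no_such_package")) # not installed: False
```
&lt;br /&gt;
Because the check runs in whichever interpreter the loaded module puts first on the PATH, it reflects exactly the environment that batch jobs will see.&lt;br /&gt;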
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== Q == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;QIIME&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
QIIME (pronounced &amp;quot;chime&amp;quot;) stands for Quantitative Insights Into Microbial Ecology. QIIME is a pipeline application that uses numerous third-party applications.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
QIIME takes users from their raw sequencing output through initial analyses such as OTU picking, taxonomic assignment, and construction of phylogenetic trees from representative sequences of OTUs, and through downstream statistical analysis, visualization, and production of publication-quality graphics.&lt;br /&gt;
More information about our installation can be found here [[QIIME]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== R == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;R&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
R is a free software environment for statistical computing and graphics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:15px;&amp;quot; &amp;gt;General Notes&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The R language has become a de facto standard among statisticians for the development of statistical software, and is widely used for data analysis. R is available on the following HPC Center servers: Karle, Penzias, Appel, and Andy. Karle is the only machine where R can be used without submitting jobs to the SLURM manager. On all other systems users must submit their R jobs via the SLURM batch scheduler.&lt;br /&gt;
More information about our installation can be found here [[R]]&lt;br /&gt;
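On the batch systems, R jobs are submitted through SLURM like any other job. A minimal sketch of a serial R batch script follows; the module name, resource requests, and script name are placeholders, not the actual names on any particular system (check &amp;quot;module avail&amp;quot; for the site&#039;s R module):&lt;br /&gt;
&lt;br /&gt;
```shell
#!/bin/bash
# Hypothetical minimal SLURM script for a serial R job.
# Module and file names below are placeholders; adjust to the
# target system before use.
#SBATCH --job-name=r_example
#SBATCH --ntasks=1
#SBATCH --mem=4G
#SBATCH --time=01:00:00

module load r              # placeholder: use the site's R module name
Rscript my_analysis.R      # run the R script non-interactively
```
&lt;br /&gt;
Submit with &amp;quot;sbatch&amp;quot; and monitor with &amp;quot;squeue&amp;quot;, as with any SLURM job.&lt;br /&gt;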
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;RAXML&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Randomized Axelerated Maximum Likelihood (RAxML) is a program for sequential and parallel&lt;br /&gt;
maximum likelihood based inference of large phylogenetic trees.  It is a descendent of fastDNAml&lt;br /&gt;
which in turn was derived from Joe Felsenstein’s DNAml, which is part of the PHYLIP package.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
RAxML is installed at the CUNY HPC Center on ANDY, with multiple versions available in both serial and MPI-parallel forms.  The MPI-parallel version should be run on four or more cores; it is also installed on Penzias. &lt;br /&gt;
More information about our installation can be found here [[RAXML]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== S == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;SAGE&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Sage can be used to study elementary and advanced, pure and applied mathematics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
This includes a huge range of mathematics, including basic algebra, calculus, elementary to very&lt;br /&gt;
advanced number theory, cryptography, numerical computation, commutative algebra, group&lt;br /&gt;
theory, combinatorics, graph theory, exact linear algebra and much more.&lt;br /&gt;
More information about our installation can be found here [[SAGE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;SAMTOOLS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
SAMTOOLS provides various utilities for manipulating alignments in the SAM format, including sorting,&lt;br /&gt;
merging, indexing and generating alignments in a per-position format.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
SAM (Sequence Alignment/Map) format is a generic format for storing large nucleotide sequence alignments. SAM is a compact format that&lt;br /&gt;
aims to:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Is flexible enough to store all the alignment information generated by various alignment programs;&lt;br /&gt;
&lt;br /&gt;
Is simple enough to be easily generated by alignment programs or converted from existing formats;&lt;br /&gt;
&lt;br /&gt;
Allows most operations on the alignment to work without loading the whole alignment into memory;&lt;br /&gt;
&lt;br /&gt;
Allows the file to be indexed by genomic position to efficiently retrieve all reads aligning to a locus.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[SAMTOOLS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;SAS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
SAS (pronounced &amp;quot;sass&amp;quot;, originally Statistical Analysis System) is an integrated system of software products provided by SAS Institute Inc.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It enables the programmer to perform:&lt;br /&gt;
:* data entry, retrieval, management, and mining&lt;br /&gt;
:* report writing and graphics&lt;br /&gt;
:* statistical analysis&lt;br /&gt;
:* business planning, forecasting, and decision support&lt;br /&gt;
:* operations research and project management&lt;br /&gt;
:* quality improvement&lt;br /&gt;
:* applications development&lt;br /&gt;
:* data warehousing (extract, transform, load)&lt;br /&gt;
:* platform independent and remote computing&lt;br /&gt;
More information about our installation can be found here [[SAS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Stata/MP&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Stata is a complete, integrated statistical package that provides tools for data analysis, data management, and graphics. Stata/MP takes advantage of multiprocessor computers. The CUNY HPC Center is licensed to run Stata on up to 8 cores. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Currently Stata/MP is available for users on Karle (karle.csi.cuny.edu). &lt;br /&gt;
More information about our installation can be found here [[STATA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Structurama&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Structurama is a program for inferring population structure from genetic data. The program assumes that the sampled loci&lt;br /&gt;
are in linkage equilibrium and that the allele frequencies for each population are drawn from a Dirichlet probability distribution. Two different models for population structure are implemented.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
First, Structurama offers the method of Pritchard et al. (2000), in which the number of populations is considered fixed. The program also allows the number of populations to be a random variable following a Dirichlet process prior (Pella and Masuda, 2006; Huelsenbeck and Andolfatto, 2007).&lt;br /&gt;
More information about our installation can be found here [[STRUCTURAMA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Structure&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The program Structure is a free software package for using multi-locus genotype data to investigate&lt;br /&gt;
population structure.  Its uses include inferring the presence of distinct populations, assigning individuals&lt;br /&gt;
to populations, studying hybrid zones, identifying migrants and admixed individuals, and estimating&lt;br /&gt;
population allele frequencies in situations where many individuals are migrants or admixed.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;It can be applied to most of the commonly used genetic markers, including SNPs, microsatellites, RFLPs and AFLPs. More detailed information about Structure can be found at the web site here [http://pritch.bsd.uchicago.edu/structure.html]. Structure is installed on ANDY at the CUNY HPC Center.  Structure is a serial program. &lt;br /&gt;
More information about our installation can be found here [[STRUCTURE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== T == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Thrust Library (CUDA)&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Thrust is a C++ template library for CUDA based on the Standard Template Library (STL). Thrust allows you&lt;br /&gt;
to implement high performance parallel applications with minimal programming effort through a high-level&lt;br /&gt;
interface that is fully interoperable with CUDA C.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Since CUDA 4.0, Thrust has been included in the default&lt;br /&gt;
CUDA distribution. The HPC Center currently runs CUDA as the default on PENZIAS, which includes the&lt;br /&gt;
Thrust library. More information about our installation can be found here [[THRUST]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;TOPHAT&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
TOPHAT is a fast splice junction mapper for RNA-Seq reads. It aligns RNA-Seq reads to mammalian-sized&lt;br /&gt;
genomes using the ultra high-throughput short read aligner Bowtie, and then analyzes the mapping results&lt;br /&gt;
to identify splice junctions between exons.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
TOPHAT is part of a sequence alignment and analysis tool chain developed at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics and Computational Biology.&lt;br /&gt;
More information about our installation can be found here [[TOPHAT]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Trinity&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Trinity, developed at the Broad Institute and the Hebrew University of Jerusalem, represents a novel method for the efficient and robust de novo reconstruction of transcriptomes from RNA-seq data.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Trinity combines three independent software modules: Inchworm, Chrysalis, and Butterfly, applied sequentially to process large volumes of RNA-seq reads. Trinity partitions the sequence data into many individual de Bruijn graphs, each representing the transcriptional complexity at a given gene or locus, and then processes each graph independently to extract full-length splicing isoforms and to tease apart transcripts derived from paralogous genes.&lt;br /&gt;
More information about our installation can be found here [[TRINITY]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== U == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;USEARCH&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
USEARCH is a unique sequence analysis tool with thousands of users world-wide.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
USEARCH offers search and clustering algorithms that are often orders of magnitude faster than BLAST. &lt;br /&gt;
More information about our installation can be found here [[USEARCH]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== V == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;VELVET&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
Velvet is a set of algorithms for &#039;&#039;de novo&#039;&#039; short read assembly using de Bruijn graphs. It was developed at the European Bioinformatics Institute, Cambridge, UK. More information about our installation can be found here [[VELVET]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;VSEARCH&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
VSEARCH is an open-source alternative to USEARCH.  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
VSEARCH stands for vectorized search, as the tool takes advantage of parallelism in the form of SIMD vectorization as well as multiple threads to perform accurate alignments at high speed. VSEARCH uses an optimal global aligner (full dynamic programming Needleman-Wunsch), in contrast to USEARCH which by default uses a heuristic seed and extend aligner. This usually results in more accurate alignments and overall improved sensitivity (recall) with VSEARCH, especially for alignments with gaps. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Additional details on VSEARCH can be found at: [https://github.com/torognes/vsearch this link]&lt;br /&gt;
&lt;br /&gt;
VSEARCH is installed on the Penzias HPC cluster. To start using VSEARCH, load the corresponding module first:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load vsearch  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
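Once the module is loaded, a typical invocation clusters reads at a fixed identity threshold. The options below are standard VSEARCH flags; the input and output file names are placeholders for illustration only.

```shell
# Cluster sequences at 97% identity using VSEARCH's documented
# options; file names here are illustrative, not site-specific.
vsearch --cluster_fast reads.fasta \
        --id 0.97 \
        --centroids otus.fasta \
        --threads 4
```

The `--centroids` output holds one representative sequence per cluster, a common starting point for OTU-based analyses.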
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;VMD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
VMD is a molecular visualization program for displaying, animating, and analyzing large biomolecular systems using 3-D graphics and built-in scripting.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It was developed by The Theoretical and Computational Biophysics Group at the University of Illinois. It is documented on the [http://www.ks.uiuc.edu/Research/vmd/ TCB&#039;s homepage].&lt;br /&gt;
&lt;br /&gt;
VMD is installed on Karle. To use it from the command line, log in to Karle as usual and start VMD by typing &amp;quot;vmd&amp;quot; followed by return, or use the full path: &lt;br /&gt;
&amp;quot;/share/apps/vmd/default/bin/vmd&amp;quot;&lt;br /&gt;
&lt;br /&gt;
In order to use VMD in GUI mode, log in to Karle with the -X option (see [http://wiki.csi.cuny.edu/cunyhpc/index.php/Main_Page#X11_Forwarding_or_Tunneling this article] &lt;br /&gt;
for details) and start VMD as described above.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== W == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;WRF&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The Weather Research and Forecasting (WRF) model is a numerical weather prediction system designed for both weather&lt;br /&gt;
forecasting and weather research.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It was created through a partnership that includes the National Oceanic and Atmospheric&lt;br /&gt;
Administration (NOAA), the National Center for Atmospheric Research (NCAR), and more than 150 other organizations&lt;br /&gt;
and universities in the United States and abroad. WRF is the latest numerical model and application to be adopted by NOAA&#039;s&lt;br /&gt;
National Weather Service as well as the U.S. military and private meteorological services. It is also being adopted by&lt;br /&gt;
government and private meteorological services worldwide. More information about our installation can be found here [[WRF]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== X == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Xmgrace&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Grace is a WYSIWYG 2D plotting tool for the X Window System and M*tif. Xmgrace was developed at the Plasma Laboratory, Weizmann Institute of Science. More information about its capabilities can be found at the web page http://plasma-gate.weizmann.ac.il/Grace/&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Grace is installed on Karle. To use it from the command line, log in to Karle as usual and start Grace by typing &amp;quot;xmgrace&amp;quot; followed by return, or use the full path: &amp;quot;/share/apps/xmgrace/default/grace/bin/xmgrace&amp;quot;&lt;br /&gt;
In order to use Grace in GUI mode, log in to Karle with the -X option (see [http://wiki.csi.cuny.edu/cunyhpc/index.php/Main_Page#X11_Forwarding_or_Tunneling this article] &lt;br /&gt;
for details) and start Xmgrace as described above.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Applications&amp;diff=134</id>
		<title>Applications</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Applications&amp;diff=134"/>
		<updated>2022-10-27T19:48:48Z</updated>

		<summary type="html">&lt;p&gt;James: Text replacement - &amp;quot;[pP][bB][sS]&amp;quot; to &amp;quot;SLURM&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div class=&amp;quot;noautonum&amp;quot;&amp;gt;__NOTOC__&amp;lt;/div&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;This is an index of available applications sorted by their academic relevance, as well as alphabetically.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
For information about using modules to run your applications go to [[Using Modules To Run Your Applications]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class= &amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== Computational Physics and Computational Chemistry == &lt;br /&gt;
Applications in this section use classical mechanics, quantum mechanics, and thermodynamics in simulation studies of the fundamental properties of atoms, molecules, and chemical reactions.  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;AMBER (Assisted Model Building with Energy Refinement)&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
Amber is the collective name for a suite of programs for classical bio-molecular simulations. &lt;br /&gt;
The name &amp;quot;Amber&amp;quot; also denotes the family of potentials (force fields) used with the Amber &lt;br /&gt;
software. Here we discuss only the simulation packages, not the force fields or the free tools&lt;br /&gt;
available in the AmberTools package. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/amber&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;AUTODOCK&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
AutoDock is a suite of automated docking tools.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt; It is designed to predict how small molecules, such as substrates or drug candidates, bind to a receptor of known 3D structure.  AutoDock actually consists of two main programs: &#039;&#039;autodock&#039;&#039; itself performs the docking of the ligand to a set of grids describing the target protein; &#039;&#039;autogrid&#039;&#039; pre-calculates these grids. More information about the software may be found at the autodock web-page [http://autodock.scripps.edu/]. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/autodock&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;CP2K&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CP2K is a program to perform atomistic and molecular simulations of solid state, liquid, molecular, and biological&lt;br /&gt;
systems.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It provides a general framework for different methods such as density functional theory (DFT) using&lt;br /&gt;
a mixed Gaussian and plane waves approach (GPW) and classical pair and many-body potentials. CP2K provides&lt;br /&gt;
state-of-the-art methods for efficient and accurate atomistic simulations. More information about our installation &lt;br /&gt;
can be found here [[CP2K]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;DL_POLY&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
DL_POLY is a general purpose molecular dynamics simulation package developed at Daresbury Laboratory by W. Smith, T.R. Forester and I.T. Todorov. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Both serial and parallel versions are available. The original package was developed by the Molecular Simulation Group (now part of the Computational Chemistry Group, MSG) at Daresbury Laboratory under the auspices of the Engineering and Physical Sciences Research Council (EPSRC) for the EPSRC&#039;s Collaborative Computational Project for the Computer Simulation of Condensed Phases (CCP5). Later developments were also supported by the Natural Environment Research Council through the eMinerals project. The package is the property of the Central Laboratory of the Research Councils, UK. More information about our installation and use can be found here [[DL_POLY]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GAMESS-US&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GAMESS is a program for ab initio molecular quantum chemistry.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Briefly, GAMESS can compute SCF wavefunctions of the RHF, ROHF, UHF, GVB, and MCSCF types. Correlation corrections to these SCF wavefunctions include configuration interaction, second-order perturbation theory, and coupled-cluster approaches, as well as the density functional theory approximation. Excited states can be computed by CI, EOM, or TD-DFT procedures. Nuclear gradients are available for automatic geometry optimization, transition-state searches, and reaction-path following. Computation of the energy Hessian permits prediction of vibrational frequencies, with IR or Raman intensities. Solvent effects may be modeled by discrete Effective Fragment Potentials or by continuum models such as the Polarizable Continuum Model. Numerous relativistic computations are available, including infinite-order two-component scalar corrections, with various spin-orbit coupling options. The Fragment Molecular Orbital method permits many of these sophisticated treatments to be applied to very large systems by dividing the computation into small fragments. Nuclear wavefunctions can also be computed, in VSCF or with explicit treatment of nuclear orbitals by the NEO code. More information, including code, can be found here [[GAMESS-US]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Gaussian09&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Gaussian09 is third-party, commercially licensed software from Gaussian, Inc. It is a set of programs for calculating electronic structure.  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Gaussian09 is available for general use only on ANDY. The Gaussian User Guide can be found at [http://www.gaussian.com]. More information about our installation can be found here [[GAUSSIAN09]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GPAW&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It uses real-space uniform grids and multigrid methods, atom-centered basis functions, or&lt;br /&gt;
plane-waves. GPAW calculations are controlled through scripts written in the programming language &lt;br /&gt;
Python. GPAW relies on the Atomic Simulation Environment (ASE), which is a Python package&lt;br /&gt;
that helps to describe atoms. The ASE package also handles molecular dynamics, analysis, &lt;br /&gt;
visualization, geometry optimization and more. More information about our installation can &lt;br /&gt;
be found here [[GPAW]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GROMACS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GROMACS (Groningen Machine for Chemical Simulations)&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GROMACS is a full-featured suite of free software, licensed under the GNU&lt;br /&gt;
General Public License to perform molecular dynamics simulations -- in other words, to simulate the behavior of molecular&lt;br /&gt;
systems with hundreds to millions of particles using Newton&#039;s equations of motion.  It is primarily used for research on&lt;br /&gt;
proteins, lipids, and polymers, but can be applied to a wide variety of chemical and biological research questions.&lt;br /&gt;
&lt;br /&gt;
Details and submission scripts for production runs can be found at:&lt;br /&gt;
http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/gromacs&lt;br /&gt;
Please note that preparing a molecular system for simulation with the GROMACS tools cannot be done on the login node; users must instead use their own workstation or the interactive or development queues.&lt;br /&gt;
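The system-preparation step mentioned above can be sketched as follows. The gmx subcommands are standard GROMACS tools, but the partition name, resource requests, and file names are hypothetical placeholders; check the site's submission-script page for the actual queue names.

```shell
# Request an interactive session first (partition name is hypothetical)
srun --partition=interactive --ntasks=1 --time=00:30:00 --pty bash

# Then prepare the system with standard GROMACS tools
# (input/output file names are illustrative only)
gmx pdb2gmx -f protein.pdb -o processed.gro    # generate coordinates and topology
gmx solvate -cp processed.gro -o solvated.gro  # add solvent around the solute
gmx grompp -f em.mdp -c solvated.gro -o em.tpr # assemble the run input file
```

The resulting .tpr file is what a subsequent batch job would pass to `gmx mdrun`.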
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;HONDO PLUS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Hondo Plus is a versatile electronic structure code that combines work from&lt;br /&gt;
the original Hondo application developed by Harry King in the lab of Michel Dupuis&lt;br /&gt;
and John Rys, and that of numerous subsequent contributors. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It is currently distributed from the research lab of Dr. Donald Truhlar at the University &lt;br /&gt;
of Minnesota.  Part of the advantage of Hondo Plus is the availability of source&lt;br /&gt;
implementations of a wide variety of model chemistries developed over its life time&lt;br /&gt;
that researchers can adapt to their particular needs.  The license to use the code requires&lt;br /&gt;
a literature citation which is documented in the Hondo Plus 5.1 manual found&lt;br /&gt;
at:&lt;br /&gt;
&lt;br /&gt;
http://comp.chem.umn.edu/hondoplus/HONDOPLUS_Manual_v5.1.2007.2.17.pdf &lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[HONDO PLUS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;HOOMD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
HOOMD performs general-purpose particle dynamics simulations, taking advantage of NVIDIA GPUs to attain a level of performance&lt;br /&gt;
equivalent to many processor cores on a fast cluster.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Unlike some other applications in the particle and molecular dynamics space, HOOMD developers have worked to implement &lt;br /&gt;
all of the code&#039;s computationally intensive kernels on the GPU, although currently only single node, single-GPU or &lt;br /&gt;
OpenMP-GPU runs are possible. There is no MPI-GPU or distributed parallel GPU version available at this time.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;LAMMPS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
LAMMPS is a classical molecular dynamics code that models an ensemble of particles in a liquid, solid, or gaseous state.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It can model atomic, polymeric, biological, metallic, granular, and coarse-grained systems using a variety of force fields and boundary conditions.  &lt;br /&gt;
LAMMPS runs efficiently on single-processor desktop or laptop machines, but is also designed for parallel computers, including clusters with and without GPUs. &lt;br /&gt;
It will run on any parallel machine that compiles C++ and supports the MPI message-passing library. This includes distributed- or shared-memory parallel &lt;br /&gt;
machines and Beowulf-style clusters. LAMMPS can model systems with only a few particles up to millions or billions. LAMMPS is a freely-available open-source &lt;br /&gt;
code, distributed under the terms of the GNU Public License, which means you can use or modify the code however you wish.  LAMMPS is designed to be easy to &lt;br /&gt;
modify or extend with new capabilities, such as new force fields, atom types, boundary conditions, or diagnostics. A complete description of LAMMPS can be found &lt;br /&gt;
in its on-line manual here [http://lammps.sandia.gov/doc/Manual.html] or from the full PDF manual here [http://lammps.sandia.gov/doc/Manual.pdf]. Information&lt;br /&gt;
about our installation can be found here [[LAMMPS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;NAMD&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NAMD is a parallel molecular dynamics code designed for high-performance simulation&lt;br /&gt;
of large biomolecular systems. [http://www.ks.uiuc.edu/Research/namd].&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The main server for molecular dynamics calculations is PENZIAS, which supports both GPU and non-GPU versions of NAMD.&lt;br /&gt;
However, the MPI-only (no GPU support) parallel version of NAMD is also installed on SALK and ANDY. &lt;br /&gt;
More information about our installation can be found here [[NAMD]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;NWChem&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NWChem is an ab initio computational chemistry software package which also includes molecular dynamics (MM, MD) and coupled quantum-mechanical and molecular-dynamics functionality (QM-MD).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
NWChem has been developed by the Molecular Sciences Software group at the Department of Energy&#039;s EMSL. The software is available on PENZIAS and ANDY.&lt;br /&gt;
More information about our installation can be found here [[NWChem]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Octopus&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Octopus is a pseudopotential real-space package aimed at the simulation of the electron-ion dynamics of one-, two-, and three-dimensional finite systems subject to time-dependent electromagnetic fields.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The program is based on time-dependent density-functional theory (TDDFT) in the Kohn-Sham scheme. All quantities are expanded in a regular mesh in real space, and the simulations are performed in real time. The program has been successfully used to calculate linear and non-linear absorption spectra, harmonic spectra, laser induced fragmentation, etc. of a variety of systems.&lt;br /&gt;
More information about our installation can be found here [[OCTOPUS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;OpenMM&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenMM is both a library and a stand-alone application which provides tools for modern molecular modeling simulation. As a library it can be hooked into any code, allowing that code to do molecular modeling with minimal extra coding.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Moreover, OpenMM has a strong emphasis on hardware acceleration via GPU, thus providing not just a consistent API but much greater performance than what one could get from just about any other code available. OpenMM was developed as part of the Physics-Based Simulation project led by Prof. Pande.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ORCA&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ORCA is an electronic structure program capable of carrying out geometry optimizations and of predicting a large number of spectroscopic parameters at different levels of theory.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Besides Hartree-Fock theory, density functional theory (DFT), and semiempirical methods, high-level ab initio quantum chemical methods, based on the configuration interaction and coupled cluster methods, are included in ORCA to an increasing degree.&lt;br /&gt;
More information about our installation can be found here [[ORCA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;VMD&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
VMD is a molecular visualization program for displaying, animating, and analyzing large biomolecular systems using 3-D graphics and built-in scripting.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It was developed by The Theoretical and Computational Biophysics Group at the University of Illinois. It is documented on the [http://www.ks.uiuc.edu/Research/vmd/ TCB&#039;s homepage].&lt;br /&gt;
&lt;br /&gt;
VMD is installed on Karle. To use it from the command-line interface, log in to Karle as usual and start VMD by typing &amp;quot;vmd&amp;quot; followed by return, or alternatively use the full path: &lt;br /&gt;
&amp;quot;/share/apps/vmd/default/bin/vmd&amp;quot;&lt;br /&gt;
&lt;br /&gt;
In order to use VMD in GUI mode, log in to Karle with the -X option (see [http://wiki.csi.cuny.edu/cunyhpc/index.php/Main_Page#X11_Forwarding_or_Tunneling this article] &lt;br /&gt;
for details) and start VMD as described above.&lt;br /&gt;
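The two launch paths above can be sketched as a terminal session. This is an illustrative sketch only: the fully qualified host name and the account name are placeholders, since the text above names the server only as Karle.

```shell
# Log in to Karle; add -X for X11 forwarding if you want GUI mode
# (host and user names below are placeholders, not verified values)
ssh -X <user>@karle

# Start VMD from the default PATH ...
vmd

# ... or, equivalently, via the full installation path quoted above
/share/apps/vmd/default/bin/vmd
```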
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Computational Biology == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ANVIO&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
Anvio is an analysis and visualization platform for ‘omics data. It allows various types of workflows to be &lt;br /&gt;
established. [[ANVIO]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BAMOVA&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
Bamova is a package used to perform genetic analysis of a wide range of organisms on the basis of &lt;br /&gt;
next-generation sequence data. The software implements Bayesian Analysis of Molecular Variance and &lt;br /&gt;
different likelihood models for three different types of molecular data &lt;br /&gt;
(including two models for high throughput sequence data). For more detail on BAMOVA please visit the BAMOVA web site [http://www.uwyo.edu/buerkle/software/bamova] and manual &lt;br /&gt;
here [http://www.uwyo.edu/buerkle/software/bamova/bamova_manual_1.0.pdf]. Further information can also be found here [[BAMOVA]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BAYESCAN&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BAYESCAN is a population genomics software package. It identifies outlier loci and is applicable &lt;br /&gt;
to both dominant and codominant data. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;BayeScan aims at identifying candidate loci under natural selection from &lt;br /&gt;
genetic data, using differences in allele frequencies between populations.  BayeScan is &lt;br /&gt;
based on the multinomial-Dirichlet model.  One of the scenarios covered consists of an&lt;br /&gt;
island model in which subpopulation allele frequencies are correlated through a common &lt;br /&gt;
migrant gene pool from which they differ in varying degrees.  The difference in allele frequency &lt;br /&gt;
between this common gene pool and each subpopulation is measured by a subpopulation-&lt;br /&gt;
specific FST coefficient. Therefore, this formulation can consider realistic ecological scenarios &lt;br /&gt;
where the effective size and the immigration rate may differ among subpopulations.&lt;br /&gt;
&lt;br /&gt;
More detailed information on Bayescan can be found at the web site here [http://cmpg.unibe.ch/software/bayescan/index.html]&lt;br /&gt;
and in the manual here [http://cmpg.unibe.ch/software/bayescan/files/BayeScan2.1_manual.pdf]. More information about our &lt;br /&gt;
installation can be found here [[BAYESCAN]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BEST&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BEST is an application aimed at estimating gene trees and the species tree from multilocus sequences.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The program uses information from multiple gene trees and performs a Bayesian analysis to estimate the &lt;br /&gt;
topology of the species tree, divergence times and population sizes.  &lt;br /&gt;
&lt;br /&gt;
It provides a new approach for estimating the mutation-rate-&lt;br /&gt;
based phylogenetic relationships among species. Its method accounts for deep coalescence,&lt;br /&gt;
but not for other complicating issues such as horizontal transfer or gene duplication. The&lt;br /&gt;
program works in conjunction with the popular Bayesian phylogenetics package, MrBayes&lt;br /&gt;
(Ronquist and Huelsenbeck, Bioinformatics, 2003). BEST&#039;s parameters are defined using&lt;br /&gt;
the &#039;prset&#039; command from MrBayes. Details on BEST&#039;s capabilities and options are available&lt;br /&gt;
at the BEST web site here [http://www.stat.osu.edu/~dkp/BEST/introduction]. More information&lt;br /&gt;
about our installation is available here [[BEST]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BEAST&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BEAST is a powerful and flexible evolutionary analysis package for molecular sequence variation. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The package implements a family of Markov chain Monte Carlo (MCMC) algorithms for Bayesian phylogenetic inference, divergence time dating, coalescent analysis, phylogeography and related molecular evolutionary analyses. It is a cross-platform Java program for Bayesian MCMC analysis of molecular sequences. It is entirely orientated towards rooted, time-measured phylogenies inferred using strict or relaxed molecular clock models. It can be used as a method of reconstructing phylogenies, but is also a framework for testing evolutionary hypotheses without conditioning on a single tree topology.  BEAST uses MCMC to average over tree space, so that each tree is weighted proportional to its posterior probability. The distribution includes a simple to use user-interface program called &#039;BEAUti&#039; for setting up standard analyses and a suite of programs for analysing the results. For more detail on BEAST (and BEAUTi) please visit the BEAST web site [http://beast.bio.ed.ac.uk/Main_Page]. More information about our installation can be found here [http://wiki.csi.cuny.edu/cunyhpc/index.php/Template:BEAST BEAST].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BOWTIE2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BOWTIE2 is an ultrafast and memory-efficient tool for aligning sequencing reads to long reference sequences. It is particularly good at aligning reads of about 50 up to 100s or 1,000s of characters, and particularly good at aligning to relatively long (e.g. mammalian) genomes.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
BOWTIE2 indexes the genome with an FM Index to keep its memory&lt;br /&gt;
footprint small: for the human genome, its memory footprint is typically around 3.2 GB. BOWTIE2 supports gapped,&lt;br /&gt;
local, and paired-end alignment modes. BOWTIE2 is part of a sequence alignment and analysis tool chain developed&lt;br /&gt;
at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics&lt;br /&gt;
and Computational Biology.  The other tools in this collection, CUFFLINKS, SAMTOOLS, and TOPHAT are also installed at&lt;br /&gt;
the CUNY HPC Center. Additional information can be found at the BOWTIE2 home page here [http://bowtie-bio.sourceforge.net/bowtie2/index.shtml].&lt;br /&gt;
Information about our installation can be found here [[BOWTIE2]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BPP2&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BPP2 uses a Bayesian modeling approach to generate the posterior probabilities of species assignments taking into account uncertainties due to unknown gene trees and the ancestral coalescent process. For tractability, it relies on a user-specified guide tree to avoid integrating over all possible species delimitations.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Additional information can be found at the download site here [http://abacus.gene.ucl.ac.uk/software.html]. More information about our installation can be found here [[BPP2]].&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BROWNIE&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
BROWNIE is a program for analyzing rates of continuous character evolution and looking for substantial rate differences in different parts of a tree using likelihood&lt;br /&gt;
ratio tests and Akaike Information Criterion (AIC) statistics. It now also implements many other methods for examining trait evolution and methods for doing species&lt;br /&gt;
delimitation. More information about our installation can be found here [[BROWNIE]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;CUFFLINKS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CUFFLINKS assembles transcripts, estimates their abundances, and tests for differential expression and regulation in&lt;br /&gt;
RNA-Seq samples.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It accepts aligned RNA-Seq reads and assembles the alignments into a parsimonious set of transcripts.&lt;br /&gt;
CUFFLINKS then estimates the relative abundances of these transcripts based on how many reads support each one, taking&lt;br /&gt;
into account biases in library preparation protocols.  CUFFLINKS is part of a sequence alignment and analysis tool chain developed&lt;br /&gt;
at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics&lt;br /&gt;
and Computational Biology.  The other tools in this collection, BOWTIE, SAMTOOLS, and TOPHAT are also installed at&lt;br /&gt;
the CUNY HPC Center. Additional information can be found at the CUFFLINKS home page here [http://abacus.gene.ucl.ac.uk/software.html].&lt;br /&gt;
More information about our installation can be found here [[CUFFLINKS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GARLI&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GARLI is a program that performs phylogenetic inference using the maximum-likelihood criterion.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Several sequence types are supported, including nucleotide, amino acid and codon. Version 2.0 adds support for&lt;br /&gt;
partitioned models and morphology-like data types. It is usable on all operating systems, and is written and&lt;br /&gt;
maintained by Derrick Zwickl at the University of Texas at Austin.  Additional information can be found&lt;br /&gt;
on the GARLI Wiki here [https://www.nescent.org/wg_garli/Main_Page]. More information about our installation &lt;br /&gt;
can be found here [[GARLI]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MPFR&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The MPFR library is a C library for multiple-precision floating-point computations with correct rounding. MPFR has been continuously supported by &lt;br /&gt;
INRIA, and the current main authors come from the Caramel and AriC project-teams at Loria (Nancy, France) and LIP (Lyon, France), respectively; see &lt;br /&gt;
more on the credits page.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
MPFR is based on the GMP multiple-precision library. The main goal of MPFR is to provide a library for multiple-precision &lt;br /&gt;
floating-point computation which is both efficient and has well-defined semantics. It copies the good ideas from the ANSI/IEEE 754 standard for &lt;br /&gt;
double-precision floating-point arithmetic (53-bit significand). The library is installed on PENZIAS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MRBAYES&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MrBayes is a program for the Bayesian estimation of phylogeny.  Bayesian inference of&lt;br /&gt;
phylogeny is based upon a quantity called the posterior probability distribution of trees,&lt;br /&gt;
which is the probability of a tree conditioned on certain observations.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The conditioning is&lt;br /&gt;
accomplished using Bayes&#039;s theorem. The posterior probability distribution of trees is&lt;br /&gt;
impossible to calculate analytically; instead, MrBayes uses a simulation technique called&lt;br /&gt;
Markov chain Monte Carlo (or MCMC) to approximate the posterior probabilities of trees.&lt;br /&gt;
More information about our installation can be found here [[MRBAYES]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;msABC&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
msABC is a program for simulating various neutral evolutionary demographic scenarios&lt;br /&gt;
based on the software ms (Hudson 2002). msABC extends ms, calculating a multitude of&lt;br /&gt;
summary statistics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Therefore, msABC is suitable for performing the sampling step of an&lt;br /&gt;
Approximate Bayesian Computation analysis (ABC), under various neutral demographic&lt;br /&gt;
models. The main advantages of msABC are (i) use of various prior distributions, such as&lt;br /&gt;
uniform, Gaussian, log-normal, and gamma, (ii) implementation of a multitude of summary statistics&lt;br /&gt;
for one or more populations, (iii) efficient implementation, which allows the analysis of&lt;br /&gt;
hundreds of loci and chromosomes even on a single computer, and (iv) extended flexibility, such&lt;br /&gt;
as simulation of loci of variable size and simulation of missing data.&lt;br /&gt;
More information about our installation can be found here [[msABC]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MSMS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MSMS is a tool to generate sequence samples under both neutral models and single locus selection models.&lt;br /&gt;
MSMS permits  the full range of demographic models provided by its relative MS (Hudson, 2002).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
In particular, it allows for multiple demes with arbitrary migration patterns, population growth and decay in each deme, and&lt;br /&gt;
for population splits and mergers. Selection (including dominance) can depend on the deme and also change&lt;br /&gt;
with time.&lt;br /&gt;
More information about our installation can be found here [[MSMS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;POPABC&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PopABC is a computer package to estimate historical demographic parameters of closely related species/populations (e.g. population size, migration rate, mutation rate, recombination rate, splitting events) within an isolation-with-migration model.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The software performs coalescent simulations in the framework of approximate Bayesian computation (ABC; Beaumont et al., 2002). PopABC can also be used to perform Bayesian model choice to discriminate between different demographic scenarios. The program can be used either for research or for education and teaching purposes. Further details and a manual can be found at the POPABC website here [http://code.google.com/p/popabc].&lt;br /&gt;
More information about our installation can be found here [[POPABC]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;PHOENICS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PHOENICS is an integrated Computational Fluid Dynamics (CFD) package for the preparation, simulation, and visualization of&lt;br /&gt;
processes involving fluid flow, heat or mass transfer, chemical reaction, and/or combustion in engineering equipment, building&lt;br /&gt;
design, and the environment.  More detail is available at the CHAM website, here http://www.cham.co.uk. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Although we expect most users to pre- and post-process their jobs on office-local clients, the CUNY HPC Center has installed&lt;br /&gt;
the Unix version of the &#039;&#039;entire&#039;&#039; PHOENICS package on ANDY.   PHOENICS is installed in /share/apps/phoenics/default where all&lt;br /&gt;
the standard PHOENICS directories are located (d_allpro, d_earth, d_enviro, d_photo, d_priv1, d_satell, etc.).  Of particular interest&lt;br /&gt;
on ANDY is the MPI parallel version of the &#039;earth&#039; executable &#039;parexe&#039; which makes full use of the parallel processing power of the &lt;br /&gt;
ANDY cluster for larger individual jobs.  While the parallel scaling properties of PHOENICS jobs will vary depending on the job size,&lt;br /&gt;
processor type, and the cluster interconnect, larger workloads will generally scale and run efficiently on 8 to 32 processors,&lt;br /&gt;
while smaller problems will scale efficiently only up to about 4 processors.  More detail on parallel PHOENICS is available at&lt;br /&gt;
http://www.cham.co.uk/products/parallel.php.   Aside from the tightly coupled MPI parallelism of &#039;parexe&#039;, users can run multiple&lt;br /&gt;
instances of the non-parallel modules on ANDY (including the serial &#039;earexe&#039; module) when a parametric approach can be used&lt;br /&gt;
to solve their problems.&lt;br /&gt;
More information about our installation can be found here [[PHOENICS]]&lt;br /&gt;
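Given the layout above, an MPI run of the parallel &#039;earth&#039; solver can be sketched as follows. This is a hedged sketch only: the working directory, the MPI launcher syntax, and the core count are illustrative assumptions, not verified site commands.

```shell
# Illustrative sketch of an MPI-parallel PHOENICS run of 'parexe'.
# The solver location under the install tree and the mpirun invocation
# are assumptions; the PHOENICS page has the exact site recipe.
cd /share/apps/phoenics/default/d_earth   # assumed solver directory
mpirun -np 16 ./parexe                    # within the 8-32 core range noted above
```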
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;PHRAP-PHRED&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PHRAP and PHRED are part of the DNA sequence analysis tool set that also includes the programs&lt;br /&gt;
CROSSMATCH and SWAT. These tools are described in detail here [http://www.phrap.org/phredphrapconsed.html],&lt;br /&gt;
but a brief description of both, extracted from their manuals, follows.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
PHRED and PHRAP (along with CONSED) can be used for both small sequence assemblies and larger shotgun analyses. This makes the&lt;br /&gt;
suite a perhaps under-utilized option for smaller non-genomics groups. Some variables may need to be adjusted,&lt;br /&gt;
particularly in CONSED, but researchers that have multiple sequences from a small locus can use the &lt;br /&gt;
suite, starting from their chromatogram files.  &lt;br /&gt;
More information about our installation can be found here [[PHRAP-PHRED]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;PyRAD&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Reduced-representation genomic sequence data (e.g., RADseq, GBS, ddRAD) are commonly used to study population-level research questions and consequently most software packages for assembling or analyzing such data are designed for sequences with little variation across samples.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Phylogenetic analyses typically include species with deeper divergence times (more variable loci across samples) and thus a different approach to clustering and identifying orthologs will perform better. pyRAD is intended for use with any type of restriction-site associated DNA. It currently supports RAD, ddRAD, PE-ddRAD, GBS, PE-GBS, EzRAD, PE-EzRAD, 2B-RAD, nextRAD, and can be extended to other types.&lt;br /&gt;
More information about our installation can be found here [[PyRAD]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;RAXML&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Randomized Axelerated Maximum Likelihood (RAxML) is a program for sequential and parallel&lt;br /&gt;
maximum likelihood based inference of large phylogenetic trees. It is a descendant of fastDNAml,&lt;br /&gt;
which in turn was derived from Joe Felsenstein’s DNAml, which is part of the PHYLIP package.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
RAxML is installed at the CUNY HPC Center on ANDY, where multiple versions are available in both serial and MPI-parallel builds. The MPI-parallel version, which is also installed on PENZIAS, should be run on four or more cores. &lt;br /&gt;
More information about our installation can be found here [[RAXML]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Structurama&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Structurama is a program for inferring population structure from genetic data. The program assumes that the sampled loci&lt;br /&gt;
are in linkage equilibrium and that the allele frequencies for each population are drawn from a Dirichlet probability distribution. Two different models for population structure are implemented.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
First, Structurama offers the method of Pritchard et al. (2000), in which the number of populations is considered fixed. The program also allows the number of populations to be a random variable following a Dirichlet process prior (Pella and Masuda, 2006; Huelsenbeck and Andolfatto, 2007).&lt;br /&gt;
More information about our installation can be found here [[STRUCTURAMA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Structure&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The program Structure is a free software package for using multi-locus genotype data to investigate&lt;br /&gt;
population structure.  Its uses include inferring the presence of distinct populations, assigning individuals&lt;br /&gt;
to populations, studying hybrid zones, identifying migrants and admixed individuals, and estimating&lt;br /&gt;
population allele frequencies in situations where many individuals are migrants or admixed.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;It can be applied to most commonly used genetic markers, including SNPs, microsatellites, RFLPs, and AFLPs. More detailed information about Structure can be found at the web site here [http://pritch.bsd.uchicago.edu/structure.html]. Structure is installed on ANDY at the CUNY HPC Center. Structure is a serial program. &lt;br /&gt;
More information about our installation can be found here [[STRUCTURE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;TOPHAT&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
TOPHAT is a fast splice junction mapper for RNA-Seq reads. It aligns RNA-Seq reads to mammalian-sized&lt;br /&gt;
genomes using the ultra high-throughput short read aligner Bowtie, and then analyzes the mapping results&lt;br /&gt;
to identify splice junctions between exons.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
TOPHAT is part of a sequence alignment and analysis tool chain developed at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics and Computational Biology.&lt;br /&gt;
More information about our installation can be found here [[TOPHAT]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Trinity&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Trinity, developed at the Broad Institute and the Hebrew University of Jerusalem, represents a novel method for the efficient and robust de novo reconstruction of transcriptomes from RNA-seq data.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Trinity combines three independent software modules: Inchworm, Chrysalis, and Butterfly, applied sequentially to process large volumes of RNA-seq reads. Trinity partitions the sequence data into many individual de Bruijn graphs, each representing the transcriptional complexity at a given gene or locus, and then processes each graph independently to extract full-length splicing isoforms and to tease apart transcripts derived from paralogous genes.&lt;br /&gt;
More information about our installation can be found here [[TRINITY]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
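The de Bruijn graph construction that Trinity (and Velvet, below) builds on can be illustrated with a minimal sketch. This is not Trinity&#039;s implementation, only the underlying idea: reads are decomposed into k-mers, and (k-1)-mer nodes are connected whenever an observed k-mer links them.&lt;br /&gt;

```python
from collections import defaultdict

def de_bruijn_graph(reads, k):
    """Toy de Bruijn graph: nodes are (k-1)-mers, edges are observed k-mers."""
    graph = defaultdict(set)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            # Connect the k-mer's prefix node to its suffix node.
            graph[kmer[:-1]].add(kmer[1:])
    return graph

# Two overlapping reads; walking the resulting graph recovers
# candidate contig/transcript paths.
g = de_bruijn_graph(["ACGTAC", "CGTACG"], k=4)
```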
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;VELVET&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Velvet is a set of algorithms for &#039;&#039;de novo&#039;&#039; short read assembly using de Bruijn graphs. It was developed at the &lt;br /&gt;
European Bioinformatics Institute, Cambridge, UK.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
More information about our installation can be found here [[VELVET]]&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Computational Genomics, Proteomics, Microbiomics, Genetics ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;AUGUSTUS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
AUGUSTUS is a program that predicts genes in eukaryotic genomic sequences. Augustus is gene-finding software based on Hidden Markov Models (HMMs), &lt;br /&gt;
described in papers by Stanke and Waack (2003), Stanke et al. (2006, 2006b), and Stanke et al. (2008). The local version of the program is installed on &lt;br /&gt;
Penzias. More information can be found here: [[AUGUSTUS]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;CONSED&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CONSED is a DNA sequence analysis finishing tool that provides sequence viewing, editing, alignment, and&lt;br /&gt;
assembly capabilities from an X Windows graphical user interface (GUI).  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It makes extensive use of other non-graphical&lt;br /&gt;
and underlying sequence analysis tools including PHRED, PHRAP, and CROSSMATCH that may also be used separately&lt;br /&gt;
and are described elsewhere in this document.  It also includes a viewer called BAMVIEW.  The CONSED tool chain is&lt;br /&gt;
developed and maintained at the University of Washington and is described&lt;br /&gt;
more completely here [http://bozeman.mbt.washington.edu/consed/consed.html]&lt;br /&gt;
CONSED is provided at the CUNY HPC Center under an academic license that allows use, but not the copying or&lt;br /&gt;
outbound transfer of any of the executables or files distributed under this academic license.  The license is not &lt;br /&gt;
transferable in any way and users wishing to run the application at their own site must acquire a license directly&lt;br /&gt;
from the authors.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center supports CONSED version 23.0 for interactive use on KARLE.  CONSED 23.0 and the tool&lt;br /&gt;
chain described above are also installed on ANDY to allow for the batch use of the underlying support tools mentioned above&lt;br /&gt;
and described in detail below.  In general, running GUI-based applications on ANDY&#039;s login node is discouraged.  There&lt;br /&gt;
should be little need to do so: KARLE is on the periphery of the CUNY HPC network, making login there direct, and&lt;br /&gt;
KARLE shares its HOME directory file system with ANDY, so files created on either system are immediately available on&lt;br /&gt;
the other.&lt;br /&gt;
&lt;br /&gt;
Rather than rewrite portions of the CONSED manual here, users are directed to the manual&#039;s &amp;quot;Quick Tour&amp;quot; section&lt;br /&gt;
here [http://bozeman.mbt.washington.edu/consed/distributions/README.23.0.txt] and asked to walk through some&lt;br /&gt;
of the exercises after logging into KARLE.  If problems or questions come up, please post them to &amp;quot;hpchelp@csi.cuny.edu&amp;quot;.&lt;br /&gt;
The CONSED 23.0 distribution is installed on KARLE in the following directory:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/share/apps/consed/default&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
All the files in the distribution can be found there.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ExaML&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ExaML (Exascale Maximum Likelihood) is a code for phylogenetic inference using MPI. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The code is installed only on Penzias and implements the popular RAxML search algorithm for maximum likelihood based inference of phylogenetic trees. &lt;br /&gt;
&lt;br /&gt;
It uses a radically new MPI parallelization approach that yields improved parallel efficiency, in particular on partitioned multi-gene or whole-genome datasets.&lt;br /&gt;
&lt;br /&gt;
When using ExaML please cite the following paper:&lt;br /&gt;
&lt;br /&gt;
Alexey M. Kozlov, Andre J. Aberer, Alexandros Stamatakis: &amp;quot;ExaML Version 3: A Tool for Phylogenomic Analyses on Supercomputers.&amp;quot; Bioinformatics (2015) 31 (15): 2577-2579.&lt;br /&gt;
&lt;br /&gt;
It is up to 4 times faster than RAxML-Light [1].&lt;br /&gt;
&lt;br /&gt;
Like RAxML-Light, ExaML implements checkpointing, SSE3 and AVX vectorization, and memory-saving techniques.&lt;br /&gt;
&lt;br /&gt;
[1] A. Stamatakis, A.J. Aberer, C. Goll, S.A. Smith, S.A. Berger, F. Izquierdo-Carrasco: &amp;quot;RAxML-Light: A Tool for computing TeraByte Phylogenies&amp;quot;, Bioinformatics 2012; doi: 10.1093/bioinformatics/bts309.&lt;br /&gt;
&lt;br /&gt;
The run script for a parallel job is analogous to the one for running RAxML on Penzias and Andy.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ExaBayes&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ExaBayes is a software package for Bayesian tree inference. It is particularly suitable for large-scale analyses on computer clusters. It is installed on the Penzias server at the CUNY HPC Center. &lt;br /&gt;
The installed package is the MPI-parallel version. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Availability:&#039;&#039;&#039; PENZIAS&lt;br /&gt;
&#039;&#039;&#039;Module file:&#039;&#039;&#039; exabayes&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Citation&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
Fredrik Ronquist, Maxim Teslenko, Paul van der Mark, Daniel L Ayres, Aaron Darling, Sebastian Höhna, Bret Larget, Liang Liu, Marc a Suchard, and John P Huelsenbeck. MrBayes 3.2: efficient Bayesian phylogenetic inference and model choice across a large model space. Systematic biology, 61(3):539--42, May 2012.&lt;br /&gt;
&lt;br /&gt;
Alexei J Drummond, Marc a Suchard, Dong Xie, and Andrew Rambaut. Bayesian phylogenetics with BEAUti and the BEAST 1.7. Molecular biology and evolution, 29(8):1969--73, August 2012. &lt;br /&gt;
&lt;br /&gt;
Clemens Lakner, Paul van der Mark, John P Huelsenbeck, Bret Larget, and Fredrik Ronquist. Efficiency of Markov chain Monte Carlo tree proposals in Bayesian phylogenetics. Systematic biology, 57(1):86--103, February 2008. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Use:&#039;&#039;&#039; An example SLURM script to run ExaBayes on PENZIAS is given below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name=&amp;lt;name_of_job&amp;gt;&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=2&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to the working directory in SLURM&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
mpirun -np 2 exabayes &amp;lt;input_file&amp;gt; &amp;gt; output_file&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
More information about the application, along with sample workflows, is available on the ExaBayes web site:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://sco.h-its.org/exelixis/web/software/exabayes/manual/index.html#sec-11&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GENOMEPOP2&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GenomePop2 is a newer, specialized version of the older program GenomePop. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GenomePop2 is designed to manage SNPs under more flexible and useful settings that are controlled by the user.  &lt;br /&gt;
If you need models with more than 2 alleles you should use the older GenomePop version of the program.  &lt;br /&gt;
&lt;br /&gt;
GenomePop2 allows the forward simulation of sequences of biallelic positions. As in the previous version, a number of evolutionary&lt;br /&gt;
and demographic settings are allowed. Several populations under any migration model can be implemented. Each population consists&lt;br /&gt;
of a number N of individuals.  Each individual is represented by one (haploids) or two (diploids) chromosomes with constant or variable&lt;br /&gt;
(hotspots) recombination between binary sites. The fitness model is multiplicative, with each derived allele having a multiplicative effect&lt;br /&gt;
of (1-s * h-E) on the global fitness value. By default E=0 and h=0.5 in diploids, but 1 in homozygotes or in haploids. Selective nucleotide&lt;br /&gt;
sites undergoing directional selection (positive or negative) in different populations can be defined. In addition, bottlenecks and/or&lt;br /&gt;
population expansion scenarios can be set up by the user during a desired number of generations. Several runs can be executed and&lt;br /&gt;
a sample of user-defined size is obtained for each run and population.  For more detail on how to use GenomePop2, please visit the&lt;br /&gt;
web site here [http://webs.uvigo.es/acraaj/GenomePop2.htm]. More information about our installation can be found here [[GENOMEPOP2]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
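The multiplicative fitness model described above can be sketched in a few lines. This is an illustrative toy, not GenomePop2&#039;s exact formula: it assumes each site carrying derived alleles contributes a factor (1 - s*h), with h = 0.5 for diploid heterozygotes and h = 1 for homozygotes or haploids, and the per-site factors multiply into the global fitness.&lt;br /&gt;

```python
def multiplicative_fitness(genotypes, s, h_het=0.5):
    """Toy multiplicative fitness (an assumption, not GenomePop2's code):
    each site with derived alleles contributes a factor (1 - s*h), where
    h = h_het for heterozygotes and 1.0 for homozygous-derived (or haploid)
    sites. `genotypes` holds the derived-allele count per site (0, 1, or 2)."""
    w = 1.0
    for count in genotypes:
        if count == 1:
            w *= 1.0 - s * h_het   # heterozygous site
        elif count == 2:
            w *= 1.0 - s           # homozygous-derived site
    return w

# Two heterozygous sites and one homozygous site with s = 0.02:
w = multiplicative_fitness([1, 0, 1, 2], s=0.02)
```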
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;HUMAnN2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
HUMAnN is a pipeline for efficiently and accurately profiling the presence/absence and abundance of microbial pathways in a community from metagenomic or metatranscriptomic sequencing data (typically millions of short DNA/RNA reads). HUMAnN2 is the next generation of HUMAnN (HMP Unified Metabolic Analysis Network). Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/humann2&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;IMa2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The IMa2 application performs basic ‘Isolation with Migration’ calculations using Bayesian inference and Markov &lt;br /&gt;
chain Monte Carlo methods. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The only major conceptual addition that makes IMa2 different from the&lt;br /&gt;
original IMa program is that it can handle data from multiple populations. This requires that the user &lt;br /&gt;
specify a phylogenetic tree. Importantly, the tree must be rooted, and the sequence in time of internal&lt;br /&gt;
nodes must be known and specified. More information on the IMa2 and IMa can be found in the user&lt;br /&gt;
manual here [http://lifesci.rutgers.edu/%7Eheylab/ProgramsandData/Programs/IMa2/Using_IMa2_8_24_2011.pdf].&lt;br /&gt;
Information about our installation can be found here [[IMA2]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;I-TASSER&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
I-TASSER is a platform for protein structure and function predictions. 3D models are built based on multiple-threading alignments by LOMETS and iterative template fragment assembly simulations; function insights are derived by matching the 3D models with the BioLiP protein function database. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/itasser&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;LAMARC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
LAMARC is a program which estimates population-genetic parameters such as population size, population growth rate,&lt;br /&gt;
recombination rate, and migration rates.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It approximates a summation over all possible genealogies that could explain&lt;br /&gt;
the observed sample, which may be sequence, SNP, microsatellite, or electrophoretic data.  LAMARC and its sister program&lt;br /&gt;
MIGRATE are successor programs to the older programs Coalesce, Fluctuate, and Recombine, which are no longer being&lt;br /&gt;
supported.  These programs are memory-intensive, but can run effectively on workstations. They are supported on a variety&lt;br /&gt;
of operating systems.  For more detail on LAMARC please visit the website here [http://evolution.genetics.washington.edu/lamarc/index.html],&lt;br /&gt;
read this paper [http://evolution.genetics.washington.edu/lamarc/download/bioinformatics2006-lamarc2.0.pdf], and look&lt;br /&gt;
at the documentation here [http://evolution.genetics.washington.edu/lamarc/documentation/index.html]. More information&lt;br /&gt;
about our installation can be found here [[LAMARC]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;QIIME&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
QIIME (pronounced &amp;quot;chime&amp;quot;) stands for Quantitative Insights Into Microbial Ecology. QIIME is a pipeline application that uses numerous third-party applications.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
QIIME takes users from their raw sequencing output through initial analyses such as OTU picking, taxonomic assignment, and construction of phylogenetic trees from representative sequences of OTUs, and through downstream statistical analysis, visualization, and production of publication-quality graphics.&lt;br /&gt;
More information about our installation can be found here [[QIIME]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;USEARCH&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
USEARCH is a unique sequence analysis tool with thousands of users world-wide.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
USEARCH offers search and clustering algorithms that are often orders of magnitude faster than BLAST. &lt;br /&gt;
More information about our installation can be found here [[USEARCH]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;VELVET&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
Velvet is a set of algorithms for &#039;&#039;de novo&#039;&#039; short read assembly using de Bruijn graphs. It was developed at the European Bioinformatics Institute, Cambridge, UK. More information about our installation can be found here [[VELVET]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;VSEARCH&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
VSEARCH is an open-source alternative to USEARCH.  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
VSEARCH stands for vectorized search, as the tool takes advantage of parallelism in the form of SIMD vectorization as well as multiple threads to perform accurate alignments at high speed. VSEARCH uses an optimal global aligner (full dynamic programming Needleman-Wunsch), in contrast to USEARCH which by default uses a heuristic seed and extend aligner. This usually results in more accurate alignments and overall improved sensitivity (recall) with VSEARCH, especially for alignments with gaps. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Additional details on VSEARCH can be found at: [https://github.com/torognes/vsearch this link]&lt;br /&gt;
&lt;br /&gt;
VSEARCH is installed on the Penzias HPC cluster. To start using VSEARCH, load the corresponding module first:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load vsearch  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
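A SLURM job script for a typical VSEARCH run follows the same pattern as the other batch examples on this page. The file names and the 97% identity threshold below are placeholders; see the VSEARCH documentation for the full option list.&lt;br /&gt;

```shell
#!/bin/bash
#SBATCH --partition production
#SBATCH --job-name vsearch_example
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4

# You must explicitly change to the working directory in SLURM
cd $SLURM_SUBMIT_DIR

module load vsearch

# Global-alignment search of query sequences against a reference
# database at 97% identity (file names are placeholders).
vsearch --usearch_global query.fasta --db reference.fasta \
        --id 0.97 --blast6out hits.b6 --threads $SLURM_CPUS_PER_TASK
```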
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Math, Engineering, Computer Science == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ADCIRC&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ADCIRC is a system of programs for solving time-dependent, free-surface, circulation and transport problems in two and three dimensions.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;  These programs utilize the finite element method in space allowing the use of highly flexible, unstructured grids. The ADCIRC distribution includes and integrates the METIS tool for unstructured grid generation. In addition, ADCIRC includes a distribution of SWAN to which it can be coupled to add a shore wave simulation model. Typical ADCIRC applications have included: (i) modeling tides and wind driven circulation, (ii) analysis of hurricane storm surge and flooding, (iii) dredging feasibility and material disposal studies, (iv) larval transport studies, (v) near shore marine operations. For more detail on using ADCIRC, please visit the ADCIRC website here [http://adcirc.org/index.html] and read the ADCIRC manual [http://adcirc.org/documentv49/ADCIRC_title_page.html]. Details on using SWAN with ADCIRC can be found here [http://www.caseydietrich.com/swanadcirc] and at the SWAN web site [http://swanmodel.sourceforge.net]. More information about use and set-up can be found here [[ADCIRC]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;FDPPDIV&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
FDPPDiv is a program for estimating divergence times on a fixed, rooted tree topology. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
FDPPDiv offers two alternative approaches to divergence time estimation. &lt;br /&gt;
The DPPDiv part refers to the Dirichlet Process Prior (DPP) model for divergence &lt;br /&gt;
time estimation, and the F prefix (for Fossil) refers to the new Fossil Birth-Death approach. &lt;br /&gt;
More information about our installation can be found here [[FDPPDIV]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GAUSS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GAUSS is an easy-to-use data analysis, mathematical, and statistical environment based on the powerful, fast, and efficient GAUSS Matrix Programming Language.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GAUSS is used to solve real-world data analysis problems of exceptionally large &lt;br /&gt;
scale. GAUSS is currently available on ANDY. At the CUNY HPC Center,&lt;br /&gt;
GAUSS is typically run in serial mode. (Note: GAUSS should not be confused with the&lt;br /&gt;
computational chemistry application Gaussian.) More information about our installation can &lt;br /&gt;
be found here [[GAUSS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Hapsembler&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Hapsembler is a haplotype-specific genome assembly toolkit that is designed for genomes that are rich in SNPs and other types of polymorphism. Hapsembler can be used to assemble reads from a variety of platforms including Illumina and Roche/454.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;  Hapsembler is currently installed on the Appel system. To access the Hapsembler binaries, load the hapsembler module with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load hapsembler&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;HOPSPACK&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
HOPSPACK stands for Hybrid Optimization Parallel Search Package, designed to help users solve a wide range of derivative-free optimization problems.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;These problems can be noisy, non-convex, or non-smooth. The basic optimization problem addressed is to minimize an objective function f(x) of n unknowns subject to the constraints AI x ≥ bI, AE x = bE, cI(x) ≥ 0, cE(x) = 0, and l ≤ x ≤ u.&lt;br /&gt;
The first two constraints specify linear inequalities and equalities with coefficient matrices AI and AE. The next two constraints describe nonlinear inequalities and equalities captured in the functions cI(x) and cE(x). The final constraints denote lower and upper bounds on the variables. HOPSPACK allows variables to be continuous or integer-valued and has provisions for multi-objective optimization problems. In general, the functions f(x), cI(x), and cE(x) can be noisy and nonsmooth, although most algorithms perform best on deterministic functions with continuous derivatives.&lt;br /&gt;
&lt;br /&gt;
Users can design and implement their own solver, either by writing their own code or by building on existing solvers already in the framework. Because all solvers (called citizens) are members of the same global class, they can share assigned resources.   &lt;br /&gt;
The main features of the package are:&lt;br /&gt;
&lt;br /&gt;
-	Only function values are required for the optimization.&lt;br /&gt;
-	The user must provide a separate program that can evaluate the objective and nonlinear constraint functions at a given point. &lt;br /&gt;
-	A robust implementation of the Generating Set Search (GSS) solver is supplied, including the capability to handle linear constraints. &lt;br /&gt;
-	Multiple solvers can run simultaneously and are easily configured to share information.&lt;br /&gt;
-	Solvers may share a cache of computed function and constraint evaluations to eliminate duplicate work.&lt;br /&gt;
-	Solvers can initiate and control sub-problems&lt;br /&gt;
Continuation -&amp;gt; [[HOPSACK]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
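As the feature list above notes, HOPSPACK expects a separate user-supplied program that evaluates the objective (and any nonlinear constraints) at a given point. The sketch below is a hypothetical evaluator, assuming a simple convention of a whitespace-separated vector x in an input file and a single f(x) value in an output file; the real request-file layout is defined in the HOPSPACK manual.&lt;br /&gt;

```python
import sys

def objective(x):
    """Hypothetical smooth test objective: a shifted sphere function."""
    return sum((xi - 1.0) ** 2 for xi in x)

def evaluate(in_path, out_path):
    # Read a whitespace-separated vector x (this format is an assumption;
    # see the HOPSPACK manual for the actual request-file layout).
    with open(in_path) as fh:
        x = [float(tok) for tok in fh.read().split()]
    # Write the single objective value HOPSPACK would collect.
    with open(out_path, "w") as fh:
        fh.write(f"{objective(x):.12g}\n")

if __name__ == "__main__":
    evaluate(sys.argv[1], sys.argv[2])
```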
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;LS-DYNA&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From its early development in the 1970s, LS-DYNA has evolved into a general purpose material&lt;br /&gt;
stress, collision, and crash analysis program with many built-in material and structural element&lt;br /&gt;
models. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In recent years, the code has also been adapted for both OpenMP and MPI parallel execution&lt;br /&gt;
on a variety of platforms.  The most recent version, LS-DYNA 7.1.2, is installed on &lt;br /&gt;
ANDY at the CUNY HPC Center under an academic license held by the City College of New York.&lt;br /&gt;
The use of this license to do work that is commercial in any way is prohibited.&lt;br /&gt;
&lt;br /&gt;
Details on LS-DYNA&#039;s use, input deck construction, and execution options can be found in the LS-DYNA&lt;br /&gt;
manual here [http://ftp.lstc.com/user/manuals/ls-dyna_971_manual_k_rev1.pdf]. All files related&lt;br /&gt;
to the HPC Center installation of version 971 (executables and example inputs) are located in:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
/share/apps/lsdyna/default/[bin,examples]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[LSDYNA]].&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Network Simulator-2 (NS2)&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NS2 is a discrete event simulator targeted at networking research. NS2 provides&lt;br /&gt;
substantial support for simulation of TCP, routing, and multicast protocols over&lt;br /&gt;
wired and wireless (local and satellite) networks.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It is installed on BOB at the CUNY HPC Center. For more detailed information look [http://www.isi.edu/nsnam/ns/ here].&lt;br /&gt;
More information about our installation can be found here [[NS2]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;OpenFOAM&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenFOAM is, first and foremost, a library that users may incorporate into their own code(s). OpenFOAM is installed on PENZIAS.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
More information about our installation can be found here [[OpenFOAM]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;OpenSEES&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenSEES, the Open System for Earthquake Engineering Simulation, is an object-oriented, open source software framework.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It allows users to create both serial and parallel finite element computer applications for simulating the response of structural and geotechnical systems subjected to earthquakes and other hazards. OpenSEES is primarily written in C++ and uses several Fortran and C numerical libraries for linear equation solving, and material and element routines. The software is installed on PENZIAS.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ParGAP&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ParGAP is built on top of the GAP system. The latter is a system for computational discrete algebra, with particular emphasis on Computational Group Theory. GAP provides a programming language, a library of thousands of functions implementing algebraic algorithms written in the GAP language, as well as large data libraries of algebraic objects.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The ParGAP (Parallel GAP) package itself provides a way of writing parallel programs using the GAP language. Former names of the package were ParGAP/MPI and GAP/MPI; the word MPI refers to Message Passing Interface, a well-known standard for parallelism. ParGAP is based on the MPI standard, and this distribution includes a subset implementation of MPI, to provide a portable layer with a high level interface to BSD sockets.&lt;br /&gt;
More information about our installation can be found here [[ParGAP]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;SAGE&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Sage can be used to study elementary and advanced, pure and applied mathematics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
This covers a huge range of mathematics, including basic algebra, calculus, elementary to very&lt;br /&gt;
advanced number theory, cryptography, numerical computation, commutative algebra, group&lt;br /&gt;
theory, combinatorics, graph theory, exact linear algebra and much more.&lt;br /&gt;
More information about our installation can be found here [[SAGE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;WRF&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The Weather Research and Forecasting (WRF) model is a specific computer program with dual use for both weather&lt;br /&gt;
forecasting and weather research.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It was created through a partnership that includes the National Oceanic and Atmospheric&lt;br /&gt;
Administration (NOAA), the National Center for Atmospheric Research (NCAR), and more than 150 other organizations&lt;br /&gt;
and universities in the United States and abroad. WRF is the latest numerical model and application to be adopted by NOAA&#039;s&lt;br /&gt;
National Weather Service as well as the U.S. military and private meteorological services. It is also being adopted by&lt;br /&gt;
government and private meteorological services worldwide. More information about our installation can be found here [[WRF]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Economics, Business, Statistics, Analytics ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;R&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
R is a free software environment for statistical computing and graphics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:15px;&amp;quot; &amp;gt;General Notes&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The R language has become a de facto standard among statisticians for statistical software development and data analysis. R is available on the following HPCC servers: Karle, Penzias, Appel, and Andy. Karle is the only machine where R can be used without submitting jobs to the SLURM manager; on all other systems users must submit their R jobs via the SLURM batch scheduler.&lt;br /&gt;
More information about our installation can be found here [[R]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;R-devel&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
R is a language and environment for statistical computing and graphics. R-devel provides both core R userspace and all R development components.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Stata/MP&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Stata is a complete, integrated statistical package that provides tools for data analysis, data management, and graphics. Stata/MP takes advantage of multiprocessor computers. The CUNY HPC Center is licensed to use Stata on up to 8 cores.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Currently Stata/MP is available for users on Karle (karle.csi.cuny.edu). &lt;br /&gt;
More information about our installation can be found here [[STATA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;SAS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
SAS (pronounced &amp;quot;sass&amp;quot;, originally Statistical Analysis System) is an integrated system of software products provided by SAS Institute Inc.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It enables the programmer to perform:&lt;br /&gt;
:* data entry, retrieval, management, and mining&lt;br /&gt;
:* report writing and graphics&lt;br /&gt;
:* statistical analysis&lt;br /&gt;
:* business planning, forecasting, and decision support&lt;br /&gt;
:* operations research and project management&lt;br /&gt;
:* quality improvement&lt;br /&gt;
:* applications development&lt;br /&gt;
:* data warehousing (extract, transform, load)&lt;br /&gt;
:* platform independent and remote computing&lt;br /&gt;
More information about our installation can be found here [[SAS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== General Development Systems ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Coming soon.&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== Tools, Libraries, Compilers ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;CGAL&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The Computational Geometry Algorithms Library (CGAL) offers data structures and algorithms.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt; &lt;br /&gt;
Examples of these are triangulations (2D constrained triangulations, and Delaunay triangulations and periodic triangulations in &lt;br /&gt;
2D and 3D), Voronoi diagrams (for 2D and 3D points, 2D additively weighted Voronoi diagrams, and segment Voronoi diagrams), polygons &lt;br /&gt;
(Boolean operations, offsets, straight skeleton), polyhedra (Boolean operations), arrangements of curves and their applications &lt;br /&gt;
(2D and 3D envelopes, Minkowski sums), mesh generation (2D Delaunay mesh generation and 3D surface and volume mesh &lt;br /&gt;
generation, skin surfaces), geometry processing (surface mesh simplification, subdivision and parameterization, as well as &lt;br /&gt;
estimation of local differential properties, and approximation of ridges and umbilics), alpha shapes, convex hull &lt;br /&gt;
algorithms (in 2D, 3D and dD), search structures (kd trees for nearest neighbor search, and range and segment trees), &lt;br /&gt;
interpolation (natural neighbor interpolation and placement of streamlines), shape analysis, fitting, and distances &lt;br /&gt;
(smallest enclosing sphere of points or spheres, smallest enclosing ellipsoid of points, principal component analysis), and &lt;br /&gt;
kinetic data structures.&lt;br /&gt;
&lt;br /&gt;
The library is installed on PENZIAS.&lt;br /&gt;
&lt;br /&gt;
More information can be found here http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/CGAL. &lt;br /&gt;
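For a flavor of what CGAL provides, here is a minimal 2D convex hull (Andrew&#039;s monotone chain) sketched in Python; this illustrates the algorithm family only and does not use CGAL itself, which is a C++ library.&lt;br /&gt;

```python
# Andrew's monotone chain convex hull -- a pure-Python illustration of
# one of the algorithm families CGAL implements (CGAL itself is C++).

def cross(o, a, b):
    """Cross product of vectors OA and OB; > 0 means a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Return hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                      # build lower hull left-to-right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right-to-left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # endpoints shared, drop duplicates
```

CGAL&#039;s own convex hull routines additionally offer exact arithmetic kernels and robust handling of degenerate input, which a quick sketch like this does not.&lt;br /&gt;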
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GMP&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
GMP is a library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and &lt;br /&gt;
floating-point numbers. There is no practical limit to the precision except those implied by the &lt;br /&gt;
available memory in the machine GMP runs on. GMP has a rich set of functions, and the functions have a &lt;br /&gt;
regular interface. The library is installed on PENZIAS.&lt;br /&gt;
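For a flavor of arbitrary-precision arithmetic, Python&#039;s built-in integers behave similarly; a real GMP program would call the mpz_* C API or a binding such as gmpy2. A quick illustration:&lt;br /&gt;

```python
# Arbitrary-precision integer arithmetic of the kind GMP provides,
# illustrated with Python's built-in big integers (not GMP itself).

def factorial(n):
    """n! computed exactly -- no overflow; precision limited only by memory."""
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

f100 = factorial(100)   # a 158-digit integer, computed exactly
big = 2 ** 1000         # 302 decimal digits
```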
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Gnuplot&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Gnuplot is a portable command-line driven graphing utility. It is installed on the following systems:&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
:* Karle under /usr/bin/gnuplot&lt;br /&gt;
:* Andy under /share/apps/gnuplot/default/bin/gnuplot&lt;br /&gt;
&lt;br /&gt;
Extensive documentation of gnuplot is available at the [http://www.gnuplot.info/  gnuplot&#039;s homepage].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;JULIA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
Julia is a high-level, high-performance dynamic programming language for technical computing, with syntax that is familiar to users of other technical computing environments. Julia is installed on Penzias.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MAGMA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
MAGMA is a library similar to LAPACK but for hybrid architectures. MAGMA provides implementations for CUDA, Intel Xeon Phi, and OpenCL. &lt;br /&gt;
On CUNY HPCC systems, MAGMA is installed in its CUDA variant only on Penzias.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MATHEMATICA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Mathematica is a fully integrated technical computing system that combines fast, high-precision numerical and symbolic computation with data visualization and programming capabilities.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Mathematica is currently installed on the CUNY HPC Center&#039;s ANDY cluster (andy.csi.cuny.edu) and KARLE standalone server (karle.csi.cuny.edu). The basics of running Mathematica on CUNY HPC systems are presented here. Additional information on how to use Mathematica can be found at http://www.wolfram.com/learningcenter/&lt;br /&gt;
&lt;br /&gt;
More information is available in this wiki, find it here [[MATHEMATICA]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MATLAB&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The MATLAB high-performance language for technical computing&lt;br /&gt;
integrates computation, visualization, and programming in an&lt;br /&gt;
easy-to-use environment where problems and solutions are expressed in&lt;br /&gt;
familiar mathematical notation.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Typical uses include:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Math and computation&lt;br /&gt;
&lt;br /&gt;
Algorithm development&lt;br /&gt;
&lt;br /&gt;
Data acquisition&lt;br /&gt;
&lt;br /&gt;
Modeling, simulation, and prototyping&lt;br /&gt;
&lt;br /&gt;
Data analysis, exploration, and visualization&lt;br /&gt;
&lt;br /&gt;
Scientific and engineering graphics&lt;br /&gt;
&lt;br /&gt;
Application development, including graphical user interface building&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[MATLAB]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MET (Model Evaluation Tools)&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MET was developed by the National Center for Atmospheric Research (NCAR) Developmental Testbed Center (DTC) through the generous support of the U.S. Air Force Weather Agency (AFWA) and the National Oceanic and Atmospheric Administration (NOAA).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;MET is designed to be a highly-configurable, state-of-the-art suite of verification tools. It was developed using output from the Weather Research and Forecasting (WRF) modeling system but may be applied to the output of other modeling systems as well.&lt;br /&gt;
&lt;br /&gt;
MET provides a variety of verification techniques, including:&lt;br /&gt;
&lt;br /&gt;
*Standard verification scores comparing gridded model data to point-based observations&lt;br /&gt;
*Standard verification scores comparing gridded model data to gridded observations&lt;br /&gt;
*Spatial verification methods comparing gridded model data to gridded observations using neighborhood, object-based, and intensity-scale decomposition approaches&lt;br /&gt;
*Probabilistic verification methods comparing gridded model data to point-based or gridded observations&lt;br /&gt;
&lt;br /&gt;
More information about use and set-up can be found here [[MET]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Migrate&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Migrate estimates population parameters, effective population sizes and migration rates&lt;br /&gt;
of n populations, using genetic data.  It uses a coalescent theory approach taking into&lt;br /&gt;
account the history of mutations and the uncertainty of the genealogy.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;The estimates of the parameter values are achieved by either a Maximum likelihood (ML) approach or Bayesian&lt;br /&gt;
inference (BI). Migrate&#039;s output is presented in a text file and in a PDF file. The PDF file&lt;br /&gt;
eventually will contain all possible analyses including histograms of posterior distributions.&lt;br /&gt;
More information about our installation can be found here [[MIGRATE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Python&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Python is a programming language that lets you work more quickly and integrate your systems more effectively. You can learn to use Python and see almost immediate gains in productivity and lower maintenance costs. [http://www.python.org/]&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
There are two supported versions installed on the Andy system: &lt;br /&gt;
&lt;br /&gt;
* Python 3.1.3 located under /share/apps/python/3.1.3/bin&lt;br /&gt;
* Python 2.7.3 located under /share/apps/epd/7.3-2/bin&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[PYTHON]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;SAMTOOLS&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
SAMTOOLS provide various utilities for manipulating alignments in the SAM format, including sorting,&lt;br /&gt;
merging, indexing and generating alignments in a per-position format.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
SAM (Sequence Alignment/Map) format is a generic format for storing large nucleotide sequence alignments. SAM is a compact format&lt;br /&gt;
that:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Is flexible enough to store all the alignment information generated by various alignment programs;&lt;br /&gt;
&lt;br /&gt;
Is simple enough to be easily generated by alignment programs or converted from existing formats;&lt;br /&gt;
&lt;br /&gt;
Allows most operations on the alignment to work without loading the whole alignment into memory;&lt;br /&gt;
&lt;br /&gt;
Allows the file to be indexed by genomic position to efficiently retrieve all reads aligning to a locus.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
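Since SAM is plain tab-separated text, a minimal reader for the mandatory fields is easy to sketch. The following is illustrative Python, not part of SAMTOOLS; the field names follow the SAM specification.&lt;br /&gt;

```python
# Minimal reader for the 11 mandatory SAM fields -- an illustration of
# the format's simplicity, not a replacement for SAMTOOLS.

SAM_FIELDS = ["QNAME", "FLAG", "RNAME", "POS", "MAPQ",
              "CIGAR", "RNEXT", "PNEXT", "TLEN", "SEQ", "QUAL"]

def parse_sam_line(line):
    """Split one alignment line into a dict of the mandatory fields."""
    values = line.rstrip("\n").split("\t")[:11]
    record = dict(zip(SAM_FIELDS, values))
    record["FLAG"] = int(record["FLAG"])   # bit field describing the read
    record["POS"] = int(record["POS"])     # 1-based leftmost position
    record["mapped"] = not (record["FLAG"] & 0x4)  # 0x4 = unmapped
    return record
```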
More information about our installation can be found here [[SAMTOOLS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Thrust Library (CUDA)&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Thrust is a C++ template library for CUDA based on the Standard Template Library (STL). Thrust allows you&lt;br /&gt;
to implement high performance parallel applications with minimal programming effort through a high-level&lt;br /&gt;
interface that is fully interoperable with CUDA C.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Thrust has been integrated into the default&lt;br /&gt;
CUDA distribution. The HPC Center is currently running CUDA as the default on PENZIAS, which includes the&lt;br /&gt;
Thrust library. More information about our installation can be found here [[THRUST]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Xmgrace&amp;lt;/p&amp;gt;&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Grace is a WYSIWYG 2D plotting tool for the X Window System and M*tif. Xmgrace is developed at the Plasma Laboratory, Weizmann Institute of Science. More information about its capabilities can be found at the web page http://plasma-gate.weizmann.ac.il/Grace/&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Grace is installed on Karle. To use it from the command line, log in to Karle as usual and start Grace by typing &amp;quot;xmgrace&amp;quot; followed by return, or alternatively use the full path: &amp;quot;/share/apps/xmgrace/default/grace/bin/xmgrace&amp;quot;&lt;br /&gt;
In order to use Grace in GUI mode, log in to Karle with the -X option (see [http://wiki.csi.cuny.edu/cunyhpc/index.php/Main_Page#X11_Forwarding_or_Tunneling this article]&lt;br /&gt;
for details) and start Xmgrace as described above.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Alphabetical List ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== A == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ADCIRC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ADCIRC is a system of programs for solving time-dependent, free-surface, circulation and transport problems in two and three dimensions.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;  These programs utilize the finite element method in space allowing the use of highly flexible, unstructured grids. The ADCIRC distribution includes and integrates the METIS tool for unstructured grid generation. In addition, ADCIRC includes a distribution of SWAN to which it can be coupled to add a shore wave simulation model. Typical ADCIRC applications have included: (i) modeling tides and wind driven circulation, (ii) analysis of hurricane storm surge and flooding, (iii) dredging feasibility and material disposal studies, (iv) larval transport studies, (v) near shore marine operations. For more detail on using ADCIRC, please visit the ADCIRC website here [http://adcirc.org/index.html] and read the ADCIRC manual [http://adcirc.org/documentv49/ADCIRC_title_page.html]. Details on using SWAN with ADCIRC can be found here [http://www.caseydietrich.com/swanadcirc] and at the SWAN web site [http://swanmodel.sourceforge.net]. More information about use and set-up can be found here [[ADCIRC]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;AMBER (Assisted Model Building with Energy Refinement)&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
Amber is the collective name for a suite of programs for classical bio-molecular simulations. &lt;br /&gt;
The name &amp;quot;Amber&amp;quot; also denotes the family of potentials (force fields) used with Amber &lt;br /&gt;
software. Here we discuss only simulation packages, but not the force fields or free tools&lt;br /&gt;
available via AmberTools package. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/amber&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ANVIO&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
Anvio is an analysis and visualization platform for genomics data. Anvio allows various types of workflows to be &lt;br /&gt;
established. [[ANVIO]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;AUGUSTUS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
AUGUSTUS is a program that predicts genes in eukaryotic genomic sequences. Augustus is gene-finding software based on Hidden Markov Models (HMMs),&lt;br /&gt;
described in papers by Stanke and Waack (2003), Stanke et al. (2006a, 2006b), and Stanke et al. (2008). The local version of the program is installed on &lt;br /&gt;
Penzias. More information can be found here: [[AUGUSTUS]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;AUTODOCK&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
AutoDock is a suite of automated docking tools.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt; It is designed to predict how small molecules, such as substrates or drug candidates, bind to a receptor of known 3D structure.  AutoDock actually consists of two main programs: &#039;&#039;autodock&#039;&#039; itself performs the docking of the ligand to a set of grids describing the target protein; &#039;&#039;autogrid&#039;&#039; pre-calculates these grids. More information about the software may be found at the autodock web-page [http://autodock.scripps.edu/]. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/autodock&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== B == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BAMOVA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
Bamova is a package for genetic analysis of a wide range of organisms on the basis of &lt;br /&gt;
next-generation sequence data. The software implements Bayesian Analysis of Molecular Variance and &lt;br /&gt;
different likelihood models for three different types of molecular data &lt;br /&gt;
(including two models for high throughput sequence data). For more detail on BAMOVA please visit the BAMOVA web site [http://www.uwyo.edu/buerkle/software/bamova] and manual &lt;br /&gt;
here [http://www.uwyo.edu/buerkle/software/bamova/bamova_manual_1.0.pdf]. Further information can also be found here [[BAMOVA]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BAYESCAN&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BAYESCAN is a population genomics software package. It identifies outlier loci and is applicable &lt;br /&gt;
to both dominant and codominant data. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;BayeScan aims to identify candidate loci under natural selection from genetic data, using differences in allele frequencies between populations. BayeScan is based on the multinomial-Dirichlet model. One of the scenarios covered consists of an island model in which subpopulation allele frequencies are correlated through a common migrant gene pool, from which they differ in varying degrees. The difference in allele frequency between this common gene pool and each subpopulation is measured by a&lt;br /&gt;
subpopulation-specific FST coefficient. Therefore, this formulation can consider realistic ecological scenarios where the effective size and the immigration rate may differ among subpopulations.&lt;br /&gt;
More detailed information on Bayescan can be found at the web site here [http://cmpg.unibe.ch/software/bayescan/index.html] and in the manual here [http://cmpg.unibe.ch/software/bayescan/files/BayeScan2.1_manual.pdf]. More information about our installation can be found here [[BAYESCAN]].&lt;br /&gt;
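The FST coefficient mentioned above measures how far each subpopulation&#039;s allele frequencies have diverged from the common gene pool. A textbook two-allele version (the classical estimator, not BayeScan&#039;s multinomial-Dirichlet model) can be sketched as:&lt;br /&gt;

```python
# Classical FST for a biallelic locus: FST = (HT - HS) / HT, where HS is
# the mean within-subpopulation heterozygosity and HT the heterozygosity
# of the pooled population.  Illustrative only -- BayeScan estimates
# subpopulation-specific FST with a Bayesian model.

def fst(allele_freqs):
    """allele_freqs: frequency of one allele in each subpopulation."""
    n = len(allele_freqs)
    hs = sum(2 * p * (1 - p) for p in allele_freqs) / n
    p_bar = sum(allele_freqs) / n
    ht = 2 * p_bar * (1 - p_bar)
    return (ht - hs) / ht

# Identical frequencies give FST = 0; strongly diverged ones push FST toward 1.
```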
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BEAST&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BEAST is a powerful and flexible evolutionary analysis package for molecular sequence variation. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The package implements a family of Markov chain Monte Carlo (MCMC) algorithms for Bayesian phylogenetic inference, divergence time dating, coalescent analysis, phylogeography and related molecular evolutionary analyses. It is a cross-platform Java program for Bayesian MCMC analysis of molecular sequences. It is entirely orientated towards rooted, time-measured phylogenies inferred using strict or relaxed molecular clock models. It can be used as a method of reconstructing phylogenies, but is also a framework for testing evolutionary hypotheses without conditioning on a single tree topology.  BEAST uses MCMC to average over tree space, so that each tree is weighted proportional to its posterior probability. The distribution includes a simple to use user-interface program called &#039;BEAUti&#039; for setting up standard analyses and a suite of programs for analysing the results. For more detail on BEAST (and BEAUTi) please visit the BEAST web site [http://beast.bio.ed.ac.uk/Main_Page]. More information about our installation can be found here [http://wiki.csi.cuny.edu/cunyhpc/index.php/Template:BEAST BEAST].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BEST&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BEST is an application aimed to estimate gene trees and the species tree from multilocus sequences.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The program uses information from multiple gene trees and performs a Bayesian analysis to estimate the &lt;br /&gt;
topology of the species tree, divergence times and population sizes.  &lt;br /&gt;
&lt;br /&gt;
It provides a new approach for estimating&lt;br /&gt;
mutation-rate-based phylogenetic relationships among species.  Its method accounts for deep coalescence,&lt;br /&gt;
but not for other complicating issues such as horizontal transfer or gene duplication. The&lt;br /&gt;
program works in conjunction with the popular Bayesian phylogenetics package MrBayes&lt;br /&gt;
(Ronquist and Huelsenbeck, Bioinformatics, 2003).  BEST&#039;s parameters are defined using&lt;br /&gt;
the &#039;prset&#039; command from MrBayes.  Details on BEST&#039;s capabilities and options are available&lt;br /&gt;
at the BEST web site here [http://www.stat.osu.edu/~dkp/BEST/introduction]. More information&lt;br /&gt;
about our installation is available here [[BEST]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BOWTIE2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BOWTIE2 is an ultrafast and memory-efficient tool for aligning sequencing reads to long reference sequences. It is particularly good at aligning reads of about 50 up to 100s or 1,000s of characters, and particularly good at aligning to relatively long (e.g. mammalian) genomes.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
BOWTIE2 indexes the genome with an FM Index to keep its memory&lt;br /&gt;
footprint small: for the human genome, its memory footprint is typically around 3.2 GB. BOWTIE2 supports gapped,&lt;br /&gt;
local, and paired-end alignment modes. BOWTIE2 is part of a sequence alignment and analysis tool chain developed&lt;br /&gt;
at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics&lt;br /&gt;
and Computational Biology.  The other tools in this collection, CUFFLINKS, SAMTOOLS, and TOPHAT are also installed at&lt;br /&gt;
the CUNY HPC Center. Additional information can be found at the BOWTIE2 home page here [http://bowtie-bio.sourceforge.net/bowtie2/index.shtml].&lt;br /&gt;
Information about our installation can be found here [[BOWTIE2]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BPP2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
BPP2 uses a Bayesian modeling approach to generate the posterior probabilities of species assignments taking into account uncertainties due to unknown gene trees and the ancestral coalescent process. For tractability, it relies on a user-specified guide tree to avoid integrating over all possible species delimitations.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Additional information can be found at the download site here [http://abacus.gene.ucl.ac.uk/software.html]. More information about our installation can be found here [[BPP2]].&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;BROWNIE&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
BROWNIE is a program for analyzing rates of continuous character evolution and looking for substantial rate differences in different parts of a tree using likelihood&lt;br /&gt;
ratio tests and Akaike Information Criterion (AIC) statistics. It now also implements many other methods for examining trait evolution and methods for doing species&lt;br /&gt;
delimitation. More information about our installation can be found here [[BROWNIE]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== C == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;CGAL&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The Computational Geometry Algorithms Library (CGAL) offers geometric data structures and algorithms.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt; &lt;br /&gt;
Examples of these are triangulations (2D constrained triangulations, and Delaunay triangulations and periodic triangulations in &lt;br /&gt;
2D and 3D), Voronoi diagrams (for 2D and 3D points, 2D additively weighted Voronoi diagrams, and segment Voronoi diagrams), polygons &lt;br /&gt;
(Boolean operations, offsets, straight skeleton), polyhedra (Boolean operations), arrangements of curves and their applications &lt;br /&gt;
(2D and 3D envelopes, Minkowski sums), mesh generation (2D Delaunay mesh generation and 3D surface and volume mesh &lt;br /&gt;
generation, skin surfaces), geometry processing (surface mesh simplification, subdivision and parameterization, as well as &lt;br /&gt;
estimation of local differential properties, and approximation of ridges and umbilics), alpha shapes, convex hull &lt;br /&gt;
algorithms (in 2D, 3D and dD), search structures (kd trees for nearest neighbor search, and range and segment trees), &lt;br /&gt;
interpolation (natural neighbor interpolation and placement of streamlines), shape analysis, fitting, and distances &lt;br /&gt;
(smallest enclosing sphere of points or spheres, smallest enclosing ellipsoid of points, principal component analysis), and &lt;br /&gt;
kinetic data structures.&lt;br /&gt;
&lt;br /&gt;
The library is installed on PENZIAS.&lt;br /&gt;
&lt;br /&gt;
More information can be found here http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/CGAL. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;CONSED&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CONSED is a DNA sequence analysis finishing tool that provides sequence viewing, editing, alignment, and&lt;br /&gt;
assembly capabilities from an X Windows graphical user interface (GUI).  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It makes extensive use of other non-graphical&lt;br /&gt;
and underlying sequence analysis tools including PHRED, PHRAP, and CROSSMATCH that may also be used separately&lt;br /&gt;
and are described elsewhere in this document.  It also includes a viewer called BAMVIEW.  The CONSED tool chain is&lt;br /&gt;
developed and maintained at the University of Washington and is described&lt;br /&gt;
more completely here [http://bozeman.mbt.washington.edu/consed/consed.html]&lt;br /&gt;
CONSED is provided at the CUNY HPC Center under an academic license that allows use, but not the copying or out&lt;br /&gt;
bound transfer of any of the executables or files distributed under this academic license.  The license is not &lt;br /&gt;
transferable in any way and users wishing to run the application at their own site must acquire a license directly&lt;br /&gt;
from the authors.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center supports CONSED version 23.0 for interactive use on KARLE.  CONSED 23.0 and the tool&lt;br /&gt;
chain described above are also installed on ANDY to allow for the batch use of the underlying support tools mentioned above&lt;br /&gt;
and described in detail below.  In general, running GUI-based applications on ANDY&#039;s login node is discouraged.  There&lt;br /&gt;
should be little need to do this: KARLE sits on the periphery of the CUNY HPC network, making login there direct, and&lt;br /&gt;
KARLE shares its HOME directory file system with ANDY, so files created on either system are immediately available on&lt;br /&gt;
the other.&lt;br /&gt;
&lt;br /&gt;
Rather than rewrite portions of the CONSED manual here, users are directed to the manual&#039;s &amp;quot;Quick Tour&amp;quot; section&lt;br /&gt;
here [http://bozeman.mbt.washington.edu/consed/distributions/README.23.0.txt] and asked to walk through some&lt;br /&gt;
of the exercises after logging into KARLE.  If problems or questions come up, please post them to &amp;quot;hpchelp@csi.cuny.edu&amp;quot;.&lt;br /&gt;
The CONSED 23.0 distribution is installed on KARLE in the following directory:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/share/apps/consed/default&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
All the files in the distribution can be found there.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;CP2K&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CP2K is a program to perform atomistic and molecular simulations of solid state, liquid, molecular, and biological&lt;br /&gt;
systems.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It provides a general framework for different methods such as e.g., density functional theory (DFT) using&lt;br /&gt;
a mixed Gaussian and plane waves approach (GPW) and classical pair and many-body potentials. CP2K provides&lt;br /&gt;
state-of-the-art methods for efficient and accurate atomistic simulations. More information about our installation &lt;br /&gt;
can be found here [[CP2K]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;CUFFLINKS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
CUFFLINKS assembles transcripts, estimates their abundances, and tests for differential expression and regulation in&lt;br /&gt;
RNA-Seq samples.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It accepts aligned RNA-Seq reads and assembles the alignments into a parsimonious set of transcripts.&lt;br /&gt;
CUFFLINKS then estimates the relative abundances of these transcripts based on how many reads support each one, taking&lt;br /&gt;
into account biases in library preparation protocols.  CUFFLINKS is part of a sequence alignment and analysis tool chain developed&lt;br /&gt;
at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics&lt;br /&gt;
and Computational Biology.  The other tools in this collection, BOWTIE, SAMTOOLS, and TOPHAT are also installed at&lt;br /&gt;
the CUNY HPC Center.  Additional information can be found at the CUFFLINKS home page here [http://cole-trapnell-lab.github.io/cufflinks/].&lt;br /&gt;
More information about our installation can be found here [[CUFFLINKS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== D == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;DL_POLY&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
DL_POLY is a general purpose molecular dynamics simulation package developed at Daresbury Laboratory by W. Smith, T.R. Forester and I.T. Todorov. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Both serial and parallel versions are available. The original package was developed by the Molecular Simulation Group (now part of the Computational Chemistry Group, MSG) at Daresbury Laboratory under the auspices of the Engineering and Physical Sciences Research Council (EPSRC) for the EPSRC&#039;s Collaborative Computational Project for the Computer Simulation of Condensed Phases ( CCP5). Later developments were also supported by the Natural Environment Research Council through the eMinerals project. The package is the property of the Central Laboratory of the Research Councils, UK. More information about our installation and use can be found here [[DL_POLY]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== E == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ExaML&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ExaML (Exascale Maximum Likelihood) is a code for phylogenetic inference using MPI. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The code is installed only on Penzias and implements the popular RAxML search algorithm for maximum likelihood based inference of phylogenetic trees. &lt;br /&gt;
&lt;br /&gt;
It uses a radically new MPI parallelization approach that yields improved parallel efficiency, in particular on partitioned multi-gene or whole-genome datasets.&lt;br /&gt;
&lt;br /&gt;
When using ExaML please cite the following paper:&lt;br /&gt;
&lt;br /&gt;
Alexey M. Kozlov, Andre J. Aberer, Alexandros Stamatakis: &amp;quot;ExaML Version 3: A Tool for Phylogenomic Analyses on Supercomputers.&amp;quot; Bioinformatics (2015) 31 (15): 2577-2579.&lt;br /&gt;
&lt;br /&gt;
It is up to 4 times faster than RAxML-Light [1].&lt;br /&gt;
&lt;br /&gt;
Like RAxML-Light, ExaML also implements checkpointing, SSE3 and AVX vectorization, and memory-saving techniques.&lt;br /&gt;
&lt;br /&gt;
[1] A. Stamatakis, A.J. Aberer, C. Goll, S.A. Smith, S.A. Berger, F. Izquierdo-Carrasco: &amp;quot;RAxML-Light: A Tool for computing TeraByte Phylogenies&amp;quot;, Bioinformatics 2012; doi: 10.1093/bioinformatics/bts309.&lt;br /&gt;
&lt;br /&gt;
The run script for a parallel job is analogous to the one for running RAxML on PENZIAS and ANDY.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ExaBayes&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ExaBayes is a software package for Bayesian tree inference. It is particularly suitable for large-scale analyses on computer clusters. It is installed on the PENZIAS server at the CUNY HPC Center. &lt;br /&gt;
The installed package is the MPI-parallel version. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Availability:&#039;&#039;&#039; PENZIAS&lt;br /&gt;
&#039;&#039;&#039;Module file:&#039;&#039;&#039; exabayes&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Citation&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
Fredrik Ronquist, Maxim Teslenko, Paul van der Mark, Daniel L Ayres, Aaron Darling, Sebastian Höhna, Bret Larget, Liang Liu, Marc A Suchard, and John P Huelsenbeck. MrBayes 3.2: efficient Bayesian phylogenetic inference and model choice across a large model space. Systematic biology, 61(3):539--42, May 2012.&lt;br /&gt;
&lt;br /&gt;
Alexei J Drummond, Marc A Suchard, Dong Xie, and Andrew Rambaut. Bayesian phylogenetics with BEAUti and the BEAST 1.7. Molecular biology and evolution, 29(8):1969--73, August 2012. &lt;br /&gt;
&lt;br /&gt;
Clemens Lakner, Paul van der Mark, John P Huelsenbeck, Bret Larget, and Fredrik Ronquist. Efficiency of Markov chain Monte Carlo tree proposals in Bayesian phylogenetics. Systematic biology, 57(1):86--103, February 2008. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Use:&#039;&#039;&#039; An example SLURM script to run ExaBayes on PENZIAS is given below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition=production&lt;br /&gt;
#SBATCH --job-name=&amp;lt;name_of_job&amp;gt;&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=2&lt;br /&gt;
&lt;br /&gt;
# Change to the directory from which the job was submitted&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
mpirun -np 2 exabayes &amp;lt;input_file&amp;gt; &amp;gt; output_file&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
More information about the application, along with sample workflows, is available on the ExaBayes web site:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://sco.h-its.org/exelixis/web/software/exabayes/manual/index.html#sec-11&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== F == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;FDPPDIV&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
FDPPDiv is a program for estimating divergence times on a fixed, rooted tree topology. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
FDPPDiv offers two alternative approaches to divergence time estimation. &lt;br /&gt;
The DPPDiv part refers to the Dirichlet Process Prior (DPP) model for divergence &lt;br /&gt;
time estimation, and the F prefix (for Fossil) refers to the new Fossil Birth-Death approach. &lt;br /&gt;
More information about our installation can be found here [[FDPPDIV]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== G == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GAMESS-US&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GAMESS is a program for ab initio molecular quantum chemistry.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Briefly, GAMESS can compute SCF wavefunctions ranging from RHF, ROHF, UHF, GVB, and MCSCF. Correlation corrections to these SCF wavefunctions include Configuration Interaction, second order perturbation Theory, and Coupled-Cluster approaches, as well as the Density Functional Theory approximation. Excited states can be computed by CI, EOM, or TD-DFT procedures. Nuclear gradients are available, for automatic geometry optimization, transition state searches, or reaction path following. Computation of the energy hessian permits prediction of vibrational frequencies, with IR or Raman intensities. Solvent effects may be modeled by the discrete Effective Fragment potentials, or continuum models such as the Polarizable Continuum Model. Numerous relativistic computations are available, including infinite order two component scalar corrections, with various spin-orbit coupling options. The Fragment Molecular Orbital method permits use of many of these sophisticated treatments to be used on very large systems, by dividing the computation into small fragments. Nuclear wavefunctions can also be computed, in VSCF, or with explicit treatment of nuclear orbitals by the NEO code. More information, including code, can be found here [[GAMESS-US]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GARLI&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GARLI is a program that performs phylogenetic inference using the maximum-likelihood criterion.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Several sequence types are supported, including nucleotide, amino acid and codon. Version 2.0 adds support for&lt;br /&gt;
partitioned models and morphology-like data types. It is usable on all operating systems, and is written and&lt;br /&gt;
maintained by Derrick Zwickl at the University of Texas at Austin.  Additional information can be found&lt;br /&gt;
on the GARLI Wiki here [https://www.nescent.org/wg_garli/Main_Page]. More information about our installation &lt;br /&gt;
can be found here [[GARLI]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GAUSS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GAUSS is an easy-to-use data analysis, mathematical, and statistical environment based on the powerful, fast, and efficient GAUSS Matrix Programming Language.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GAUSS is used to solve real world problems and data analysis problems of exceptionally large &lt;br /&gt;
scale. GAUSS is currently available on ANDY. At the CUNY HPC Center&lt;br /&gt;
GAUSS is typically run in serial mode. (Note:  GAUSS should not be confused with the&lt;br /&gt;
computational chemistry application Gaussian.) More information about our installation can &lt;br /&gt;
be found here [[GAUSS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Gaussian09&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Gaussian09 is third-party, commercially licensed software from Gaussian, Inc. It is a set of programs for calculating electronic structure.  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Gaussian09 is available for general use only on ANDY. The Gaussian User Guide can be found at [http://www.gaussian.com]. More information about our installation can be found here [[GAUSSIAN09]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GMP&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
GMP is a library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and &lt;br /&gt;
floating-point numbers. There is no practical limit to the precision except the ones implied by the &lt;br /&gt;
available memory in the machine GMP runs on. GMP has a rich set of functions, and the functions have a &lt;br /&gt;
regular interface. The library is installed on PENZIAS.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Gnuplot&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Gnuplot is a portable command-line driven graphing utility. It is installed on the following systems:&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
:* Karle under /usr/bin/gnuplot&lt;br /&gt;
:* Andy under /share/apps/gnuplot/default/bin/gnuplot&lt;br /&gt;
&lt;br /&gt;
Extensive gnuplot documentation is available at the [http://www.gnuplot.info/ gnuplot homepage].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GenomePop2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GenomePop2 is a newer and specialized version of the older program GenomePop. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GenomePop2 is designed to manage SNPs under more flexible and useful settings that are controlled by the user.  &lt;br /&gt;
If you need models with more than two alleles, you should use the older GenomePop version of the program.  &lt;br /&gt;
&lt;br /&gt;
GenomePop2 allows the forward simulation of sequences of biallelic positions. As in the previous version, a number of evolutionary&lt;br /&gt;
and demographic settings are allowed. Several populations under any migration model can be implemented. Each population consists&lt;br /&gt;
of a number N of individuals.  Each individual is represented by one (haploids) or two (diploids) chromosomes with constant or variable&lt;br /&gt;
(hotspots) recombination between binary sites. The fitness model is multiplicative, with each derived allele having a multiplicative effect&lt;br /&gt;
of (1 - s*h - E) on the global fitness value. By default E = 0, and h = 0.5 in diploids but 1 in homozygotes or in haploids. Selective nucleotide&lt;br /&gt;
sites undergoing directional selection (positive or negative) in different populations can be defined. In addition, bottlenecks and/or&lt;br /&gt;
population expansion scenarios can be set up by the user for a desired number of generations. Several runs can be executed, and&lt;br /&gt;
a sample of user-defined size is obtained for each run and population.  For more detail on how to use GenomePop2, please visit the&lt;br /&gt;
web site here [http://webs.uvigo.es/acraaj/GenomePop2.htm]. More information about our installation can be found here [[GENOMEPOP2]].&lt;br /&gt;
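The multiplicative fitness model just described can be sketched in a few lines of Python (a simplified illustration, not GenomePop2 code; the per-allele factor (1 - s*h - E) and the defaults E = 0 and h = 0.5 follow the description above):

```python
# Sketch of the multiplicative fitness model described above: each
# derived allele contributes a factor (1 - s*h - E) to the individual's
# global fitness.  Illustration only -- not GenomePop2 code.

def site_factor(s, h=0.5, E=0.0):
    """Fitness factor for one derived allele (defaults as in the text)."""
    return 1.0 - s * h - E

def global_fitness(sites):
    """Product of per-allele factors; `sites` is a list of (s, h, E)."""
    w = 1.0
    for s, h, E in sites:
        w *= site_factor(s, h, E)
    return w

# Two heterozygous sites (h = 0.5) and one homozygous site (h = 1),
# all with selection coefficient s = 0.01 and no epistatic term:
print(global_fitness([(0.01, 0.5, 0.0), (0.01, 0.5, 0.0), (0.01, 1.0, 0.0)]))
```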
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GROMACS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GROMACS (Groningen Machine for Chemical Simulations)&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
GROMACS is a full-featured suite of free software, licensed under the GNU&lt;br /&gt;
General Public License to perform molecular dynamics simulations -- in other words, to simulate the behavior of molecular&lt;br /&gt;
systems with hundreds to millions of particles using Newton&#039;s equations of motion.  It is primarily used for research on&lt;br /&gt;
proteins, lipids, and polymers, but can be applied to a wide variety of chemical and biological research questions.&lt;br /&gt;
&lt;br /&gt;
Details and submission scripts for production runs can be found at:&lt;br /&gt;
http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/gromacs&lt;br /&gt;
Please note that preparing a molecular system for simulation with the GROMACS tools cannot be done on the login node. Instead, users must either use their own workstations or the interactive or development queues.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;GPAW&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It uses real-space uniform grids and multigrid methods, atom-centered basis-functions or&lt;br /&gt;
plane-waves. GPAW calculations are controlled through scripts written in the programming language &lt;br /&gt;
Python. GPAW relies on the Atomic Simulation Environment (ASE), which is a Python package&lt;br /&gt;
that helps to describe atoms. The ASE package also handles molecular dynamics, analysis, &lt;br /&gt;
visualization, geometry optimization and more. More information about our installation can &lt;br /&gt;
be found here [[GPAW]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== H ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Hapsembler&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Hapsembler is a haplotype-specific genome assembly toolkit that is designed for genomes that are rich in SNPs and other types of polymorphism. Hapsembler can be used to assemble reads from a variety of platforms including Illumina and Roche/454.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;  Hapsembler is currently installed on the Appel system. To access the Hapsembler binaries, load the hapsembler module with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load hapsembler&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;HOOMD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
HOOMD performs general purpose particle dynamics simulations, taking advantage of NVIDIA GPUs to attain a level of performance&lt;br /&gt;
equivalent to many processor cores on a fast cluster.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Unlike some other applications in the particle and molecular dynamics space, HOOMD developers have worked to implement &lt;br /&gt;
all of the code&#039;s computationally intensive kernels on the GPU, although currently only single node, single-GPU or &lt;br /&gt;
OpenMP-GPU runs are possible. There is no MPI-GPU or distributed parallel GPU version available at this time.&lt;br /&gt;
&lt;br /&gt;
HOOMD&#039;s object-oriented design patterns make it both versatile and expandable. Various types of potentials, integration methods&lt;br /&gt;
and file formats are currently supported, and more are added with each release. The code is available and open source, so anyone&lt;br /&gt;
can write a plugin or change the source to add additional functionality.  Simulations are configured and run using simple python&lt;br /&gt;
scripts, allowing complete control over the force field choice, integrator, all parameters, how many time steps are run, etc.&lt;br /&gt;
The scripting system is designed to be as simple as possible to the non-programmer.&lt;br /&gt;
&lt;br /&gt;
The HOOMD development effort is led by the Glotzer group at the University of Michigan, but many groups from different universities&lt;br /&gt;
have contributed code that is now part of the HOOMD main package, see the credits page for the full list. The HOOMD website and&lt;br /&gt;
documentation are available here [http://codeblue.umich.edu/hoomd-blue/about.html]. More information about our installation can be&lt;br /&gt;
found here [[HOOMD]]. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;HOPSPACK&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
HOPSPACK (Hybrid Optimization Parallel Search Package) is designed to help users solve a wide range of derivative-free optimization problems.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;The latter can be noisy, non-convex, or non-smooth.  The basic optimization problem addressed is to minimize an objective function f(x) of n unknowns, subject to the constraints: A_I x ≥ b_I, A_E x = b_E, c_I(x) ≥ 0, c_E(x) = 0, and l ≤ x ≤ u.&lt;br /&gt;
The first two constraints specify linear inequalities and equalities with coefficient matrices A_I and A_E. The next two constraints describe nonlinear inequalities and equalities captured in the functions c_I(x) and c_E(x). The final constraints denote lower and upper bounds on the variables. HOPSPACK allows variables to be continuous or integer-valued and has provisions for multi-objective optimization problems. In general, the functions f(x), c_I(x), and c_E(x) can be noisy and nonsmooth, although most algorithms perform best on deterministic functions with continuous derivatives.&lt;br /&gt;
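The constraint classes in the problem formulation above can be made concrete with a short feasibility check in plain Python (an illustration of the constraint types only, not HOPSPACK code; the matrices, bounds, and tolerance below are made up for the example):

```python
# Sketch of the HOPSPACK constraint classes: bounds l <= x <= u,
# linear inequalities A_I x >= b_I, linear equalities A_E x = b_E,
# and nonlinear constraints c_I(x) >= 0 and c_E(x) = 0.
# Illustration only -- not HOPSPACK code.

def is_feasible(x, A_I, b_I, A_E, b_E, c_I, c_E, l, u, tol=1e-9):
    dot = lambda row, v: sum(a * b for a, b in zip(row, v))
    if any(xi < li - tol or xi > ui + tol for xi, li, ui in zip(x, l, u)):
        return False                       # bounds violated
    if any(dot(row, x) < bi - tol for row, bi in zip(A_I, b_I)):
        return False                       # linear inequality violated
    if any(abs(dot(row, x) - bi) > tol for row, bi in zip(A_E, b_E)):
        return False                       # linear equality violated
    if any(g(x) < -tol for g in c_I):
        return False                       # nonlinear inequality violated
    if any(abs(h(x)) > tol for h in c_E):
        return False                       # nonlinear equality violated
    return True

# Tiny made-up instance: x0 + x1 >= 1, x0 - x1 = 0, and 0 <= x <= 1.
A_I, b_I = [[1.0, 1.0]], [1.0]
A_E, b_E = [[1.0, -1.0]], [0.0]
print(is_feasible([0.5, 0.5], A_I, b_I, A_E, b_E, [], [], [0, 0], [1, 1]))  # True
```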
&lt;br /&gt;
Users can design and implement their own solvers, either by writing new code or by building on existing solvers already in the framework. Because all solvers (called citizens) are members of the same global class, they can share assigned resources.   &lt;br /&gt;
The main features of the package are:&lt;br /&gt;
&lt;br /&gt;
* Only function values are required for the optimization.&lt;br /&gt;
* The user must provide a separate program that can evaluate the objective and nonlinear constraint functions at a given point. &lt;br /&gt;
* A robust implementation of the Generating Set Search (GSS) solver is supplied, including the capability to handle linear constraints. &lt;br /&gt;
* Multiple solvers can run simultaneously and are easily configured to share information.&lt;br /&gt;
* Solvers may share a cache of computed function and constraint evaluations to eliminate duplicate work.&lt;br /&gt;
* Solvers can initiate and control sub-problems.&lt;br /&gt;
Continuation -&amp;gt; [[HOPSACK]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;HONDO PLUS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Hondo Plus is a versatile electronic structure code that combines work from&lt;br /&gt;
the original Hondo application developed by Harry King in the lab of Michel Dupuis&lt;br /&gt;
and John Rys, and that of numerous subsequent contributors. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It is currently distributed from the research lab of Dr. Donald Truhlar at the University &lt;br /&gt;
of Minnesota.  Part of the advantage of Hondo Plus is the availability of source&lt;br /&gt;
implementations of a wide variety of model chemistries developed over its life time&lt;br /&gt;
that researchers can adapt to their particular needs.  The license to use the code requires&lt;br /&gt;
a literature citation which is documented in the Hondo Plus 5.1 manual found&lt;br /&gt;
at:&lt;br /&gt;
&lt;br /&gt;
http://comp.chem.umn.edu/hondoplus/HONDOPLUS_Manual_v5.1.2007.2.17.pdf &lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[HONDO PLUS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;HUMAnN2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
HUMAnN is a pipeline for efficiently and accurately profiling the presence/absence and abundance of microbial pathways in a community from metagenomic or metatranscriptomic sequencing data (typically millions of short DNA/RNA reads). HUMAnN2 is the next generation of HUMAnN (HMP Unified Metabolic Analysis Network). Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/humann2&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== I ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;IMa2&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The IMa2 application performs basic ‘Isolation with Migration’ calculations using Bayesian inference and Markov &lt;br /&gt;
chain Monte Carlo methods. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The only major conceptual addition to IMa2 that makes it different from the&lt;br /&gt;
original IMa  program is that it can handle data from multiple populations. This requires that the user &lt;br /&gt;
specify a phylogenetic tree. Importantly, the tree must be rooted, and the sequence in time of internal&lt;br /&gt;
nodes must be known and specified. More information on the IMa2 and IMa can be found in the user&lt;br /&gt;
manual here [http://lifesci.rutgers.edu/%7Eheylab/ProgramsandData/Programs/IMa2/Using_IMa2_8_24_2011.pdf].&lt;br /&gt;
Information about our installation can be found here [[IMA2]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;I-TASSER&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
I-TASSER is a platform for protein structure and function prediction. 3D models are built based on multiple-threading alignments by LOMETS and iterative template fragment assembly simulations; function insights are derived by matching the 3D models against the BioLiP protein function database. Details and submission scripts can be found at: http://wiki.csi.cuny.edu/cunyhpc/index.php/Applications_Environment/itasser&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== J ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;JULIA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
Julia is a high-level, high-performance dynamic programming language for technical computing, with syntax that is familiar to users of other technical computing environments. Julia is installed on Penzias.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== L ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;LAMARC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
LAMARC is a program which estimates population-genetic parameters such as population size, population growth rate,&lt;br /&gt;
recombination rate, and migration rates.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It approximates a summation over all possible genealogies that could explain&lt;br /&gt;
the observed sample, which may be sequence, SNP, microsatellite, or electrophoretic data.  LAMARC and its sister program&lt;br /&gt;
MIGRATE are successor programs to the older programs Coalesce, Fluctuate, and Recombine, which are no longer being&lt;br /&gt;
supported.  These programs are memory-intensive, but can run effectively on workstations. They are supported on a variety&lt;br /&gt;
of operating systems.  For more detail on LAMARC please visit the website here [http://evolution.genetics.washington.edu/lamarc/index.html],&lt;br /&gt;
read this paper [http://evolution.genetics.washington.edu/lamarc/download/bioinformatics2006-lamarc2.0.pdf], and look&lt;br /&gt;
at the documentation here [http://evolution.genetics.washington.edu/lamarc/documentation/index.html]. More information&lt;br /&gt;
about our installation can be found here [[LAMARC]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;LAMMPS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
LAMMPS is a classical molecular dynamics code that models an ensemble of particles in a liquid, solid, or gaseous state.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It can model atomic, polymeric, biological, metallic, granular, and coarse-grained systems using a variety of force fields and boundary conditions.  &lt;br /&gt;
LAMMPS runs efficiently on single-processor desktop or laptop machines, but is also designed for parallel computers, including clusters with and without GPUs. &lt;br /&gt;
It will run on any parallel machine that compiles C++ and supports the MPI message-passing library. This includes distributed- or shared-memory parallel &lt;br /&gt;
machines and Beowulf-style clusters. LAMMPS can model systems with only a few particles up to millions or billions. LAMMPS is a freely-available open-source &lt;br /&gt;
code, distributed under the terms of the GNU General Public License, which means you can use or modify the code however you wish.  LAMMPS is designed to be easy to &lt;br /&gt;
modify or extend with new capabilities, such as new force fields, atom types, boundary conditions, or diagnostics. A complete description of LAMMPS can be found &lt;br /&gt;
in its on-line manual here [http://lammps.sandia.gov/doc/Manual.html] or from the full PDF manual here [http://lammps.sandia.gov/doc/Manual.pdf]. Information&lt;br /&gt;
about our installation can be found here [[LAMMPS]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;LS-DYNA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From its early development in the 1970s, LS-DYNA has evolved into a general purpose material&lt;br /&gt;
stress, collision, and crash analysis program with many built-in material and structural element&lt;br /&gt;
models. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In recent years, the code has also been adapted for both OpenMP and MPI parallel execution&lt;br /&gt;
on a variety of platforms.  The most recent version, LS-DYNA 7.1.2, is installed on &lt;br /&gt;
ANDY at the CUNY HPC Center under an academic license held by the City College of New York.&lt;br /&gt;
The use of this license to do work that is commercial in any way is prohibited.&lt;br /&gt;
&lt;br /&gt;
Details on LS-DYNA&#039;s use, input deck construction, and execution options can be found in the LS-DYNA&lt;br /&gt;
manual here [http://ftp.lstc.com/user/manuals/ls-dyna_971_manual_k_rev1.pdf]. All files related&lt;br /&gt;
to the HPC Center installation of version 971 (executables and example inputs) are located in:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
/share/apps/lsdyna/default/[bin,examples]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[LSDYNA]].&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== M ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MAGMA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
MAGMA is a library similar to LAPACK but for hybrid architectures. MAGMA provides implementations for CUDA, Intel Xeon Phi, and OpenCL. &lt;br /&gt;
On CUNY HPCC systems, only the CUDA variant of MAGMA is installed; it is available on PENZIAS.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MATHEMATICA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
“Mathematica” is a fully integrated technical computing system that combines fast, high-precision numerical and symbolic computation with data visualization and programming capabilities. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Mathematica is currently installed on the CUNY HPC Center&#039;s ANDY cluster (andy.csi.cuny.edu) and the KARLE standalone server (karle.csi.cuny.edu). The basics of running Mathematica on CUNY HPC systems are presented here.  Additional information on how to use Mathematica can be found at http://www.wolfram.com/learningcenter/&lt;br /&gt;
&lt;br /&gt;
More information is available in this wiki, find it here [[MATHEMATICA]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MATLAB&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The MATLAB high-performance language for technical computing&lt;br /&gt;
integrates computation, visualization, and programming in an&lt;br /&gt;
easy-to-use environment where problems and solutions are expressed in&lt;br /&gt;
familiar mathematical notation.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Typical uses include:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Math and computation&lt;br /&gt;
&lt;br /&gt;
Algorithm development&lt;br /&gt;
&lt;br /&gt;
Data acquisition&lt;br /&gt;
&lt;br /&gt;
Modeling, simulation, and prototyping&lt;br /&gt;
&lt;br /&gt;
Data analysis, exploration, and visualization&lt;br /&gt;
&lt;br /&gt;
Scientific and engineering graphics&lt;br /&gt;
&lt;br /&gt;
Application development, including graphical user interface building&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[MATLAB]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MET (Model Evaluation Tools)&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MET was developed by the National Center for Atmospheric Research (NCAR) Developmental Testbed Center (DTC) through the generous support of the U.S. Air Force Weather Agency (AFWA) and the National Oceanic and Atmospheric Administration (NOAA).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;MET is designed to be a highly-configurable, state-of-the-art suite of verification tools. It was developed using output from the Weather Research and Forecasting (WRF) modeling system but may be applied to the output of other modeling systems as well.&lt;br /&gt;
&lt;br /&gt;
MET provides a variety of verification techniques, including:&lt;br /&gt;
&lt;br /&gt;
*Standard verification scores comparing gridded model data to point-based observations&lt;br /&gt;
*Standard verification scores comparing gridded model data to gridded observations&lt;br /&gt;
*Spatial verification methods comparing gridded model data to gridded observations using neighborhood, object-based, and intensity-scale decomposition approaches&lt;br /&gt;
*Probabilistic verification methods comparing gridded model data to point-based or gridded observations&lt;br /&gt;
&lt;br /&gt;
More information about use and set-up can be found here [[MET]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Migrate&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Migrate estimates population parameters, effective population sizes and migration rates&lt;br /&gt;
of n populations, using genetic data.  It uses a coalescent theory approach taking into&lt;br /&gt;
account the history of mutations and the uncertainty of the genealogy.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;The estimates of the parameter values are achieved by either a maximum likelihood (ML) approach or Bayesian&lt;br /&gt;
inference (BI).  Migrate&#039;s output is presented in a text file and in a PDF file. The PDF file&lt;br /&gt;
eventually will contain all possible analyses including histograms of posterior distributions.&lt;br /&gt;
More information about our installation can be found here [[MIGRATE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MPFR&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The MPFR library is a C library for multiple-precision floating-point computations with correct rounding. MPFR has been continuously supported by &lt;br /&gt;
INRIA; the current main authors come from the Caramel and AriC project-teams at Loria (Nancy, France) and LIP (Lyon, France), respectively; see &lt;br /&gt;
more on the credit page.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
MPFR is based on the GMP multiple-precision library. The main goal of MPFR is to provide a library for multiple-precision &lt;br /&gt;
floating-point computation which is both efficient and has well-defined semantics. It copies the good ideas from the ANSI/IEEE-754 standard for &lt;br /&gt;
double-precision floating-point arithmetic (53-bit significand). The library is installed on PENZIAS.&lt;br /&gt;
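MPFR itself is used from C. Purely as an illustration of what computing at a chosen precision with a well-defined rounding rule means, Python's standard decimal module can stand in (it works in base 10, whereas MPFR rounds correctly in binary):

```python
from decimal import Decimal, getcontext, ROUND_HALF_EVEN

# Illustration only: Python's decimal module demonstrates the idea of a
# chosen working precision plus a well-defined rounding rule.  MPFR gives
# the same guarantees for binary floating point from C.
getcontext().prec = 50                    # 50 significant digits
getcontext().rounding = ROUND_HALF_EVEN   # round-to-nearest-even, as in IEEE 754

sqrt2 = Decimal(2).sqrt()                 # correctly rounded to 50 digits
```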
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MRBAYES&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MrBayes is a program for the Bayesian estimation of phylogeny.  Bayesian inference of&lt;br /&gt;
phylogeny is based upon a quantity called the posterior probability distribution of trees,&lt;br /&gt;
which is the probability of a tree conditioned on certain observations.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The conditioning is&lt;br /&gt;
accomplished using Bayes&#039;s theorem. The posterior probability distribution of trees is&lt;br /&gt;
impossible to calculate analytically; instead, MrBayes uses a simulation technique called&lt;br /&gt;
Markov chain Monte Carlo (or MCMC) to approximate the posterior probabilities of trees.&lt;br /&gt;
More information about our installation can be found here [[MRBAYES]]&lt;br /&gt;
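The MCMC idea described above can be sketched with a generic Metropolis sampler. This is a minimal illustration of the technique in general, using an invented standard-normal target in place of MrBayes's posterior over trees; it is not how MrBayes is invoked:

```python
import math
import random

# Minimal Metropolis sampler: draws whose long-run frequencies approximate a
# target distribution known only up to a constant -- the reason MrBayes can
# estimate posterior probabilities it cannot compute analytically.  The
# standard-normal log-density below is an invented stand-in target.

def metropolis(log_density, start, steps, step_size=0.5, seed=42):
    rng = random.Random(seed)
    x = start
    samples = []
    for _ in range(steps):
        proposal = x + rng.uniform(-step_size, step_size)
        # Accept with probability min(1, p(proposal)/p(x)), computed in logs;
        # max(..., tiny) guards against log(0) on the open interval [0, 1).
        if math.log(max(rng.random(), 1e-300)) < log_density(proposal) - log_density(x):
            x = proposal
        samples.append(x)
    return samples

samples = metropolis(lambda v: -0.5 * v * v, start=0.0, steps=20000)
mean = sum(samples) / len(samples)   # should be near 0 for this target
```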
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;msABC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
msABC is a program for simulating various neutral evolutionary demographic scenarios&lt;br /&gt;
based on the software ms (Hudson 2002). msABC extends ms, calculating a multitude of&lt;br /&gt;
summary statistics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Therefore, msABC is suitable for performing the sampling step of an&lt;br /&gt;
Approximate Bayesian Computation analysis (ABC), under various neutral demographic&lt;br /&gt;
models. The main advantages of msABC are (i) use of various prior distributions, such as&lt;br /&gt;
uniform, Gaussian, log-normal, and gamma, (ii) implementation of a multitude of summary statistics&lt;br /&gt;
for one or more populations, (iii) efficient implementation, which allows the analysis of&lt;br /&gt;
hundreds of loci and chromosomes even on a single computer, (iv) extended flexibility, such&lt;br /&gt;
as simulation of loci of variable size and simulation of missing data.&lt;br /&gt;
More information about our installation can be found here [[msABC]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;MSMS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
MSMS is a tool to generate sequence samples under both neutral models and single locus selection models.&lt;br /&gt;
MSMS permits the full range of demographic models provided by its relative MS (Hudson, 2002).&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
In particular, it allows for multiple demes with arbitrary migration patterns, population growth and decay in each deme, and&lt;br /&gt;
for population splits and mergers. Selection (including dominance) can depend on the deme and also change&lt;br /&gt;
with time.&lt;br /&gt;
More information about our installation can be found here [[MSMS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== N ==&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;NAMD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NAMD is a parallel molecular dynamics code designed for high-performance simulation&lt;br /&gt;
of large biomolecular systems. [http://www.ks.uiuc.edu/Research/namd].&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The main server for molecular dynamics calculations is PENZIAS, which supports both GPU and non-GPU versions of NAMD.&lt;br /&gt;
However, the MPI-only (no GPU support) parallel version of NAMD is also installed on SALK and ANDY.&lt;br /&gt;
More information about our installation can be found here [[NAMD]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Network Simulator-2 (NS2)&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NS2 is a discrete event simulator targeted at networking research. NS2 provides&lt;br /&gt;
substantial support for simulation of TCP, routing, and multicast protocols over&lt;br /&gt;
wired and wireless (local and satellite) networks.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It is installed on BOB at the CUNY HPC Center. More detailed information is available [http://www.isi.edu/nsnam/ns/ here].&lt;br /&gt;
More information about our installation can be found here [[NS2]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;NWChem&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
NWChem is an ab initio computational chemistry software package which also includes molecular dynamics (MM, MD) and coupled quantum mechanics/molecular dynamics (QM-MD) functionality.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
NWChem has been developed by the Molecular Sciences Software group at the Department of Energy&#039;s EMSL. The software is available on PENZIAS and ANDY.&lt;br /&gt;
More information about our installation can be found here [[NWChem]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== O == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Octopus&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Octopus is a pseudopotential real-space package aimed at the simulation of the electron-ion dynamics of one-, two-, and three-dimensional finite systems subject to time-dependent electromagnetic fields.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The program is based on time-dependent density-functional theory (TDDFT) in the Kohn-Sham scheme. All quantities are expanded on a regular mesh in real space, and the simulations are performed in real time. The program has been successfully used to calculate linear and non-linear absorption spectra, harmonic spectra, laser-induced fragmentation, etc., of a variety of systems.&lt;br /&gt;
More information about our installation can be found here [[OCTOPUS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;OpenMM&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenMM is both a library and a stand-alone application which provides tools for modern molecular modeling and simulation. As a library it can be hooked into any code, allowing that code to do molecular modeling with minimal extra coding.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Moreover, OpenMM has a strong emphasis on hardware acceleration via GPU, thus providing not just a consistent API but also much greater performance than most comparable codes. OpenMM was developed as part of the Physics-Based Simulation project led by Prof. Pande.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;OpenFOAM&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenFOAM is first and foremost a library that users may incorporate into their own code(s). OpenFOAM is installed on PENZIAS.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
More information about our installation can be found here [[OpenFOAM]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;OpenSees&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
OpenSees, the Open System for Earthquake Engineering Simulation, is an object-oriented, open source software framework.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It allows users to create both serial and parallel finite element applications for simulating the response of structural and geotechnical systems subjected to earthquakes and other hazards. OpenSees is primarily written in C++ and uses several Fortran and C numerical libraries for linear equation solving and for material and element routines. The software is installed on PENZIAS.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ORCA&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ORCA is an electronic structure program capable of carrying out geometry optimizations and of predicting a large number of spectroscopic parameters at different levels of theory.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Besides Hartree-Fock theory, density functional theory (DFT), and semiempirical methods, high-level ab initio quantum chemical methods, based on configuration interaction and coupled cluster approaches, are included in ORCA to an increasing degree.&lt;br /&gt;
More information about our installation can be found here [[ORCA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== P == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;ParGAP&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
ParGAP is built on top of the GAP system. The latter is a system for computational discrete algebra, with particular emphasis on Computational Group Theory. GAP provides a programming language, a library of thousands of functions implementing algebraic algorithms written in the GAP language, as well as large data libraries of algebraic objects.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The ParGAP (Parallel GAP) package itself provides a way of writing parallel programs using the GAP language. Former names of the package were ParGAP/MPI and GAP/MPI; the word MPI refers to Message Passing Interface, a well-known standard for parallelism. ParGAP is based on the MPI standard, and this distribution includes a subset implementation of MPI, to provide a portable layer with a high level interface to BSD sockets.&lt;br /&gt;
More information about our installation can be found here [[ParGAP]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;POPABC&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PopABC is a computer package for estimating historical demographic parameters of closely related species/populations (e.g. population size, migration rate, mutation rate, recombination rate, splitting events) within an Isolation with Migration model.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
The software performs coalescent simulation in the framework of approximate Bayesian computation (ABC, Beaumont et al, 2002). PopABC can also be used to perform Bayesian model choice to discriminate between different demographic scenarios. The program can be used either for research or for education and teaching purposes. Further details and a manual can be found at the POPABC website here [http://code.google.com/p/popabc]&lt;br /&gt;
More information about our installation can be found here [[POPABC]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;PHOENICS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PHOENICS is an integrated Computational Fluid Dynamics (CFD) package for the preparation, simulation, and visualization of&lt;br /&gt;
processes involving fluid flow, heat or mass transfer, chemical reaction, and/or combustion in engineering equipment, building&lt;br /&gt;
design, and the environment.  More detail is available at the CHAM website, here http://www.cham.co.uk. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Although we expect most users to pre- and post-process their jobs on office-local clients, the CUNY HPC Center has installed&lt;br /&gt;
the Unix version of the &#039;&#039;entire&#039;&#039; PHOENICS package on ANDY.   PHOENICS is installed in /share/apps/phoenics/default where all&lt;br /&gt;
the standard PHOENICS directories are located (d_allpro, d_earth, d_enviro, d_photo, d_priv1, d_satell, etc.).  Of particular interest&lt;br /&gt;
on ANDY is the MPI parallel version of the &#039;earth&#039; executable &#039;parexe&#039; which makes full use of the parallel processing power of the &lt;br /&gt;
ANDY cluster for larger individual jobs.  While the parallel scaling properties of PHOENICS jobs will vary depending on the job size,&lt;br /&gt;
processor type, and the cluster interconnect, larger workloads will generally scale and run efficiently on 8 to 32 processors,&lt;br /&gt;
while smaller problems will scale efficiently only up to about 4 processors.  More detail on parallel PHOENICS is available at&lt;br /&gt;
http://www.cham.co.uk/products/parallel.php.   Aside from the tightly coupled MPI parallelism of &#039;parexe&#039;, users can run multiple&lt;br /&gt;
instances of the non-parallel modules on ANDY (including the serial &#039;earexe&#039; module) when a parametric approach can be used&lt;br /&gt;
to solve their problems.&lt;br /&gt;
More information about our installation can be found here [[PHOENICS]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;PHRAP-PHRED&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
PHRAP and PHRED are part of the DNA sequence analysis tool set that also includes the programs&lt;br /&gt;
CROSSMATCH and SWAT.  These tools are described in detail here [http://www.phrap.org/phredphrapconsed.html],&lt;br /&gt;
but a brief description of both, extracted from their manuals, follows.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
PHRED and PHRAP (along with CONSED) can be used for both small sequence assemblies and larger shotgun analyses. This makes the&lt;br /&gt;
tools a perhaps under-utilized set for smaller non-genomic groups.  Some variables may need to be adjusted,&lt;br /&gt;
particularly in CONSED, but researchers who have multiple sequences from a small locus can use the &lt;br /&gt;
suite, starting from their chromatogram files.  &lt;br /&gt;
More information about our installation can be found here [[PHRAP-PHRED]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;PyRAD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Reduced-representation genomic sequence data (e.g., RADseq, GBS, ddRAD) are commonly used to study population-level research questions and consequently most software packages for assembling or analyzing such data are designed for sequences with little variation across samples.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Phylogenetic analyses typically include species with deeper divergence times (more variable loci across samples) and thus a different approach to clustering and identifying orthologs will perform better. pyRAD is intended for use with any type of restriction-site associated DNA. It currently supports RAD, ddRAD, PE-ddRAD, GBS, PE-GBS, EzRAD, PE-EzRAD, 2B-RAD, nextRAD, and can be extended to other types.&lt;br /&gt;
More information about our installation can be found here [[PyRAD]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Python&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Python is a programming language that lets you work more quickly and integrate your systems more effectively. You can learn to use Python and see almost immediate gains in productivity and lower maintenance costs. [http://www.python.org/]&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
There are two supported versions installed on the Andy system: &lt;br /&gt;
&lt;br /&gt;
* Python 3.1.3 located under /share/apps/python/3.1.3/bin&lt;br /&gt;
* Python 2.7.3 located under /share/apps/epd/7.3-2/bin&lt;br /&gt;
&lt;br /&gt;
More information about our installation can be found here [[PYTHON]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Installing Python packages&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Users may install python packages/modules in their own space.  Many packages available in Python repositories can be installed easily with the pip package manager, which is available in any of the Anaconda and Miniconda builds.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Note that running pip without first loading a python module will install packages against the system python, which matches the interpreter on the login node only. The python interpreter available on all nodes (after loading the module) is installed under /share/usr/compilers/python. When installing packages in user space it is therefore important to follow the procedure outlined below. The examples demonstrate how a user can install the package &amp;quot;guppy&amp;quot; in their own space:&lt;br /&gt;
&lt;br /&gt;
For Python 2.7.13 in Anaconda build:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load python/2.7.13_anaconda&lt;br /&gt;
pip install guppy --user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For Python 3.6.0 in Anaconda build:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load python/3.6.0_anaconda&lt;br /&gt;
pip install guppy --user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For Python 2.7.13 in Miniconda:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load python/miniconda2&lt;br /&gt;
pip install guppy --user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For Python 3.6.0 in Miniconda 3:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load python/miniconda3&lt;br /&gt;
pip install guppy --user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check that the package is properly installed, type:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
pip list | grep guppy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
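&lt;br /&gt;
Beyond checking the pip list, one can confirm that the package actually imports with the loaded interpreter. An illustrative check using the &amp;quot;guppy&amp;quot; example above (any of the modules listed earlier can be substituted):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load python/2.7.13_anaconda&lt;br /&gt;
# exits silently on success; an ImportError means the package is not&lt;br /&gt;
# visible to the currently loaded interpreter&lt;br /&gt;
python -c &amp;quot;import guppy&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;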
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== Q == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;QIIME&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
QIIME (pronounced &amp;quot;chime&amp;quot;) stands for Quantitative Insights Into Microbial Ecology. QIIME is a pipeline application that uses numerous third-party applications.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
QIIME takes users from their raw sequencing output through initial analyses such as OTU picking, taxonomic assignment, and construction of phylogenetic trees from representative sequences of OTUs, and through downstream statistical analysis, visualization, and production of publication-quality graphics.&lt;br /&gt;
More information about our installation can be found here [[QIIME]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== R == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;R&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
R is a free software environment for statistical computing and graphics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:15px;&amp;quot; &amp;gt;General Notes&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The R language has become a de facto standard among statisticians for the development of statistical software, and is widely used for data analysis. R is available on the following HPCC servers: Karle, Penzias, Appel and Andy. Karle is the only machine where R can be used without submitting jobs to the SLURM manager; on all other systems users must submit their R jobs via the SLURM batch scheduler.&lt;br /&gt;
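&lt;br /&gt;
As a sketch, a minimal SLURM batch script for a serial R job might look like the following (the job name, module name and script file are illustrative placeholders; check &#039;&#039;module avail&#039;&#039; for the actual R module name on each system):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=r_job&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
&lt;br /&gt;
# load the R environment, then run the script non-interactively&lt;br /&gt;
module load r&lt;br /&gt;
Rscript myscript.R&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The script would be submitted with &#039;&#039;sbatch&#039;&#039; on any system other than Karle.&lt;br /&gt;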
More information about our installation can be found here [[R]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;RAXML&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Randomized Axelerated Maximum Likelihood (RAxML) is a program for sequential and parallel&lt;br /&gt;
maximum likelihood based inference of large phylogenetic trees.  It is a descendent of fastDNAml&lt;br /&gt;
which in turn was derived from Joe Felsentein’s DNAml which is part of the PHYLIP package.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
RAxML is installed at the CUNY HPC Center on ANDY, where multiple versions are available in both serial and MPI-parallel builds. The MPI-parallel version should be run on four or more cores and is also installed on Penzias. &lt;br /&gt;
More information about our installation can be found here [[RAXML]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== S == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;SAGE&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Sage can be used to study elementary and advanced, pure and applied mathematics.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
This includes a huge range of mathematics, including basic algebra, calculus, elementary to very&lt;br /&gt;
advanced number theory, cryptography, numerical computation, commutative algebra, group&lt;br /&gt;
theory, combinatorics, graph theory, exact linear algebra and much more.&lt;br /&gt;
More information about our installation can be found here [[SAGE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;SAMTOOLS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
SAMTOOLS provide various utilities for manipulating alignments in the SAM format, including sorting,&lt;br /&gt;
merging, indexing and generating alignments in a per-position format.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
SAM (Sequence Alignment/Map) is a generic format for storing large nucleotide sequence alignments.  SAM is a compact format&lt;br /&gt;
that aims to:&lt;br /&gt;
&lt;br /&gt;
:* Is flexible enough to store all the alignment information generated by various alignment programs;&lt;br /&gt;
:* Is simple enough to be easily generated by alignment programs or converted from existing formats;&lt;br /&gt;
:* Allows most operations on the alignment to work without loading the whole alignment into memory;&lt;br /&gt;
:* Allows the file to be indexed by genomic position to efficiently retrieve all reads aligning to a locus.&lt;br /&gt;
&lt;br /&gt;
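An illustrative sketch of a typical workflow (the file names are placeholders, and exact options may differ between SAMTOOLS releases):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# convert SAM to BAM, sort by genomic position, then index for fast retrieval&lt;br /&gt;
samtools view -bS input.sam &amp;gt; input.bam&lt;br /&gt;
samtools sort -o input.sorted.bam input.bam&lt;br /&gt;
samtools index input.sorted.bam&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;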
More information about our installation can be found here [[SAMTOOLS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;SAS&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
SAS (pronounced &amp;quot;sass&amp;quot;, originally Statistical Analysis System) is an integrated system of software products provided by SAS Institute Inc.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It enables the programmer to perform:&lt;br /&gt;
:* data entry, retrieval, management, and mining&lt;br /&gt;
:* report writing and graphics&lt;br /&gt;
:* statistical analysis&lt;br /&gt;
:* business planning, forecasting, and decision support&lt;br /&gt;
:* operations research and project management&lt;br /&gt;
:* quality improvement&lt;br /&gt;
:* applications development&lt;br /&gt;
:* data warehousing (extract, transform, load)&lt;br /&gt;
:* platform independent and remote computing&lt;br /&gt;
More information about our installation can be found here [[SAS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Stata/MP&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Stata is a complete, integrated statistical package that provides tools for data analysis, data management, and graphics. Stata/MP takes advantage of multiprocessor computers. The CUNY HPC Center is licensed to run Stata on up to 8 cores. &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Currently Stata/MP is available for users on Karle (karle.csi.cuny.edu). &lt;br /&gt;
More information about our installation can be found here [[STATA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Structurama&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Structurama is a program for inferring population structure from genetic data. The program assumes that the sampled loci&lt;br /&gt;
are in linkage equilibrium and that the allele frequencies for each population are drawn from a Dirichlet probability distribution. Two different models for population structure are implemented.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
First, Structurama offers the method of Pritchard et al. (2000) in which the number of populations is considered fixed. The program also allows the number of populations to be a random variable following a Dirichlet process prior (Pella and Masuda, 2006; Huelsenbeck and Andolfatto, 2007).&lt;br /&gt;
More information about our installation can be found here [[STRUCTURAMA]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Structure&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The program Structure is a free software package for using multi-locus genotype data to investigate&lt;br /&gt;
population structure.  Its uses include inferring the presence of distinct populations, assigning individuals&lt;br /&gt;
to populations, studying hybrid zones, identifying migrants and admixed individuals, and estimating&lt;br /&gt;
population allele frequencies in situations where many individuals are migrants or admixed.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;It can be applied to most of the commonly used genetic markers, including SNPs, microsatellites, RFLPs and AFLPs. More detailed information about Structure can be found at the web site here [http://pritch.bsd.uchicago.edu/structure.html]. Structure is installed on ANDY at the CUNY HPC Center.  Structure is a serial program. &lt;br /&gt;
More information about our installation can be found here [[STRUCTURE]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== T == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Thrust Library (CUDA)&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Thrust is a C++ template library for CUDA based on the Standard Template Library (STL). Thrust allows you&lt;br /&gt;
to implement high performance parallel applications with minimal programming effort through a high-level&lt;br /&gt;
interface that is fully interoperable with CUDA C.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Thrust is now integrated into the default CUDA distribution. The HPC Center&#039;s default&lt;br /&gt;
CUDA installation on PENZIAS therefore includes the&lt;br /&gt;
Thrust library. More information about our installation can be found here [[THRUST]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;TOPHAT&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
TOPHAT is a fast splice junction mapper for RNA-Seq reads. It aligns RNA-Seq reads to mammalian-sized&lt;br /&gt;
genomes using the ultra high-throughput short read aligner Bowtie, and then analyzes the mapping results&lt;br /&gt;
to identify splice junctions between exons.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
TOPHAT is part of a sequence alignment and analysis tool chain developed at Johns Hopkins, the University of California at Berkeley, and Harvard, and distributed through the Center for Bioinformatics and Computational Biology.&lt;br /&gt;
More information about our installation can be found here [[TOPHAT]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Trinity&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Trinity, developed at the Broad Institute and the Hebrew University of Jerusalem, represents a novel method for the efficient and robust de novo reconstruction of transcriptomes from RNA-seq data.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Trinity combines three independent software modules: Inchworm, Chrysalis, and Butterfly, applied sequentially to process large volumes of RNA-seq reads. Trinity partitions the sequence data into many individual de Bruijn graphs, each representing the transcriptional complexity at a given gene or locus, and then processes each graph independently to extract full-length splicing isoforms and to tease apart transcripts derived from paralogous genes.&lt;br /&gt;
More information about our installation can be found here [[TRINITY]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== U == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;USEARCH&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
USEARCH is a unique sequence analysis tool with thousands of users world-wide.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
USEARCH offers search and clustering algorithms that are often orders of magnitude faster than BLAST. &lt;br /&gt;
More information about our installation can be found here [[USEARCH]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== V == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;VELVET&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
Velvet is a set of algorithms for &#039;&#039;de novo&#039;&#039; short read assembly using de Bruijn graphs. It was developed at the European Bioinformatics Institute, Cambridge, UK. More information about our installation can be found here [[VELVET]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;VSEARCH&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
VSEARCH is an open-source alternative to USEARCH.  &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
VSEARCH stands for vectorized search, as the tool takes advantage of parallelism in the form of SIMD vectorization as well as multiple threads to perform accurate alignments at high speed. VSEARCH uses an optimal global aligner (full dynamic programming Needleman-Wunsch), in contrast to USEARCH which by default uses a heuristic seed and extend aligner. This usually results in more accurate alignments and overall improved sensitivity (recall) with VSEARCH, especially for alignments with gaps. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Additional details on VSEARCH can be found at: [https://github.com/torognes/vsearch this link]&lt;br /&gt;
&lt;br /&gt;
VSEARCH is installed on the Penzias HPC cluster. To start using VSEARCH, load the corresponding module first:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load vsearch  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
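&lt;br /&gt;
An illustrative invocation, clustering reads at 97% identity (the file names are placeholders; see the VSEARCH manual for the full option list):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# cluster sequences at 97% identity and write one centroid sequence per cluster&lt;br /&gt;
vsearch --cluster_fast reads.fasta --id 0.97 --centroids centroids.fasta&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;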
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;VMD&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
VMD is a molecular visualization program for displaying, animating, and analyzing large biomolecular systems using 3-D graphics and built-in scripting.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It was developed by The Theoretical and Computational Biophysics Group at the University of Illinois. It is documented on the [http://www.ks.uiuc.edu/Research/vmd/ TCB&#039;s homepage].&lt;br /&gt;
&lt;br /&gt;
VMD is installed on Karle. To use it from the command line, log in to Karle as usual and start VMD by typing &amp;quot;vmd&amp;quot; followed by return, or alternatively use the full path: &lt;br /&gt;
&amp;quot;/share/apps/vmd/default/bin/vmd&amp;quot;&lt;br /&gt;
&lt;br /&gt;
To use VMD in GUI mode, log in to Karle with the -X option (see [http://wiki.csi.cuny.edu/cunyhpc/index.php/Main_Page#X11_Forwarding_or_Tunneling this article] &lt;br /&gt;
for details) and start VMD as described above.&lt;br /&gt;
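&lt;br /&gt;
For example, a GUI session started from a local terminal might look like the following (the username is a placeholder):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# forward X11 so that the VMD GUI displays on the local machine&lt;br /&gt;
ssh -X username@karle.csi.cuny.edu&lt;br /&gt;
vmd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;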
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== W == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;WRF&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
The Weather Research and Forecasting (WRF) model is a specific computer program with dual use for both weather&lt;br /&gt;
forecasting and weather research.&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
It was created through a partnership that includes the National Oceanic and Atmospheric&lt;br /&gt;
Administration (NOAA), the National Center for Atmospheric Research (NCAR), and more than 150 other organizations&lt;br /&gt;
and universities in the United States and abroad. WRF is the latest numerical model and application to be adopted by NOAA&#039;s&lt;br /&gt;
National Weather Service as well as the U.S. military and private meteorological services. It is also being adopted by&lt;br /&gt;
government and private meteorological services worldwide. More information about our installation can be found here [[WRF]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
== X == &lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;p style=&amp;quot;font-size:17px;&amp;quot; &amp;gt;Xmgrace&amp;lt;/p&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Grace is a WYSIWYG 2D plotting tool for the X Window System and M*tif. Xmgrace is developed at the Plasma Laboratory, Weizmann Institute of Science. More information about its capabilities can be found at the web page http://plasma-gate.weizmann.ac.il/Grace/&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Grace is installed on Karle. To use it from the command line, log in to Karle as usual and start Grace by typing &amp;quot;xmgrace&amp;quot; followed by return, or alternatively use the full path: &amp;quot;/share/apps/xmgrace/default/grace/bin/xmgrace&amp;quot;&lt;br /&gt;
To use Grace in GUI mode, log in to Karle with the -X option (see [http://wiki.csi.cuny.edu/cunyhpc/index.php/Main_Page#X11_Forwarding_or_Tunneling this article] &lt;br /&gt;
for details) and start Xmgrace as described above.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt; &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Modules_and_Available_Third_Party_Software&amp;diff=133</id>
		<title>Modules and Available Third Party Software</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Modules_and_Available_Third_Party_Software&amp;diff=133"/>
		<updated>2022-10-27T19:48:03Z</updated>

		<summary type="html">&lt;p&gt;James: Text replacement - &amp;quot;[pP][bB][sS]&amp;quot; to &amp;quot;SLURM&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Modules and available third party software=&lt;br /&gt;
==Modules==&lt;br /&gt;
“Modules” makes it easier for users to run a standard or customized application and/or system environment. HPCC uses &#039;&#039;&#039;Lmod&#039;&#039;&#039; - an advanced module system that easily handles the MODULEPATH hierarchy problem common in UNIX-based &amp;quot;modules&amp;quot; implementations. Application packages can be loaded and unloaded cleanly through the module system using modulefiles, which includes easily adding directories to, or removing them from, the PATH environment variable. Modulefiles for library packages provide environment variables that specify where the library and header files can be found.  &lt;br /&gt;
&lt;br /&gt;
All the popular shells are supported: &#039;&#039;&#039;bash, ksh, csh, tcsh, zsh.&#039;&#039;&#039; Lmod is also available for perl and python. Modules is available on KARLE, PENZIAS, APPEL and SALK.&lt;br /&gt;
&lt;br /&gt;
===Modules - getting started ===&lt;br /&gt;
The basic module commands are listed below. Note that almost all applications have a default version and several other versions. The default version is marked with (D). For example:&lt;br /&gt;
&lt;br /&gt;
 python/2.7.13_anaconda       (D)&lt;br /&gt;
&lt;br /&gt;
The default version can be loaded via its short name; non-default version(s) require use of their full name. For example:&lt;br /&gt;
&lt;br /&gt;
 module load python&lt;br /&gt;
&lt;br /&gt;
will load the default 2.7.13_anaconda version of the python interpreter. To load the non-default 3.7.6 version, the user must type&lt;br /&gt;
&lt;br /&gt;
 module load python/3.7.6_anaconda &lt;br /&gt;
&lt;br /&gt;
The module load command can be used to load several application environments at once: &lt;br /&gt;
&lt;br /&gt;
 module load package1 package2 ...&lt;br /&gt;
&lt;br /&gt;
For documentation on “Modules”:&lt;br /&gt;
&lt;br /&gt;
 man module&lt;br /&gt;
&lt;br /&gt;
For help enter:&lt;br /&gt;
&lt;br /&gt;
 module help&lt;br /&gt;
&lt;br /&gt;
To see a list of currently loaded “Modules” run:&lt;br /&gt;
&lt;br /&gt;
 module list&lt;br /&gt;
&lt;br /&gt;
To see a complete list of all modules available on the system run:&lt;br /&gt;
&lt;br /&gt;
 module avail&lt;br /&gt;
&lt;br /&gt;
To show the content of a module, enter:&lt;br /&gt;
&lt;br /&gt;
 module show &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;module_name&amp;gt;&amp;lt;/font color&amp;gt; &lt;br /&gt;
&lt;br /&gt;
To swap one application for another (e.g., the default versions of the GNU and Intel compilers):&lt;br /&gt;
&lt;br /&gt;
 module swap gcc intel&lt;br /&gt;
&lt;br /&gt;
To go back to an initial set of modules:&lt;br /&gt;
&lt;br /&gt;
 module reset&lt;br /&gt;
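&lt;br /&gt;
Putting a few of these together, a typical interactive sequence might be:&lt;br /&gt;
&lt;br /&gt;
 module avail&lt;br /&gt;
 module load python&lt;br /&gt;
 module list&lt;br /&gt;
 module unload python&lt;br /&gt;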
&lt;br /&gt;
===Using LMOD commands===&lt;br /&gt;
To get a list of all modules available&lt;br /&gt;
 module spider&lt;br /&gt;
&lt;br /&gt;
To get information about a specific module&lt;br /&gt;
 module spider python&lt;br /&gt;
&lt;br /&gt;
===Modules for the advanced user===&lt;br /&gt;
A “Modules” example for advanced users who need to change their environment.&lt;br /&gt;
&lt;br /&gt;
The HPC Center supports a number of different compilers, libraries, and utilities.  In addition, at any given time different versions of the software may be installed.  “Modules” is employed to define a default environment, which generally satisfies the needs of most users and eliminates the need for the user to create the environment.  From time to time, a user may have a specific requirement that differs from the default environment.  &lt;br /&gt;
&lt;br /&gt;
In this example, the user wishes to use a version of the NETCDF library on the HPC Center’s Cray Xe6 (SALK) that is compiled with the Portland Group, Inc. (PGI) compiler instead of the installed default version, which was compiled with the Cray compiler.  The approach to do this is:&lt;br /&gt;
&lt;br /&gt;
:•	Run &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;module list&amp;lt;/font&amp;gt;&#039;&#039;&#039; to see what modules are loaded by default.&lt;br /&gt;
:•	Determine what modules should be unloaded.&lt;br /&gt;
:•	Determine what modules should be loaded.&lt;br /&gt;
:•	Add the needed modules, i.e., &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;module load&amp;lt;/font&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The first step, see what modules are loaded, is shown below.&lt;br /&gt;
&lt;br /&gt;
 &#039;&#039;&#039;user@SALK:~&amp;gt; module list&#039;&#039;&#039;&lt;br /&gt;
 Currently Loaded Modulefiles:&lt;br /&gt;
 &lt;br /&gt;
   1) modules/3.2.6.6&lt;br /&gt;
   2) nodestat/2.2-1.0400.31264.2.5.gem&lt;br /&gt;
   3) sdb/1.0-1.0400.32124.7.19.gem&lt;br /&gt;
   4) MySQL/5.0.64-1.0000.5053.22.1&lt;br /&gt;
   5) lustre-cray_gem_s/1.8.6_2.6.32.45_0.3.2_1.0400.6453.5.1-1.0400.32127.1.90&lt;br /&gt;
   6) udreg/2.3.1-1.0400.4264.3.1.gem&lt;br /&gt;
   7) ugni/2.3-1.0400.4374.4.88.gem&lt;br /&gt;
   8) gni-headers/2.1-1.0400.4351.3.1.gem&lt;br /&gt;
   9) dmapp/3.2.1-1.0400.4255.2.159.gem&lt;br /&gt;
  10) xpmem/0.1-2.0400.31280.3.1.gem&lt;br /&gt;
  11) hss-llm/6.0.0&lt;br /&gt;
  12) Base-opts/1.0.2-1.0400.31284.2.2.gem&lt;br /&gt;
  13) xtpe-network-gemini&lt;br /&gt;
  14) cce/8.0.7&lt;br /&gt;
  15) acml/5.1.0&lt;br /&gt;
  16) xt-libsci/11.1.00&lt;br /&gt;
  17) pmi/3.0.0-1.0000.8661.28.2807.gem&lt;br /&gt;
  18) rca/1.0.0-2.0400.31553.3.58.gem&lt;br /&gt;
  19) xt-asyncpe/5.13&lt;br /&gt;
  20) atp/1.5.1&lt;br /&gt;
  21) PrgEnv-cray/4.0.46&lt;br /&gt;
  22) xtpe-mc8&lt;br /&gt;
  23) cray-mpich2/5.5.3&lt;br /&gt;
  24) SLURM/11.3.0.121723&lt;br /&gt;
&lt;br /&gt;
From the list, we see that the Cray Programming Environment (&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;PrgEnv-cray/4.0.46&amp;lt;/font&amp;gt;&#039;&#039;&#039;) and the Cray Compiler environment are loaded (&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;cce/8.0.7&amp;lt;/font&amp;gt;&#039;&#039;&#039;) by default.  To unload these Cray modules and load in the PGI equivalents we need to know the names of the PGI modules. The &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;module avail&amp;lt;/font&amp;gt;&#039;&#039;&#039; command shows this.&lt;br /&gt;
&lt;br /&gt;
  &#039;&#039;&#039;user@SALK:~&amp;gt; module avail&#039;&#039;&#039;&lt;br /&gt;
  •&lt;br /&gt;
  •&lt;br /&gt;
  •&lt;br /&gt;
&lt;br /&gt;
We see that there are several versions of the PGI compilers and two versions of the PGI Programming Environment installed.  For this example, we are interested in loading PGI&#039;s 12.10 release (not the default, which is &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;pgi/12.6&amp;lt;/font&amp;gt;&#039;&#039;&#039;) and the most current release of the PGI programming environment (&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;PrgEnv-pgi/4.0.46&amp;lt;/font&amp;gt;&#039;&#039;&#039;), which is the default.  &lt;br /&gt;
&lt;br /&gt;
The following module commands will unload the Cray defaults, load the PGI modules mentioned, and load version 4.2.0 of NETCDF compiled with the PGI compilers.&lt;br /&gt;
&lt;br /&gt;
 user@SALK:~&amp;gt; module unload PrgEnv-cray&lt;br /&gt;
 user@SALK:~&amp;gt; module load PrgEnv-pgi&lt;br /&gt;
 user@SALK:~&amp;gt; module unload pgi&lt;br /&gt;
 user@SALK:~&amp;gt; module load pgi/12.10&lt;br /&gt;
 user@SALK:~&amp;gt; &lt;br /&gt;
 user@SALK:~&amp;gt; module load netcdf/4.2.0&lt;br /&gt;
 user@SALK:~&amp;gt;&lt;br /&gt;
 user@SALK:~&amp;gt; cc -V&lt;br /&gt;
 &lt;br /&gt;
 /opt/cray/xt-asyncpe/5.13/bin/cc: INFO: Compiling with CRAYPE_COMPILE_TARGET=native.&lt;br /&gt;
 &lt;br /&gt;
 pgcc 12.10-0 64-bit target on x86-64 Linux &lt;br /&gt;
 Copyright 1989-2000, The Portland Group, Inc.  All Rights Reserved.&lt;br /&gt;
 Copyright 2000-2012, STMicroelectronics, Inc.  All Rights Reserved.&lt;br /&gt;
&lt;br /&gt;
A few additional comments: &lt;br /&gt;
&lt;br /&gt;
:•	The first three commands do not include version numbers and will therefore load or unload the current default versions. &lt;br /&gt;
:•	In the third line, we unload the default version of the PGI compiler (version 12.6), which is loaded with the rest of the PGI Programming Environment in the second line. We then load the non-default and more recent release from PGI, version 12.10 in the fourth line. &lt;br /&gt;
:•	Later, we load NETCDF version 4.2.0 which, because we have already loaded the PGI Programming Environment, will load the version of NETCDF 4.2.0 compiled with the PGI compilers. &lt;br /&gt;
:•	Finally, we check which compiler the Cray &amp;quot;cc&amp;quot; compiler wrapper actually invokes after this sequence of module commands by entering &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;cc -V&amp;lt;/font&amp;gt;&#039;&#039;&#039;.&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=OpenMP,_OpenMP_SMP-Parallel_Program_Compilation,_and_SLURM_Job_Submission&amp;diff=132</id>
		<title>OpenMP, OpenMP SMP-Parallel Program Compilation, and SLURM Job Submission</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=OpenMP,_OpenMP_SMP-Parallel_Program_Compilation,_and_SLURM_Job_Submission&amp;diff=132"/>
		<updated>2022-10-27T19:46:09Z</updated>

		<summary type="html">&lt;p&gt;James: Created page with &amp;quot;== OpenMP, OpenMP SMP-Parallel Program Compilation, and SLURM Job Submission == All the compute nodes on all the the systems at the CUNY HPC Center include at least 2 sockets and multiple cores.  Some have 8 cores (ZEUS, BOB, ANDY), and some have 16 (SALK).  These multicore, SMP compute nodes offer the CUNY HPC Center user community the option of creating parallel programs using the OpenMP Symmetric Multi-Processing (SMP) parallel programming model.  The SMP parallel pro...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== OpenMP, OpenMP SMP-Parallel Program Compilation, and SLURM Job Submission ==&lt;br /&gt;
All the compute nodes on all the systems at the CUNY HPC Center include at least 2 sockets and multiple&lt;br /&gt;
cores.  Some have 8 cores (ZEUS, BOB, ANDY), and some have 16 (SALK).  These multicore, SMP compute nodes offer&lt;br /&gt;
the CUNY HPC Center user community the option of creating parallel programs using the OpenMP Symmetric&lt;br /&gt;
Multi-Processing (SMP) parallel programming model.  SMP parallel programming with the OpenMP model&lt;br /&gt;
(and other SMP models) is the original parallel processing model, because the earliest parallel HPC systems were&lt;br /&gt;
built only with shared memories.  The Cray X-MP (circa 1982) was among the first systems in this class.  Shared-memory,&lt;br /&gt;
multi-socket, multi-core designs are now typical of even today&#039;s desktop and portable PC and Mac systems.&lt;br /&gt;
On the CUNY HPC Center systems, each compute node is similarly a shared-memory, symmetric multi-processing&lt;br /&gt;
system that can compute in parallel using the OpenMP shared-memory model.&lt;br /&gt;
&lt;br /&gt;
In the SMP model, multiple processors work simultaneously within a single program&#039;s memory space (image).&lt;br /&gt;
This eliminates the need to copy data from one program (process) image to another (required by MPI) and&lt;br /&gt;
simplifies the parallel run-time environment significantly.  As such, writing parallel programs to the OpenMP&lt;br /&gt;
standard is generally easier and requires many fewer lines of code.  However, the size of the problem that can&lt;br /&gt;
be addressed using OpenMP is limited by the amount of memory on a single compute node, and, similarly,&lt;br /&gt;
the parallel performance improvement to be gained is limited by the number of processors (cores) within that&lt;br /&gt;
single node.&lt;br /&gt;
&lt;br /&gt;
As of Q4 2012 at CUNY&#039;s HPC Center, OpenMP applications can run with a maximum of 16 cores (this is on&lt;br /&gt;
SALK, the Cray XE6m system).  Most of the HPC Center&#039;s other systems are limited to 8 core OpenMP parallelism.&lt;br /&gt;
&lt;br /&gt;
Here, a simple OpenMP parallel version of the standard C &amp;quot;Hello, World!&amp;quot; program is set to run on 8 cores:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;omp.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#define NPROCS 8&lt;br /&gt;
&lt;br /&gt;
int main (int argc, char *argv[]) {&lt;br /&gt;
&lt;br /&gt;
   int nthreads, num_threads=NPROCS, tid;&lt;br /&gt;
&lt;br /&gt;
  /* Set the number of threads */&lt;br /&gt;
  omp_set_num_threads(num_threads);&lt;br /&gt;
&lt;br /&gt;
  /* Fork a team of threads giving them their own copies of variables */&lt;br /&gt;
#pragma omp parallel private(nthreads, tid)&lt;br /&gt;
  {&lt;br /&gt;
&lt;br /&gt;
  /* Each thread obtains its thread number */&lt;br /&gt;
  tid = omp_get_thread_num();&lt;br /&gt;
&lt;br /&gt;
  /* Each thread executes this print */&lt;br /&gt;
  printf(&amp;quot;Hello World from thread = %d\n&amp;quot;, tid);&lt;br /&gt;
&lt;br /&gt;
  /* Only the master thread does this */&lt;br /&gt;
  if (tid == 0)&lt;br /&gt;
     {&lt;br /&gt;
      nthreads = omp_get_num_threads();&lt;br /&gt;
      printf(&amp;quot;Total number of threads = %d\n&amp;quot;, nthreads);&lt;br /&gt;
     }&lt;br /&gt;
&lt;br /&gt;
   }  /* All threads join master thread and disband */&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An excellent and comprehensive tutorial on OpenMP with examples can be found at the &lt;br /&gt;
Lawrence Livermore National Lab web site: (https://computing.llnl.gov/tutorials/openMP)&lt;br /&gt;
&lt;br /&gt;
=== Compiling OpenMP Programs Using the Intel Compiler Suite ===&lt;br /&gt;
&lt;br /&gt;
The Intel C compiler requires its &#039;-openmp&#039; option, as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
icc  -o hello_omp.exe -openmp hello_omp.c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When run, the program above produces the following output:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ./hello_omp.exe&lt;br /&gt;
Hello World from thread = 0&lt;br /&gt;
Total number of threads = 8&lt;br /&gt;
Hello World from thread = 1&lt;br /&gt;
Hello World from thread = 2 &lt;br /&gt;
Hello World from thread = 6&lt;br /&gt;
Hello World from thread = 4&lt;br /&gt;
Hello World from thread = 3&lt;br /&gt;
Hello World from thread = 5&lt;br /&gt;
Hello World from thread = 7&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
OpenMP is supported in Intel&#039;s C, C++, and Fortran compilers; as such, a Fortran version of&lt;br /&gt;
the program above could be used to produce similar results.  An important feature of OpenMP&lt;br /&gt;
threads is that they are logical entities that are not by default locked to physical processors.  The code&lt;br /&gt;
above requesting 8 threads would run and produce similar results on a compute node with only&lt;br /&gt;
2 or 4 processors, or even 1 processor.  In these cases, the program would simply take more wall-clock&lt;br /&gt;
time to complete.&lt;br /&gt;
&lt;br /&gt;
When more threads are requested than the physical number of processors present on the motherboard,&lt;br /&gt;
they simply compete for access to the actual number of physical cores available.  Under such circumstances,&lt;br /&gt;
maximum program speed-ups are limited to the number of unshared physical processors (cores) available to&lt;br /&gt;
the OpenMP job, less the overhead required to start OpenMP (this ignores Intel&#039;s &#039;hyperthreading&#039;, which allows&lt;br /&gt;
two threads to share sub-resources not in simultaneous use within a single processor).&lt;br /&gt;
&lt;br /&gt;
=== Compiling OpenMP Programs Using the PGI Compiler Suite ===&lt;br /&gt;
&lt;br /&gt;
The PGI C compiler requires its &#039;-mp&#039; option for OpenMP programs, as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
pgcc  -o hello_omp.exe -mp hello_omp.c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When run this PGI executable will produce the &#039;same&#039; output as shown above, although the order of the print&lt;br /&gt;
statements cannot be predicted and will not necessarily be the same over repeated runs.&lt;br /&gt;
&lt;br /&gt;
OpenMP is supported in PGI&#039;s C, C++, and Fortran compilers; therefore a Fortran version &lt;br /&gt;
of the program above could be used to produce similar results.&lt;br /&gt;
&lt;br /&gt;
=== Compiling OpenMP Programs Using the Cray Compiler Suite ===&lt;br /&gt;
&lt;br /&gt;
The Cray C compiler requires its &#039;-h omp&#039; option for OpenMP programs, as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cc  -o hello_omp.exe -h omp hello_omp.c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The program produces the same output, and again the order of the print statements&lt;br /&gt;
cannot be predicted and will not necessarily be the same over repeated runs.&lt;br /&gt;
&lt;br /&gt;
OpenMP is supported in Cray&#039;s C, C++, and Fortran compilers; therefore a Fortran version &lt;br /&gt;
of the program above could be used to produce similar results.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039;  As discussed above in the section on serial program compilation, on the Cray the &#039;cc&#039;,&lt;br /&gt;
&#039;ftn&#039;, or &#039;CC&#039; compiler wrappers end up being used (with their compiler-specific OpenMP&lt;br /&gt;
flags) for each specific compiler suite once the appropriate programming environment module&lt;br /&gt;
has been loaded.&lt;br /&gt;
&lt;br /&gt;
=== Compiling OpenMP Programs Using the GNU Compiler Suite ===&lt;br /&gt;
&lt;br /&gt;
The GNU C compiler requires its &#039;-fopenmp&#039; option for OpenMP programs, as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gcc  -o hello_omp.exe -fopenmp hello_omp.c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The program produces the same output, and again the order of the print statements&lt;br /&gt;
cannot be predicted and will not necessarily be the same over repeated runs.&lt;br /&gt;
&lt;br /&gt;
OpenMP is supported in GNU&#039;s C, C++, and Fortran compilers; therefore a Fortran&lt;br /&gt;
version of the program above could be used to produce similar results.&lt;br /&gt;
&lt;br /&gt;
=== Submitting an OpenMP Program to the SLURM Batch Queueing System ===&lt;br /&gt;
&lt;br /&gt;
All non-trivial jobs (development or production, parallel or serial) must be&lt;br /&gt;
submitted to HPC Center system &#039;&#039;compute nodes&#039;&#039; from each system&#039;s &#039;&#039;head&#039;&#039;&lt;br /&gt;
or &#039;&#039;login node&#039;&#039; using a SLURM script.  Jobs run interactively on system head&lt;br /&gt;
nodes that place a significant and sustained load on the head node will&lt;br /&gt;
be terminated.  Details on the use of SLURM are presented later in this document;&lt;br /&gt;
however, here we present a basic SLURM script (&#039;my_ompjob&#039;) that can be&lt;br /&gt;
used to submit any OpenMP SMP program for batch processing on one of&lt;br /&gt;
the CUNY HPC Center compute nodes. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition=production&lt;br /&gt;
#SBATCH --job-name=openMP_job&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --cpus-per-task=8&lt;br /&gt;
#SBATCH --export=ALL&lt;br /&gt;
&lt;br /&gt;
# SLURM starts the job in the directory it was submitted from,&lt;br /&gt;
# which is also recorded in the SLURM_SUBMIT_DIR variable&lt;br /&gt;
&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# The SLURM_JOB_NODELIST variable contains the compute nodes assigned&lt;br /&gt;
# to the job by SLURM.  The next line will print them.&lt;br /&gt;
&lt;br /&gt;
echo $SLURM_JOB_NODELIST&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# It is possible to set the number of threads to be used in&lt;br /&gt;
# an OpenMP program using the environment variable OMP_NUM_THREADS.&lt;br /&gt;
# This setting is not used here because the number of threads (8)&lt;br /&gt;
# was fixed inside the program itself in our example code.&lt;br /&gt;
# export OMP_NUM_THREADS=8&lt;br /&gt;
&lt;br /&gt;
./hello_omp.exe&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When submitted with &#039;sbatch my_ompjob&#039; a job ID XXXX is returned, and the output&lt;br /&gt;
will be written to the file &#039;slurm-XXXX.out&#039;, where XXXX is the job ID, unless&lt;br /&gt;
otherwise redirected with the &#039;--output&#039; option.&lt;br /&gt;
&lt;br /&gt;
The key lines in the script are &#039;--nodes=1&#039; and &#039;--cpus-per-task=8&#039;.  The first restricts the job&lt;br /&gt;
to a single compute node; the second assigns (8) cores to the one task that runs there.  SLURM must allocate&lt;br /&gt;
these (8) cores on a single node, because all the cores of a single task always land on the same node and&lt;br /&gt;
are to be used in concert by our OpenMP executable, hello_omp.exe.&lt;br /&gt;
&lt;br /&gt;
Because the (8) cores must reside on one node, this job could&lt;br /&gt;
never run on a system with only 4 cores per compute node, and on those with only 8 cores&lt;br /&gt;
per node SLURM would have to find a node with no other jobs running on it.  This is exactly&lt;br /&gt;
what we want for an OpenMP job: a one-to-one mapping of physically free cores to the OpenMP&lt;br /&gt;
threads requested, with no other jobs scheduled by SLURM (or outside of SLURM&#039;s purview) running and&lt;br /&gt;
competing for those 8 cores.&lt;br /&gt;
&lt;br /&gt;
Placement on a node with as many free physical cores as OpenMP threads is optimal for&lt;br /&gt;
OpenMP jobs, because each processor assigned to an OpenMP job works within that single&lt;br /&gt;
program&#039;s memory space or image.  If the processors assigned by SLURM were on another&lt;br /&gt;
compute node, they would not be usable; if they were assigned to another job on the same&lt;br /&gt;
compute node, they would not be fully available to the OpenMP program and would delay its&lt;br /&gt;
completion.&lt;br /&gt;
&lt;br /&gt;
Here, the selection of 8 cores consumes all the cores available on a single compute node on&lt;br /&gt;
either BOB or ANDY.  This forces SLURM to find and allocate an entire compute node to the OpenMP&lt;br /&gt;
job.  In this case, the OpenMP job also has all of the memory the compute node offers at its disposal,&lt;br /&gt;
since no other jobs will be assigned to that node by SLURM.  If fewer cores were requested (say 4), SLURM&lt;br /&gt;
could place another job on the same BOB or ANDY compute node using as many as (4) cores.  That&lt;br /&gt;
job would compete for memory resources proportionally, but would have its own cores.  SLURM offers&lt;br /&gt;
the &#039;--exclusive&#039; option to force exclusive placement even if the job uses fewer than all the cores on&lt;br /&gt;
the physical node.  One might wish to do this to run a single-core job and have it use all the memory&lt;br /&gt;
on the compute node.&lt;br /&gt;
&lt;br /&gt;
One thing that should be kept in mind when defining SLURM resource requirements and in submitting&lt;br /&gt;
any SLURM script is that jobs with resource requests that are impossible to fulfill on the system where the&lt;br /&gt;
job is submitted will be &#039;&#039;&#039;queued forever and never run&#039;&#039;&#039;.  In our case here, we must know that the&lt;br /&gt;
system that we are submitting this job to has at least 8 processors (cores) available on a single&lt;br /&gt;
physical compute node.  At the HPC Center this job would run on either BOB, ANDY or SALK, but would&lt;br /&gt;
be queued indefinitely on any system that has fewer than 8 cores per physical node. This resource&lt;br /&gt;
mapping requirement applies to any resource that you might request in your SLURM script, not just cores.&lt;br /&gt;
Resource definition and mapping is discussed in greater detail in the SLURM section later in this document.&lt;br /&gt;
&lt;br /&gt;
Note that on SALK, the Cray XE6m system, the SLURM script requires the use of Cray&#039;s compute-node&lt;br /&gt;
job-launch command &#039;aprun&#039;, as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition=production&lt;br /&gt;
#SBATCH --job-name=openMP_job&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --cpus-per-task=16&lt;br /&gt;
#SBATCH --mem=32768M&lt;br /&gt;
#SBATCH --output=openMP_job.out&lt;br /&gt;
#SBATCH --export=ALL&lt;br /&gt;
&lt;br /&gt;
# SLURM starts the job in the directory it was submitted from,&lt;br /&gt;
# which is also recorded in the SLURM_SUBMIT_DIR variable&lt;br /&gt;
&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# The SLURM_JOB_NODELIST variable contains the compute nodes assigned&lt;br /&gt;
# to the job by SLURM.  The next line will print them.&lt;br /&gt;
&lt;br /&gt;
echo $SLURM_JOB_NODELIST&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# It is possible to set the number of threads to be used in&lt;br /&gt;
# an OpenMP program using the environment variable OMP_NUM_THREADS.&lt;br /&gt;
# This setting is not used here because the number of threads&lt;br /&gt;
# was fixed inside the program itself in our example code.&lt;br /&gt;
# export OMP_NUM_THREADS=16&lt;br /&gt;
&lt;br /&gt;
aprun -n 1 -d 16 ./hello_omp.exe&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here, &#039;aprun&#039; requests that one process be allocated to a compute node (&#039;-n 1&#039;) and that it be given&lt;br /&gt;
all 16 cores available on a single SALK compute node (&#039;-d 16&#039;).  Because the production queue on SALK allows&lt;br /&gt;
no jobs requesting fewer than 16 cores, the &#039;--cpus-per-task&#039; request was also raised to 16.  The define in the original C&lt;br /&gt;
source code should also be changed to set the number of OpenMP threads to 16 so that no&lt;br /&gt;
allocated cores are wasted on the compute node, as in:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#define NPROCS 16&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Applications_Environment/ADCIRC&amp;diff=131</id>
		<title>Applications Environment/ADCIRC</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Applications_Environment/ADCIRC&amp;diff=131"/>
		<updated>2022-10-27T19:46:03Z</updated>

		<summary type="html">&lt;p&gt;James: Text replacement - &amp;quot;[pP][bB][sS]&amp;quot; to &amp;quot;SLURM&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= ADCIRC =&lt;br /&gt;
ADCIRC is a system of programs for solving time-dependent, free-surface circulation and transport problems in&lt;br /&gt;
two and three dimensions.  These programs utilize the finite element method in space allowing the use of highly flexible,&lt;br /&gt;
unstructured grids. The ADCIRC distribution includes and integrates the METIS tool for partitioning unstructured meshes across processors.&lt;br /&gt;
In addition, ADCIRC includes a distribution of SWAN to which it can be coupled to add a shore wave simulation model.&lt;br /&gt;
&lt;br /&gt;
Typical ADCIRC applications have included: (i) modeling tides and wind driven circulation, (ii) analysis of hurricane storm surge&lt;br /&gt;
and flooding, (iii) dredging feasibility and material disposal studies, (iv) larval transport studies, (v) near shore marine&lt;br /&gt;
operations.  For more detail on using ADCIRC, please visit the ADCIRC website here [http://adcirc.org/index.html] and read&lt;br /&gt;
the ADCIRC manual [http://adcirc.org/documentv49/ADCIRC_title_page.html].  Details on using SWAN with ADCIRC&lt;br /&gt;
can be found here [http://www.caseydietrich.com/swanadcirc] and at the SWAN web site [http://swanmodel.sourceforge.net].&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center has installed version 50.79 on SALK (the Cray) and ANDY (the SGI) for general academic use.  ADCIRC&lt;br /&gt;
can be run in serial or MPI-parallel mode on either system.  ADCIRC has demonstrated good scaling properties up to 512&lt;br /&gt;
cores on SALK and 64 cores on ANDY.  A step-by-step walk through of running an ADCIRC test case in both serial and parallel&lt;br /&gt;
mode follows.&lt;br /&gt;
&lt;br /&gt;
==== Serial Execution ====&lt;br /&gt;
&lt;br /&gt;
Create a directory where all the files needed to run the serial ADCIRC job will be kept.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
salk$ mkdir test_sadcirc&lt;br /&gt;
salk$ cd test_sadcirc&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy the Shinnecock Inlet example from the ADCIRC installation tree and unzip it.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
salk$ cp /share/apps/adcirc/default/testcase/serial_shinnecock_inlet.zip ./&lt;br /&gt;
salk$ unzip ./serial_shinnecock_inlet.zip &lt;br /&gt;
Archive:  ./serial_shinnecock_inlet.zip&lt;br /&gt;
  inflating: serial_shinnecock_inlet/fort.14  &lt;br /&gt;
  inflating: serial_shinnecock_inlet/fort.15  &lt;br /&gt;
  inflating: serial_shinnecock_inlet/fort.16  &lt;br /&gt;
  inflating: serial_shinnecock_inlet/fort.63  &lt;br /&gt;
  inflating: serial_shinnecock_inlet/fort.64  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Change into the unpacked subdirectory.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
salk$ cd serial_shinnecock_inlet/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There you should find the following files:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
salk$ ls&lt;br /&gt;
fort.14  fort.15  fort.16  fort.63  fort.64&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Next, create a SLURM script with the following lines in it to be used to submit the serial&lt;br /&gt;
ADCIRC job to the Cray (SALK) SLURM queues.  Note that on SALK running a serial job&lt;br /&gt;
requires allocating (and wasting most of) 16 processors because fractional compute&lt;br /&gt;
nodes cannot be allocated on SALK.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition=production&lt;br /&gt;
#SBATCH --job-name=SADCIRC.test&lt;br /&gt;
#SBATCH --ntasks=16&lt;br /&gt;
#SBATCH --mem-per-cpu=2048M&lt;br /&gt;
#SBATCH --output=sadcirc.out&lt;br /&gt;
#SBATCH --export=ALL&lt;br /&gt;
&lt;br /&gt;
# Find out name of master execution host (compute node)&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
echo -n &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; SLURM Master compute node is: &amp;quot;&lt;br /&gt;
hostname&lt;br /&gt;
&lt;br /&gt;
# Change to the working directory (recorded in SLURM_SUBMIT_DIR)&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; Begin ADCIRC Serial Run ...&amp;quot;&lt;br /&gt;
aprun -n 1 /share/apps/adcirc/default/bin/adcirc&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; End   ADCIRC Serial Run ...&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And finally to submit the serial job to the SLURM queue enter:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
salk$ sbatch sadcirc.job&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Parallel Execution ====&lt;br /&gt;
&lt;br /&gt;
The steps required to run ADCIRC in parallel include some additional mesh partitioning&lt;br /&gt;
and decomposition steps based on the number of processors planned for the job.  As before,&lt;br /&gt;
create a directory where all the files needed for the job will be kept:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
salk$ mkdir test_padcirc&lt;br /&gt;
salk$ cd test_padcirc&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Again, copy the Shinnecock Inlet example from the ADCIRC installation tree and unzip it.  The&lt;br /&gt;
starting point for the serial and parallel tests is the same, but for the parallel case the serial&lt;br /&gt;
data set used above is partitioned and decomposed for the parallel run.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
salk$ cp /share/apps/adcirc/default/testcase/serial_shinnecock_inlet.zip ./&lt;br /&gt;
salk$ unzip ./serial_shinnecock_inlet.zip &lt;br /&gt;
Archive:  ./serial_shinnecock_inlet.zip&lt;br /&gt;
  inflating: serial_shinnecock_inlet/fort.14  &lt;br /&gt;
  inflating: serial_shinnecock_inlet/fort.15  &lt;br /&gt;
  inflating: serial_shinnecock_inlet/fort.16  &lt;br /&gt;
  inflating: serial_shinnecock_inlet/fort.63  &lt;br /&gt;
  inflating: serial_shinnecock_inlet/fort.64  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Rename and change into the directory you just unpacked:&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
salk$ mv  serial_shinnecock_inlet  parallel_shinnecock_inlet&lt;br /&gt;
salk$ cd parallel_shinnecock_inlet/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now we need to run the ADCIRC preparation program &#039;adcprep&#039; to partition the serial domain&lt;br /&gt;
and decompose the problem:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
salk$ /share/apps/adcirc/default/bin/adcprep &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When prompted, enter 8 for number of processors to be used in our parallel example here:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
  *****************************************&lt;br /&gt;
  ADCPREP Fortran90 Version 2.3  10/18/2006&lt;br /&gt;
  Serial version of ADCIRC Pre-processor   &lt;br /&gt;
  *****************************************&lt;br /&gt;
  &lt;br /&gt;
 Input number of processors for parallel ADCIRC run:&lt;br /&gt;
8&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Next, enter 1 to complete partitioning the domain for 8 processors using METIS:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 #-------------------------------------------------------&lt;br /&gt;
   Preparing input files for subdomains.&lt;br /&gt;
   Select number or action:&lt;br /&gt;
     1. partmesh&lt;br /&gt;
      - partition mesh using metis ( perform this first)&lt;br /&gt;
 &lt;br /&gt;
     2. prepall&lt;br /&gt;
      - Full pre-process using default names (i.e., fort.14)&lt;br /&gt;
&lt;br /&gt;
      ...&lt;br /&gt;
&lt;br /&gt;
 #-------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
 calling: prepinput&lt;br /&gt;
&lt;br /&gt;
 use_default =  F&lt;br /&gt;
 partition =  T&lt;br /&gt;
 prep_all  =  F&lt;br /&gt;
 prep_15   =  F&lt;br /&gt;
 prep_13   =  F&lt;br /&gt;
 hot_local  =  F&lt;br /&gt;
 hot_global  =  F&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Next, provide the name of the unpartitioned grid file unzipped from the serial test case, fort.14:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Enter the name of the ADCIRC UNIT 14 (Grid) file:&lt;br /&gt;
fort.14&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will generate some additional output to your terminal and complete the mesh partition step.&lt;br /&gt;
&lt;br /&gt;
You must then run &#039;adcprep&#039; again to decompose the problem.  When prompted, enter 8 for the number&lt;br /&gt;
of processors as before, but this time enter 2 to decompose the problem.  When this preparation&lt;br /&gt;
step completes you will find the following files and directories in your working directory:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
salk$ ls&lt;br /&gt;
fort.14  fort.15  fort.16  fort.63  fort.64  fort.80  metis_graph.txt  partmesh.txt&lt;br /&gt;
PE0000  PE0001  PE0002  PE0003  PE0004  PE0005  PE0006  PE0007&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The 8 subdirectories created in the second &#039;adcprep&#039; run contain the partitioned and decomposed&lt;br /&gt;
problem that each MPI processor (8 in this case) will work on. &lt;br /&gt;
&lt;br /&gt;
Copy the parallel ADCIRC binary to the working directory.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
salk$ cp /share/apps/adcirc/default/bin/padcirc ./&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At this point you&#039;ll have all the files needed to run the parallel job. The&lt;br /&gt;
files and directories created and required for this 8 core parallel run are&lt;br /&gt;
shown here:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# ls &lt;br /&gt;
padcirc  fort.14  fort.15  fort.16  fort.80  metis_graph.txt  partmesh.txt &lt;br /&gt;
PE0000/  PE0001/  PE0002/  PE0003/  PE0004/  PE0005/  PE0006/  PE0007/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Create a SLURM script with the following lines in it to be used to submit the parallel&lt;br /&gt;
ADCIRC job to the Cray (SALK) SLURM queues:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH -J PADCIRC.test&lt;br /&gt;
#SBATCH --ntasks 8&lt;br /&gt;
#SBATCH --mem 2048mb&lt;br /&gt;
#SBATCH -o padcirc.out&lt;br /&gt;
&lt;br /&gt;
# Find out name of master execution host (compute node)&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
echo -n &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; SLURM Master compute node is: &amp;quot;&lt;br /&gt;
hostname&lt;br /&gt;
&lt;br /&gt;
# Change to working directory&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; Begin PADCIRC MPI Parallel Run ...&amp;quot;&lt;br /&gt;
aprun -n 8 /share/apps/adcirc/default/bin/padcirc&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; End   PADCIRC MPI Parallel Run ...&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Finally, to submit the parallel job to the SLURM queue, enter:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
salk$ sbatch padcirc.job&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center has also built and provided a parallel-coupled version &lt;br /&gt;
of ADCIRC and SWAN to include surface wave effects in the simulation. This&lt;br /&gt;
executable is called &#039;padcswan&#039; and can be run with largely the same preparation&lt;br /&gt;
steps and the same SLURM script shown above for &#039;padcirc&#039;.  Details on the minor&lt;br /&gt;
differences and additional input files required are available at the SWAN websites&lt;br /&gt;
given above.&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Running_Jobs&amp;diff=130</id>
		<title>Running Jobs</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Running_Jobs&amp;diff=130"/>
		<updated>2022-10-27T19:44:56Z</updated>

		<summary type="html">&lt;p&gt;James: Text replacement - &amp;quot;[pP][bB][sS]&amp;quot; to &amp;quot;SLURM&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
==Running jobs==&lt;br /&gt;
In this section, we discuss the process for running jobs on an HPC system.  Typically the process involves the following:&lt;br /&gt;
&lt;br /&gt;
:•	Having input files within your &amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&#039;&#039;&#039;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&#039;&#039;&#039;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt; directory on the HPC system you wish to use.&lt;br /&gt;
:•	Setting up the environment for the job. &lt;br /&gt;
:•	Creating a job submit script that identifies the input files, the application program you wish to use, the compute resources needed to execute the job, and information on where you wish to write your output files.&lt;br /&gt;
:•	Submitting the job script.&lt;br /&gt;
:•	Saving output to the DSMS.&lt;br /&gt;
&lt;br /&gt;
These steps are explained below.&lt;br /&gt;
&lt;br /&gt;
===Input file on &#039;&#039;&#039;/scratch&#039;&#039;&#039;===&lt;br /&gt;
The general case is that you will have input files that have data on which you wish to operate.  To compute or work on these files, they must be stored within the &amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&#039;&#039;&#039;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&#039;&#039;&#039;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt; directory of the HPC system you wish to use.  These files can come from any of the following sources:&lt;br /&gt;
:•	You can create them using a text editor. Note that Microsoft Word is not a suitable text editor for this purpose; use a plain Linux text editor such as Vi/Vim, pico, or nano. &lt;br /&gt;
:•	You can copy them from your directory in the DSMS.&lt;br /&gt;
:•	You can copy input files from other places (such as your local computer, the web, etc...)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SLURM&#039;&#039;&#039; is an open-source scheduler and batch system implemented at the HPCC.&lt;br /&gt;
Currently SLURM is used only for Penzias’ job management, but its use will be&lt;br /&gt;
expanded to other servers in the future.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SLURM commands:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
SLURM commands resemble the commands used in the Portable Batch System (PBS). The&lt;br /&gt;
table below compares the most common SLURM and PBS Pro commands. &lt;br /&gt;
&lt;br /&gt;
[[Image:SLURM.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A few examples follow:&lt;br /&gt;
&lt;br /&gt;
If the files are in &amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&#039;&#039;&#039;/global/u&#039;&#039;&#039;&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 cd /scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&lt;br /&gt;
 mkdir &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;job_name&amp;gt;&amp;lt;/font color&amp;gt; &amp;amp;&amp;amp; cd &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;job_name&amp;gt;&amp;lt;/font color&amp;gt;&lt;br /&gt;
 cp /global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;myTask&amp;gt;&amp;lt;/font color&amp;gt;/a.out ./&lt;br /&gt;
 cp /global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;myTask&amp;gt;&amp;lt;/font color&amp;gt;/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;mydatafile&amp;gt;&amp;lt;/font color&amp;gt; ./&lt;br /&gt;
&lt;br /&gt;
If the files are in SR (cunyZone):&lt;br /&gt;
&lt;br /&gt;
 cd /scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&lt;br /&gt;
 mkdir &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;job_name&amp;gt;&amp;lt;/font color&amp;gt; &amp;amp;&amp;amp; cd &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;job_name&amp;gt;&amp;lt;/font color&amp;gt;&lt;br /&gt;
 iget &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myTask&amp;lt;/font color&amp;gt;/a.out ./&lt;br /&gt;
 iget &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myTask&amp;lt;/font color&amp;gt;/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;mydatafile&amp;gt;&amp;lt;/font color&amp;gt; ./&lt;br /&gt;
&lt;br /&gt;
===Set up job environment===&lt;br /&gt;
Users must load the proper environment before starting any job. The loaded environment will be automatically exported to the compute nodes at execution time. Users must use modules to load the environment. For example, to&lt;br /&gt;
load the environment for the default version of GROMACS, one must type:&lt;br /&gt;
&lt;br /&gt;
 module load gromacs&lt;br /&gt;
&lt;br /&gt;
The list of available modules can be seen with the command&lt;br /&gt;
 &lt;br /&gt;
 module avail&lt;br /&gt;
&lt;br /&gt;
The list of loaded modules can be seen with the command&lt;br /&gt;
&lt;br /&gt;
 module list&lt;br /&gt;
&lt;br /&gt;
More information about modules is provided in &amp;quot;Modules and available third party software&amp;quot; section below.&lt;br /&gt;
 &lt;br /&gt;
===Running jobs on HPC systems running SLURM scheduler ===&lt;br /&gt;
To be able to schedule your job for execution and to actually run your job on one or more compute nodes, SLURM  needs to be instructed about your job’s parameters. These instructions are typically stored in a “job submit script”. In this section, we describe the information that needs to be included in a job submit script. The submit script typically includes &lt;br /&gt;
:•	job name &lt;br /&gt;
:•	queue name&lt;br /&gt;
:•	what compute resources (number of nodes, number of cores and the amount of memory, the amount of local scratch disk storage (applies to Andy, Herbert, and Penzias), and the number of GPUs) or other resources a job will need  &lt;br /&gt;
:•	packing option&lt;br /&gt;
:•	actual commands that need to be executed (the binary to run, input/output redirection, etc.).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A pro forma job submit script is provided below.&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --partition &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;queue_name&amp;gt;&amp;lt;/font color&amp;gt;&lt;br /&gt;
 #SBATCH -J &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;job_name&amp;gt;&amp;lt;/font color&amp;gt;&lt;br /&gt;
 #SBATCH --mem &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;????&amp;gt;&amp;lt;/font color&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 # change to the working directory &lt;br /&gt;
 cd $SLURM_SUBMIT_DIR&lt;br /&gt;
 &lt;br /&gt;
 echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; Begin &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;job_name&amp;gt;&amp;lt;/font color&amp;gt;&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 # actual binary (with IO redirections) and required input &lt;br /&gt;
 # parameters is called in the next line&lt;br /&gt;
 mpirun -np &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;cpus&amp;gt;&amp;lt;/font color&amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;Program Name&amp;gt; &amp;lt;input_text_file&amp;gt;&amp;lt;/font color&amp;gt; &amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;output_file_name&amp;gt;&amp;lt;/font color&amp;gt; 2&amp;gt;&amp;amp;1&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note:	The &amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&#039;&#039;&#039;#SBATCH&#039;&#039;&#039;&amp;lt;/font&amp;gt; string must precede every SLURM parameter.&lt;br /&gt;
A &amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&#039;&#039;&#039;#&#039;&#039;&#039;&amp;lt;/font&amp;gt; symbol at the beginning of any other line designates a comment line, which is ignored by SLURM.&lt;br /&gt;
&lt;br /&gt;
Explanation of SLURM attributes and parameters:&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;--partition &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;queue_name&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; Available main queue is “production” unless otherwise instructed.&lt;br /&gt;
::•	“production” is the normal queue for processing your work on Penzias.&lt;br /&gt;
::•	“development” is used when you are testing an application.  Jobs submitted to this queue can not request more than 8 cores or use more than 1 hour of total CPU time.  If the job exceeds these parameters, it will be automatically killed. “Development” queue has higher priority and thus jobs in this queue have shorter wait time. &lt;br /&gt;
::•	“interactive” is used for quick interactive tests. Jobs submitted into this queue run in an interactive terminal session on one of the compute nodes. They can not use more than 4 cores or use more than a total of 15 minutes of compute time.&lt;br /&gt;
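As a rough illustration of the limits above, a small helper can pick a suitable queue from a job&#039;s core count and wall time. The function name and decision logic are illustrative only; the thresholds are the ones listed above.&lt;br /&gt;

```shell
# Pick a partition from the per-queue limits listed above:
#   interactive: at most 4 cores and 15 minutes
#   development: at most 8 cores and 1 hour
#   production:  everything else
pick_partition() {  # args: cores minutes
  if [ "$1" -le 4 ] && [ "$2" -le 15 ]; then
    echo interactive
  elif [ "$1" -le 8 ] && [ "$2" -le 60 ]; then
    echo development
  else
    echo production
  fi
}

pick_partition 16 120   # a 16-core, 2-hour job belongs in production
```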
&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;-J &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;job_name&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; The user must assign a name to each job they run.  Names can be up to 15 alphanumeric characters in length.&lt;br /&gt;
 &lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;--ntasks=&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;cpus&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;  The number of cpus (or cores) that the user wants to use.&lt;br /&gt;
&lt;br /&gt;
::•	Note:  SLURM refers to “cores” as “cpus”; currently the HPCC clusters map one thread to one core. &lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;--mem &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;mem&amp;gt; &amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;  This parameter is required. It specifies how much memory is needed per job. &lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;--gres &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;gpu:2&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;  The number of graphics processing units that the user wants to use on a node (This parameter is only available on PENZIAS). &lt;br /&gt;
 gpu:2 denotes a request for 2 GPUs. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Special note for MPI users&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The way parameters are defined can significantly affect the run time of a job.  For example, assume you need to run a job that requires 64 cores.  This can be scheduled in a number of different ways.  For example, &lt;br /&gt;
&lt;br /&gt;
 #SBATCH --nodes 8 &lt;br /&gt;
 #SBATCH --ntasks 64&lt;br /&gt;
&lt;br /&gt;
will place the job as 8 chunks of 8 tasks, one chunk per node, on any nodes that have 8 cpus available.  While this may minimize communications overhead in your MPI job, SLURM will not schedule this job until 8 nodes each with 8 free cpus become available.  Consequently, the job may wait longer in the input queue before going into execution.&lt;br /&gt;
&lt;br /&gt;
 #SBATCH --nodes 32&lt;br /&gt;
 #SBATCH --ntasks-per-node 2&lt;br /&gt;
&lt;br /&gt;
will place 32 chunks of 2 cores each. There will possibly be some nodes with 4 free chunks (and 8 cores) and there may be nodes with only 1 free chunk (and 2 cores). In this case, the job ends up more sparsely distributed across the system, and hence the total averaged latency may be larger than in the case with &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;nodes 8, ntasks 64&amp;lt;/font&amp;gt;&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;mpirun -np &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;total tasks or total cpus&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;.  This script line is only to be used for MPI jobs and defines the total number of cores required for the parallel MPI job.&lt;br /&gt;
&lt;br /&gt;
Table 2 below shows the maximum values of the various SLURM parameters by system.  Request only the resources you need, as requesting maximal resources will delay your job.&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
====Serial Jobs====&lt;br /&gt;
For serial jobs,&#039;&#039;&#039; &amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt; --nodes 1&amp;lt;/font&amp;gt;&#039;&#039;&#039; and &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt; --ntasks 1 &amp;lt;/font&amp;gt;&#039;&#039;&#039; should be used.&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #&lt;br /&gt;
 # Typical job script to run a serial job in the production queue&lt;br /&gt;
 #&lt;br /&gt;
 #SBATCH --partition production&lt;br /&gt;
 #SBATCH -J &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;job_name&amp;gt;&amp;lt;/font color&amp;gt;&lt;br /&gt;
 #SBATCH --nodes 1&lt;br /&gt;
 #SBATCH --ntasks 1&lt;br /&gt;
 &lt;br /&gt;
 # Change to working directory&lt;br /&gt;
 cd $SLURM_SUBMIT_DIR&lt;br /&gt;
 &lt;br /&gt;
 # Run my serial job&lt;br /&gt;
 &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;/path/to/your_binary&amp;gt;&amp;lt;/font color&amp;gt; &amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;my_output&amp;gt;&amp;lt;/font color&amp;gt; 2&amp;gt;&amp;amp;1&lt;br /&gt;
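The trailing redirection in the script above merges the program&#039;s standard output and standard error into a single file. A quick standalone demonstration:&lt;br /&gt;

```shell
# "> file 2>&1" sends stdout to the file, then points stderr (fd 2)
# at the same place as stdout (fd 1), so both streams land in one log.
out=$(mktemp)
{ echo "normal output"; echo "error output" 1>&2; } > "$out" 2>&1
cat "$out"
```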
&lt;br /&gt;
====OpenMP and Threaded Parallel jobs====&lt;br /&gt;
OpenMP jobs can only run on a single virtual node.  Therefore, for OpenMP jobs, &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;--nodes 1&amp;lt;/font&amp;gt;&#039;&#039;&#039; and &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;--ntasks 1&amp;lt;/font&amp;gt;&#039;&#039;&#039; should be used;  &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;-c&amp;lt;/font&amp;gt;&#039;&#039;&#039; should be set to &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;[2, 3, 4,… n]&amp;lt;/font&amp;gt;&#039;&#039;&#039; where &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;n&amp;lt;/font&amp;gt;&#039;&#039;&#039; must be less than or equal to the number of cores on a virtual compute node.&lt;br /&gt;
&lt;br /&gt;
Typically, OpenMP jobs will use the &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;mem&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; parameter and may request up to all the available memory on a node. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH -J Job_name&lt;br /&gt;
 #SBATCH --partition production&lt;br /&gt;
 #SBATCH --ntasks 1&lt;br /&gt;
 #SBATCH --nodes 1&lt;br /&gt;
 #SBATCH --mem=&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;mem&amp;gt;&amp;lt;/font color&amp;gt;&lt;br /&gt;
 #SBATCH -c 4&lt;br /&gt;
 &lt;br /&gt;
 # Set OMP_NUM_THREADS to the same value as -c&lt;br /&gt;
 # with a fallback in case it isn&#039;t set.&lt;br /&gt;
 # SLURM_CPUS_PER_TASK is set to the value of -c, but only if -c is explicitly set&lt;br /&gt;
 &lt;br /&gt;
 omp_threads=1&lt;br /&gt;
 if [ -n &amp;quot;$SLURM_CPUS_PER_TASK&amp;quot; ]; then&lt;br /&gt;
 	 omp_threads=$SLURM_CPUS_PER_TASK&lt;br /&gt;
 else&lt;br /&gt;
 	 omp_threads=1&lt;br /&gt;
 fi&lt;br /&gt;
 export OMP_NUM_THREADS=$omp_threads&lt;br /&gt;
   &lt;br /&gt;
 &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;/path/to/your_binary&amp;gt;&amp;lt;/font color&amp;gt; &amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;my_output&amp;gt;&amp;lt;/font color&amp;gt; 2&amp;gt;&amp;amp;1&lt;br /&gt;
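The fallback logic in the script above can be seen in isolation: the thread count follows SLURM_CPUS_PER_TASK when SLURM has set it (i.e. when -c was given) and defaults to 1 otherwise. The helper name &#039;pick_threads&#039; is illustrative only.&lt;br /&gt;

```shell
# Standalone version of the OMP_NUM_THREADS fallback used above.
pick_threads() {
  if [ -n "$SLURM_CPUS_PER_TASK" ]; then
    echo "$SLURM_CPUS_PER_TASK"
  else
    echo 1
  fi
}

SLURM_CPUS_PER_TASK=4
pick_threads    # prints 4, matching "#SBATCH -c 4"
unset SLURM_CPUS_PER_TASK
pick_threads    # prints 1, the safe default
```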
&lt;br /&gt;
====MPI Distributed Memory Parallel Jobs====&lt;br /&gt;
For an MPI job, &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;--nodes&amp;lt;/font&amp;gt;&#039;&#039;&#039; and &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;--ntasks&amp;lt;/font&amp;gt;&#039;&#039;&#039; can be one or more, with the &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;mpirun -np&amp;lt;/font&amp;gt;&#039;&#039;&#039; count &amp;gt;= 1.&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #&lt;br /&gt;
 # Typical job script to run a distributed memory MPI job in the production queue requesting 16 cores on 16 nodes.&lt;br /&gt;
 #&lt;br /&gt;
 #SBATCH --partition production&lt;br /&gt;
 #SBATCH -J &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;job_name&amp;gt;&amp;lt;/font color&amp;gt;&lt;br /&gt;
 #SBATCH --ntasks 16&lt;br /&gt;
 #SBATCH --nodes 16&lt;br /&gt;
 #SBATCH --mem=&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;mem&amp;gt;&amp;lt;/font color&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
 # Change to working directory&lt;br /&gt;
 cd $SLURM_SUBMIT_DIR&lt;br /&gt;
 &lt;br /&gt;
 # Run my 16-core MPI job&lt;br /&gt;
 &lt;br /&gt;
 mpirun -np 16 &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;/path/to/your_binary&amp;gt;&amp;lt;/font color&amp;gt; &amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;my_output&amp;gt;&amp;lt;/font color&amp;gt; 2&amp;gt;&amp;amp;1&lt;br /&gt;
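Inside a running job, SLURM exports the requested task count as the environment variable SLURM_NTASKS, so the mpirun line can be kept in sync with the #SBATCH header instead of hard-coding 16. A sketch (the variable is set by hand here, since no job is running):&lt;br /&gt;

```shell
# SLURM_NTASKS mirrors "#SBATCH --ntasks" inside a real job;
# it is set manually here only for demonstration.
SLURM_NTASKS=16
echo "mpirun -np $SLURM_NTASKS ./your_binary"
```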
&lt;br /&gt;
&lt;br /&gt;
====GPU-Accelerated Data Parallel Jobs====&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #&lt;br /&gt;
 # Typical job script to run a 1 CPU, 1 GPU batch job in the production queue&lt;br /&gt;
 # &lt;br /&gt;
 #SBATCH --partition production&lt;br /&gt;
 #SBATCH -J &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;job_name&amp;gt;&amp;lt;/font color&amp;gt;&lt;br /&gt;
 #SBATCH --ntasks 1&lt;br /&gt;
 #SBATCH --gres gpu:1&lt;br /&gt;
 #SBATCH --mem &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;mem&amp;gt;&amp;lt;/font color&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 # Find out which compute node the job is using&lt;br /&gt;
 hostname&lt;br /&gt;
 &lt;br /&gt;
 # Change to working directory&lt;br /&gt;
 cd $SLURM_SUBMIT_DIR&lt;br /&gt;
 &lt;br /&gt;
 # Run my GPU job on a single node using 1 CPU and 1 GPU.&lt;br /&gt;
 &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;/path/to/your_binary&amp;gt;&amp;lt;/font color&amp;gt; &amp;gt;  &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;my_output&amp;gt;&amp;lt;/font color&amp;gt; 2&amp;gt;&amp;amp;1&lt;br /&gt;
&lt;br /&gt;
====Submitting jobs for execution====&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NOTE:&#039;&#039;&#039;	We do not allow users to run any production job on the login-node.  It is acceptable to do short compiles on the login node, but all other jobs must be run by handing off the “job submit script” to SLURM running on the head-node.  SLURM will then allocate resources on the compute-nodes for execution of the job. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The command to submit your “job submit script” (&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;job.script&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;) is:&lt;br /&gt;
 sbatch &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;job.script&amp;gt;&amp;lt;/font color&amp;gt;&lt;br /&gt;
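On success, sbatch prints a line of the form &amp;quot;Submitted batch job &amp;lt;id&amp;gt;&amp;quot;; capturing the job id makes later status checks and cancellation easy. The id below is made up, and the live submission is shown only in comments:&lt;br /&gt;

```shell
# sbatch normally prints: Submitted batch job <id>
# A canned line stands in for a live submission (123456 is a made-up id).
submit_output="Submitted batch job 123456"
jobid=${submit_output##* }   # keep the last whitespace-separated field
echo "job id: $jobid"

# On a live system:
#   jobid=$(sbatch --parsable job.script)   # --parsable prints just the id
#   squeue -j "$jobid"                      # check status
#   scancel "$jobid"                        # cancel if needed
```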
&lt;br /&gt;
===Running jobs on shared memory systems===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;This section is in development&amp;lt;/font color&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Saving output files and clean-up===&lt;br /&gt;
Normally you expect certain data in the output files as a result of a job. There are a number of things that you may want to do with these files:&lt;br /&gt;
&lt;br /&gt;
:•	Check the content of these outputs and discard them. In that case you can simply delete all unwanted data with the &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;rm&amp;lt;/font&amp;gt;&#039;&#039;&#039; command.  &lt;br /&gt;
:•	Move output files to your local workstation. You can use &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;scp&amp;lt;/font&amp;gt;&#039;&#039;&#039; for small amounts of data and/or &#039;&#039;&#039;GlobusOnline&#039;&#039;&#039; for larger data transfers. &lt;br /&gt;
:•	You may also want to store the outputs at the HPCC resources. In this case you can either move your outputs to &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u&amp;lt;/font&amp;gt;&#039;&#039;&#039; or to &#039;&#039;&#039;SR1&#039;&#039;&#039; storage resource. &lt;br /&gt;
&lt;br /&gt;
In all cases your /scratch/&amp;lt;userid&amp;gt; directory is expected to be left empty. Output files stored under the &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;job_name&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; directory can be purged at any moment (except for files that are currently being used by active jobs).&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=129</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=129"/>
		<updated>2022-10-27T19:44:16Z</updated>

		<summary type="html">&lt;p&gt;James: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:CUNY-HPCC-HEADER-LOGO.jpg]]&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
[[Image:hpcc-panorama3.png]]&lt;br /&gt;
&lt;br /&gt;
The City University of New York (CUNY) High Performance Computing Center (HPCC) is located on the&lt;br /&gt;
campus of the College of Staten Island, 2800 Victory Boulevard, Staten Island, New York 10314.  The HPCC&#039;s&lt;br /&gt;
goals are to: &lt;br /&gt;
&lt;br /&gt;
:*Support the scientific computing needs of CUNY faculty, their collaborators at other universities, their public and private sector partners, and CUNY students and research staff;&lt;br /&gt;
:*Create opportunities for the CUNY research community to develop new partnerships with the government and private sectors; and&lt;br /&gt;
:*Leverage the HPC Center capabilities to acquire additional research resources for its faculty and graduate students in existing and major new programs.&lt;br /&gt;
&lt;br /&gt;
==Organization of systems and data storage (architecture)==&lt;br /&gt;
&lt;br /&gt;
All user data and project data are kept on the Data Storage and Management System (&#039;&#039;&#039;DSMS&#039;&#039;&#039;), which is mounted only on the login node(s) of all servers. Consequently, no jobs can be started directly from &#039;&#039;&#039;DSMS&#039;&#039;&#039; storage.  Instead, all jobs must be submitted from a separate (fast but small) &#039;&#039;&#039;/scratch&#039;&#039;&#039; file system mounted on all computational nodes and on all login nodes.  As the name suggests, the &#039;&#039;&#039;/scratch&#039;&#039;&#039; file system is not a home directory for accounts, nor can it be used for long-term data preservation.  Users must use the &amp;quot;staging&amp;quot; procedure described below to ensure preservation of their data, codes and parameter files. The figure below is a schematic of the environment.   &lt;br /&gt;
&lt;br /&gt;
Upon registering with the HPCC, every user gets 2 directories:&lt;br /&gt;
&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – this is temporary workspace on the HPC systems&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – space for “home directory”, i.e., storage space on the DSMS for program, scripts, and data&lt;br /&gt;
&lt;br /&gt;
:•	In some instances a user will also have use of disk space on the DSMS in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;projectid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; (IRods). &lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;/global/u/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory has a quota (see below for details), while &#039;&#039;&#039;/scratch/&amp;lt;userid&amp;gt;&#039;&#039;&#039; does not. However, the &#039;&#039;&#039;/scratch&#039;&#039;&#039; space is cleaned up following the rules described below. There are no guarantees of any kind that files in &#039;&#039;&#039;/scratch&#039;&#039;&#039; will be preserved through hardware crashes or clean-ups.  Access to all HPCC resources is provided through a bastion host called &#039;&#039;&#039;chizen&#039;&#039;&#039;.  The Data Transfer Node, called &#039;&#039;&#039;Cea&#039;&#039;&#039;, allows file transfer from/to remote sites directly to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; or to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;.&lt;br /&gt;
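The staging pattern described above looks like this in practice. The sketch below uses temporary directories so it can run anywhere; on the real systems the two roots would be /scratch/&amp;lt;userid&amp;gt; and /global/u/&amp;lt;userid&amp;gt;, and the &amp;quot;job&amp;quot; is a stand-in sed command.&lt;br /&gt;

```shell
# Stage in from the DSMS home, run the job in scratch, stage results out.
# mktemp stands in for the real scratch and home roots.
scratch=$(mktemp -d)
home_dir=$(mktemp -d)
echo "raw data" > "$home_dir/input.dat"

mkdir -p "$scratch/myjob"
cd "$scratch/myjob"
cp "$home_dir/input.dat" .                      # stage in
sed 's/raw/processed/' input.dat > output.dat   # stand-in for the real job
cp output.dat "$home_dir/"                      # stage out
cat "$home_dir/output.dat"
```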
&lt;br /&gt;
                                   [[Image:HPCC_Chart.png]]&lt;br /&gt;
&lt;br /&gt;
==HPC systems==&lt;br /&gt;
&lt;br /&gt;
The HPC Center operates a variety of architectures in order to support complex and demanding workflows.  The deployed systems include distributed memory computers (also referred to as “clusters”), symmetric multiprocessors (also referred to as SMP), and shared memory machines (also referred to as NUMA machines).  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Computational Systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SMP&#039;&#039;&#039; servers have several processors (working under a single operating system) which &amp;quot;share everything&amp;quot;: all cpu-cores access a common memory block via a shared bus or data path. SMP servers support all combinations of memory vs. cpu (up to the limits of the particular computer). SMP servers are commonly used to run sequential or thread-parallel (e.g. OpenMP) jobs, and they may or may not have GPUs. Currently, the HPCC operates 3 SMP servers named &#039;&#039;&#039;Math, Cryo&#039;&#039;&#039; and &#039;&#039;&#039;Karle&#039;&#039;&#039;. Karle, a server without a GPU, is used for visualizations, visual analytics and interactive MATLAB/Mathematica jobs. &#039;&#039;&#039;Math&#039;&#039;&#039; is a condominium server without a GPU as well. Cryo (a CPU+GPU server) is a specialized server designed to support large-scale multi-core multi-GPU jobs. &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;cluster&#039;&#039;&#039; is defined as a single system comprising a set of SMP servers interconnected with a high-performance network. Specific software coordinates programs on and across these servers in order to perform computationally intensive tasks. Each SMP member of the cluster is called a node. All nodes run independent copies of the same operating system (OS). Some or all of the nodes may incorporate GPUs.  The main cluster at HPCC is a hybrid (CPU+GPU) cluster called &#039;&#039;&#039;Penzias&#039;&#039;&#039;.  Sixty-six (66) of Penzias&#039; nodes have 2 x K20m GPUs, while the 3 fat nodes (nodes with a large number of CPU cores and a large amount of memory) do not have GPUs.   In addition, HPCC operates the cluster &#039;&#039;&#039;Herbert&#039;&#039;&#039;, dedicated solely to education. &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;distributed shared memory&#039;&#039;&#039; computer is a tightly coupled server in which the memory is physically distributed but logically unified into a single block. The system resembles an SMP, but the possible number of CPU cores and amount of memory are far beyond the limits of an SMP.  Because the memory is distributed, access times across the address space are non-uniform; hence this architecture is called Non-Uniform Memory Access (NUMA).  Similarly to SMPs, &#039;&#039;&#039;NUMA&#039;&#039;&#039; systems are typically used for applications such as data mining and decision support, in which processing can be parceled out to a number of processors that collectively work on common data. HPCC operates the &#039;&#039;&#039;NUMA&#039;&#039;&#039; server called &#039;&#039;&#039;Appel&#039;&#039;&#039;.  This server does not have GPUs. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039; Infrastructure systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
o	The Master Head Node (&#039;&#039;&#039;MHN&#039;&#039;&#039;) is a redundant login node from which all jobs on all servers are started. This server is not directly accessible from outside the CSI campus. &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Chizen&#039;&#039;&#039; is a redundant gateway server which provides access to the protected HPCC domain.&lt;br /&gt;
&lt;br /&gt;
o	 &#039;&#039;&#039;Cea&#039;&#039;&#039; is a file transfer node allowing transfer of files between users’ computers and the /scratch space or /global/u/&amp;lt;userid&amp;gt;. &#039;&#039;&#039;Cea&#039;&#039;&#039; is accessible directly (not only via &#039;&#039;&#039;Chizen&#039;&#039;&#039;), but allows only a limited set of shell commands.  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Table 1&#039;&#039;&#039; below provides a quick summary of the attributes of each of the systems available at the HPC Center.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!System&lt;br /&gt;
!Type&lt;br /&gt;
!Type of Jobs&lt;br /&gt;
!Nodes&lt;br /&gt;
!Cores/node &amp;amp; GPU&lt;br /&gt;
!Mem/node&lt;br /&gt;
!Mem/core&lt;br /&gt;
!Multi-core Processor&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Penzias&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Hybrid Cluster&lt;br /&gt;
|Sequential &amp;amp; Parallel jobs w/wo GPU&lt;br /&gt;
|66&lt;br /&gt;
|16 &lt;br /&gt;
2xK20m GPU, PCIe&lt;br /&gt;
|64 GB&lt;br /&gt;
|4 GB&lt;br /&gt;
|Sandy Bridge EP, 2.20 GHz&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Sequential &amp;amp; Parallel jobs&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |1&lt;br /&gt;
|24&lt;br /&gt;
|1500 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Haswell, 2.30 GHz&lt;br /&gt;
|-&lt;br /&gt;
|36&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|-&lt;br /&gt;
|24&lt;br /&gt;
|768 GB&lt;br /&gt;
|32 GB&lt;br /&gt;
|-&lt;br /&gt;
|Appel&lt;br /&gt;
|NUMA&lt;br /&gt;
|Massive Parallel, sequential&lt;br /&gt;
|1&lt;br /&gt;
|384&lt;br /&gt;
|11 TB&lt;br /&gt;
|28 GB&lt;br /&gt;
|Ivy Bridge, 3 GHz&lt;br /&gt;
|-&lt;br /&gt;
|Cryo&lt;br /&gt;
|SMP&lt;br /&gt;
|Sequential and Parallel jobs, with GPU&lt;br /&gt;
|1&lt;br /&gt;
|40 &lt;br /&gt;
8xV100 (32GB) GPU, SXM&lt;br /&gt;
|1500 GB&lt;br /&gt;
|37 GB&lt;br /&gt;
|Skylake, 2.40 GHz&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Blue Moon&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Hybrid Cluster&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Sequential and Parallel jobs w/wo GPU&lt;br /&gt;
|24&lt;br /&gt;
|32&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |192 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |6 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Skylake, 2.10 GHz&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|32 &lt;br /&gt;
2xV100(16GB) GPU, PCIe&lt;br /&gt;
|-&lt;br /&gt;
|Karle&lt;br /&gt;
|SMP&lt;br /&gt;
|Visualization, MATLAB/Mathematica&lt;br /&gt;
|1&lt;br /&gt;
|36*&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|Haswell, 2.30 GHz&lt;br /&gt;
|-&lt;br /&gt;
|Chizen&lt;br /&gt;
|Gateway&lt;br /&gt;
|2&lt;br /&gt;
| colspan=&amp;quot;5&amp;quot; rowspan=&amp;quot;2&amp;quot; |NA&lt;br /&gt;
|-&lt;br /&gt;
|MHN&lt;br /&gt;
|Login Nodes&lt;br /&gt;
|2&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Partitions and jobs==&lt;br /&gt;
The only way to submit job(s) to HPCC servers is through the SLURM batch system.  Any job, regardless of its type (interactive, batch, serial, parallel, etc.), must be submitted via SLURM, which allocates the requested resources on the proper server and starts the job(s) according to a predefined strict fair-share policy. Computational resources (CPU cores, memory, GPUs) are organized in partitions. The main partition is called &#039;&#039;&#039;production&#039;&#039;&#039;. This is a routing partition which distributes jobs among several sub-partitions depending on the job’s requirements; thus a serial job submitted to &#039;&#039;&#039;production&#039;&#039;&#039; will land in the &#039;&#039;&#039;partsequential&#039;&#039;&#039; partition.  No PBS Pro scripts should ever be used, and all existing PBS scripts must be converted to SLURM before use. The table below shows the limits of the partitions.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Partition&lt;br /&gt;
!Max cores/job&lt;br /&gt;
!Max jobs/user&lt;br /&gt;
!Total cores/group&lt;br /&gt;
!Time limits&lt;br /&gt;
|-&lt;br /&gt;
|production&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|-&lt;br /&gt;
|partedu&lt;br /&gt;
|16&lt;br /&gt;
|2&lt;br /&gt;
|216&lt;br /&gt;
|72 Hours&lt;br /&gt;
|-&lt;br /&gt;
|partcryo&lt;br /&gt;
|40&lt;br /&gt;
|40&lt;br /&gt;
|40&lt;br /&gt;
|240 Hours&lt;br /&gt;
|-&lt;br /&gt;
|partmath&lt;br /&gt;
|128&lt;br /&gt;
|128&lt;br /&gt;
|128&lt;br /&gt;
|240 Hours&lt;br /&gt;
|-&lt;br /&gt;
|partmatlab&lt;br /&gt;
|1972&lt;br /&gt;
|50&lt;br /&gt;
|1972&lt;br /&gt;
|240 Hours&lt;br /&gt;
|-&lt;br /&gt;
|partdev&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|4 Hours&lt;br /&gt;
|}&lt;br /&gt;
o	&#039;&#039;&#039;production&#039;&#039;&#039; is the main partition, with assigned resources across all servers (except Math and Cryo). It is a routing partition, so job(s) will be placed in the proper sub-partition automatically. Users may submit sequential, thread-parallel or distributed-parallel jobs with or without GPUs.&lt;br /&gt;
 &lt;br /&gt;
o	&#039;&#039;&#039;partedu&#039;&#039;&#039; is a partition reserved for education, with assigned resources on the educational server Herbert. Partedu is accessible only to students (graduate and/or undergraduate) and their professors who are registered for a class supported by HPCC. Access to this partition is limited to the duration of the class. &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;partcryo&#039;&#039;&#039; is the partition used to start jobs on the Cryo server. Users whose projects require and/or benefit from the availability of 8 GPUs interconnected via the SXM interface (not PCIe) must apply for access to this partition at hpchelp@csi.cuny.edu. &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;partmatlab&#039;&#039;&#039; allows users to run MATLAB&#039;s Distributed Parallel Server across the main cluster. Note, however, that Parallel Toolbox programs can also be submitted via the production partition, but only as thread-parallel jobs. &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;partdev&#039;&#039;&#039; is dedicated to development. All HPCC users have access to this partition, whose assigned resources are one computational node with 16 cores, 64 GB of memory and 2 GPUs (K20m). This partition has a time limit of 4 hours.&lt;br /&gt;
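A minimal batch-script sketch targeting one of these partitions is shown below; the job name and resource numbers are illustrative examples, not site-mandated values (see the table above for the actual limits):&lt;br /&gt;

```shell
#!/bin/bash
# Minimal SLURM script sketch for the partdev partition (4-hour limit).
# The resource values below are illustrative examples only.
#SBATCH --partition partdev
#SBATCH --job-name dev_test
#SBATCH --nodes=1
#SBATCH --ntasks=4
#SBATCH --mem=4000

# SLURM does not change to the submit directory automatically
cd "${SLURM_SUBMIT_DIR:-.}"
echo "Job running on $(hostname)"
```

Saved to a file and submitted with sbatch, the routing described above places the job on the development node.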
&lt;br /&gt;
== Hours of Operation ==&lt;br /&gt;
The second and fourth Tuesday mornings of the month, from 8:00 AM to 12:00 PM, are normally reserved (but not always used) for scheduled maintenance.  Please plan accordingly.  &amp;lt;br/ &amp;gt;&lt;br /&gt;
Unplanned maintenance to remedy system related problems may be scheduled as needed.  Reasonable attempts will be made to inform users running on those systems when these needs arise.&lt;br /&gt;
&lt;br /&gt;
== User Support ==&lt;br /&gt;
Users are encouraged to read this Wiki carefully.  In particular, the sections on compiling and running&lt;br /&gt;
parallel programs, and the section on the SLURM batch queueing system will give you the essential&lt;br /&gt;
knowledge needed to use the CUNY HPC Center systems.  We have strived to maintain the most uniform&lt;br /&gt;
user applications environment possible across the Center&#039;s systems to ease the transfer of applications&lt;br /&gt;
and run scripts among them.  Still, there are some differences, particularly with the SGI (ANDY) and Cray (SALK)&lt;br /&gt;
systems.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY&lt;br /&gt;
community in parallel programming techniques, HPC computing architecture, and the essentials of using our&lt;br /&gt;
systems. Please follow our mailings on the subject and feel free to inquire about such courses.  We regularly&lt;br /&gt;
schedule training visits and classes at the various CUNY campuses.  Please let us know if such a training visit&lt;br /&gt;
is of interest.  In the past, topics have included an overview of parallel programming, GPU programming and&lt;br /&gt;
architecture, using the evolutionary biology software at the HPC Center,  the SLURM queueing system at the&lt;br /&gt;
CUNY HPC Center, Mixed GPU-MPI and OpenMP programming, etc.  Staff has also presented guest lectures&lt;br /&gt;
at formal classes throughout the CUNY campuses.  &lt;br /&gt;
&lt;br /&gt;
Users with further questions or requiring immediate assistance in use of the systems should create a ticket using their HPC account login at:&lt;br /&gt;
&lt;br /&gt;
   [https://hpchelp.csi.cuny.edu hpchelp.csi.cuny.edu]&lt;br /&gt;
&lt;br /&gt;
If you have problems accessing your account and cannot login to the ticketing service, please send an email to:&lt;br /&gt;
&lt;br /&gt;
  [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu] &lt;br /&gt;
&lt;br /&gt;
== Warnings and modes of operation ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. hpchelp@csi.cuny.edu is for questions and account-help communication &#039;&#039;&#039;only&#039;&#039;&#039; and does not accept tickets. For tickets, please use the ticketing system mentioned above. This ensures that the staff member with the most appropriate skill set and job-related responsibility will respond to your questions. During the business week you should expect a response within 48 hours, quite often even the same day. During the weekend you may not get any response. &lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;&#039;E-mails to hpchelp@csi.cuny.edu must have a valid CUNY e-mail as reply address.&#039;&#039;&#039; Messages originated from public mailers (google, hotmail, etc) are filtered out.&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;&#039;Do not send questions to individual CUNY HPC Center staff members directly.&#039;&#039;&#039;  These will be returned to the sender with a polite request to submit a ticket or email the Helpline.  This applies to replies to initial questions as well.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff members are focused on providing high quality support to its user community, but compared&lt;br /&gt;
to other HPC Centers of similar size &#039;&#039;&#039;our staff is extremely lean&#039;&#039;&#039;.  Please make full use of the tools that we have provided (especially&lt;br /&gt;
the Wiki), and feel free to offer suggestions for improved service.  We hope and expect your experience in using&lt;br /&gt;
our systems will be predictably good and productive.&lt;br /&gt;
&lt;br /&gt;
== User Manual ==&lt;br /&gt;
&lt;br /&gt;
An old version of the user manual can be downloaded at: http://cunyhpc.csi.cuny.edu/publications/User_Manual.pdf.  Note that this manual provides PBS batch scripts as examples. Currently CUNY-HPCC uses SLURM, so users must check the brief SLURM manual distributed with new accounts or ask CUNY-HPCC for a copy.&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=PHRAP-PHRED&amp;diff=128</id>
		<title>PHRAP-PHRED</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=PHRAP-PHRED&amp;diff=128"/>
		<updated>2022-10-27T19:43:41Z</updated>

		<summary type="html">&lt;p&gt;James: Text replacement - &amp;quot;[pP][bB][sS]&amp;quot; to &amp;quot;SLURM&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;PHRAP is a program for shotgun sequence assembly, but it can also be used for small sequence&lt;br /&gt;
assemblies.  Its key features include its use of data quality information, both direct (from phred trace&lt;br /&gt;
analysis) and indirect (from pairwise read comparisons), to delineate the likely accurate base calls in each&lt;br /&gt;
read. This helps discriminate repeats. It permits the use of the full reads in assembly, and allows a highly&lt;br /&gt;
accurate consensus sequence to be generated. A probability of error is computed for each consensus&lt;br /&gt;
sequence position, which can be used to focus human editing on particular regions.  This helps to automate&lt;br /&gt;
decision-making about where additional data are needed and provides users of the final sequence with information&lt;br /&gt;
about local variations in quality. The PHRAP documentation is available here [http://www.phrap.org/phredphrap/general.html]&lt;br /&gt;
&lt;br /&gt;
PHRED reads DNA sequencer trace data, calls bases, assigns quality values to the bases, and&lt;br /&gt;
writes the base calls and quality values to output files.  Phred can read trace data from chromatogram&lt;br /&gt;
files in the SCF, ABI, and ESD formats. It automatically determines the file format, and whether&lt;br /&gt;
the chromatogram file was compressed using gzip, bzip2, or UNIX compress.  After calling bases,&lt;br /&gt;
phred writes the sequences to files in either FASTA format, the format suitable for XBAP, PHD format,&lt;br /&gt;
or the SCF format.  Quality values for the bases are written to FASTA format files or PHD files, which&lt;br /&gt;
can be used by the phrap sequence assembly program in order to increase the accuracy of the&lt;br /&gt;
assembled sequence.  The PHRED documentation is available here [http://www.phrap.org/phredphrap/phred.html]&lt;br /&gt;
&lt;br /&gt;
All the tools referenced above are installed at the CUNY HPC Center on both KARLE and ANDY.  They&lt;br /&gt;
may be run directly on KARLE, in either command-line interactive mode, in the background (Unix &lt;br /&gt;
batch), or within the CONSED GUI framework using the &#039;phredPhrap&#039; scripting tool.  The run times&lt;br /&gt;
are generally short.  On ANDY, they should be run from within the CUNY HPC Center SLURM batch&lt;br /&gt;
processing framework if the jobs will take more than a minute or two of wall-clock time.&lt;br /&gt;
&lt;br /&gt;
Below is a sample SLURM batch script for ANDY that reproduces each step the CONSED &#039;phredPhrap&#039; script&lt;br /&gt;
completes when it is run on KARLE.  This script is meant to give you an idea of how any of these tools can&lt;br /&gt;
be run in batch mode on ANDY.  Not all these steps are always required.  SLURM jobs that run only one&lt;br /&gt;
or two of the tools present in this example can also be constructed.  Details on the command-line options&lt;br /&gt;
for each tool can be found in the manuals pointed to above.&lt;br /&gt;
&lt;br /&gt;
Prior to running this example, a directory with example starting input data and the environment for each tool&lt;br /&gt;
must be set up.  One can obtain the standard test case from the PHRED installation tree on ANDY as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$mkdir mytest&lt;br /&gt;
$&lt;br /&gt;
$cd mytest&lt;br /&gt;
$&lt;br /&gt;
$tar -xvf /share/apps/phred/default/data/STD.tar&lt;br /&gt;
$&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will create a collection of directories, some with input files, that will be referenced by the SLURM&lt;br /&gt;
batch script.  These directories are listed here:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[richard.walsh@andy standard]$ls -l &lt;br /&gt;
total 28&lt;br /&gt;
drwx------ 2 richard.walsh hpcadmin 4096 2012-12-28 17:37 chromat_dir&lt;br /&gt;
drwx------ 2 richard.walsh hpcadmin 4096 2012-12-28 17:37 chromats_to_add&lt;br /&gt;
drwx------ 2 richard.walsh hpcadmin 4096 2013-01-02 12:56 edit_dir&lt;br /&gt;
drwx------ 2 richard.walsh hpcadmin 4096 2013-01-25 13:51 phdball_dir&lt;br /&gt;
drwx------ 3 richard.walsh hpcadmin 4096 2013-02-27 12:38 phd_dir&lt;br /&gt;
drwx------ 2 richard.walsh hpcadmin 4096 2013-01-25 13:51 sff_dir&lt;br /&gt;
drwx------ 2 richard.walsh hpcadmin 4096 2013-01-25 13:51 solexa_dir&lt;br /&gt;
[richard.walsh@andy standard]$&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Next, the environment for each of the required tools must be loaded using the modules command.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$&lt;br /&gt;
$module load phred&lt;br /&gt;
$module load phrap&lt;br /&gt;
$module load consed&lt;br /&gt;
$&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Although CONSED is not used directly in this SLURM script, files in its installation tree are referenced&lt;br /&gt;
and its module must therefore be loaded.  With the above steps completed, the following SLURM &lt;br /&gt;
batch script can be run on ANDY:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name PHRED_PHRAP.job&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --mem=2880&lt;br /&gt;
&lt;br /&gt;
# Find out name of master execution host (compute node)&lt;br /&gt;
echo -n &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; SLURM Master compute node is: &amp;quot;&lt;br /&gt;
hostname&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to the working directory in SLURM&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
# Echoing the location of the phred_phrap parameter file&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
echo &amp;quot;Using parameter file: $PHRED_PARAMETER_FILE&amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Define the location of the consed screen files for cross_match&lt;br /&gt;
export SCREEN_PATH=${CONSED_HOME}/lib/screenLibs&lt;br /&gt;
&lt;br /&gt;
# Just point to the serial executable to run&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; Begin PHRED-PHRAP Batch Serial Run ...&amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; Running phred ... &amp;quot;&lt;br /&gt;
phred -id chromat_dir -pd phd_dir &amp;gt; phred.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
echo &amp;quot;Done ...&amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; Running phd2fasta ... &amp;quot;&lt;br /&gt;
phd2fasta -id phd_dir -os seqs_fasta -oq seqs_fasta.screen.qual &amp;gt; phd2fasta.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
echo &amp;quot;Done ...&amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; Running cross_match ... &amp;quot;&lt;br /&gt;
cross_match seqs_fasta ${SCREEN_PATH}/vector.seq -minmatch 12 -minscore 20 -screen &amp;gt; cross_match.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
echo &amp;quot;Done ...&amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; Running phrap ... &amp;quot;&lt;br /&gt;
phrap seqs_fasta.screen -new_ace &amp;gt; phrap.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
echo &amp;quot;Done ...&amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; End   PHRED-PHRAP Batch Serial Run ...&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This script should be copied into a file in the same directory that you &#039;untar-ed&#039; the files&lt;br /&gt;
in above (here the name is &#039;mytest&#039;).  This would typically be done in an editor like &#039;vi&#039;&lt;br /&gt;
or &#039;emacs&#039;.  Assuming that the name given to this SLURM script file is &#039;phred_phrap.job&#039;, the&lt;br /&gt;
SLURM job can be submitted with the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sbatch phred_phrap.job&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This script walks the original sequence found in the &#039;chromat_dir&#039; through all of the steps&lt;br /&gt;
that the &#039;phredPhrap&#039; script would complete interactively on KARLE.  Notice that four distinct&lt;br /&gt;
programs are run, each with their own set of options.  They produce all the required &#039;seqs_fasta&#039;&lt;br /&gt;
files required for viewing in CONSED.  Users may wish to run only one of the tools in which&lt;br /&gt;
case only one execution line for perhaps &#039;phred&#039; or &#039;phd2fasta&#039; would be required in the script.&lt;br /&gt;
&lt;br /&gt;
It should take less than a few minutes to run and will produce SLURM output and error files beginning with&lt;br /&gt;
the job name &#039;PHRED_PHRAP&#039;, along with a number of tool-specific output files.  The primary&lt;br /&gt;
application results will be written into the user-specified file at the end of each command&lt;br /&gt;
line after the greater-than sign. Here, four executables are run and write named &#039;XXX.out&#039; output&lt;br /&gt;
files. The expression &#039;2&amp;gt;&amp;amp;1&#039; combines Unix standard output from the program with Unix standard error.&lt;br /&gt;
Users should always explicitly specify the name of the application&#039;s output file in this way to ensure&lt;br /&gt;
that it is written directly into the user&#039;s working directory which has much more disk space than the&lt;br /&gt;
SLURM spool directory on /var.&lt;br /&gt;
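The redirection idiom discussed above can be seen in isolation in this small sketch (the file name is arbitrary):&lt;br /&gt;

```shell
# '>' sends the program's standard output to the named file, and '2>&1'
# merges standard error into the same destination, so both normal output
# and error messages land in one log in the working directory.
echo "normal output"  >  combined.log 2>&1
ls /no/such/path      >> combined.log 2>&1   # this error text is captured too
grep -c "" combined.log                      # count captured lines: prints 2
rm combined.log
```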
&lt;br /&gt;
Details on the meaning of the SLURM script options are covered above in the SLURM section.  The most important lines&lt;br /&gt;
are &#039;#SBATCH --nodes=1&#039;, &#039;#SBATCH --ntasks=1&#039; and &#039;#SBATCH --mem=2880&#039;.  Together they instruct&lt;br /&gt;
SLURM to select 1 resource &#039;chunk&#039; with 1 processor (core) and 2,880 MB of memory for the job,&lt;br /&gt;
and to place that chunk on any compute node with the required resources available. &lt;br /&gt;
All the jobs run with this script are assumed by SLURM to be serial jobs.&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=File:CUNY-HPCC-CORNER-LOGO.jpg&amp;diff=127</id>
		<title>File:CUNY-HPCC-CORNER-LOGO.jpg</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=File:CUNY-HPCC-CORNER-LOGO.jpg&amp;diff=127"/>
		<updated>2022-10-27T19:33:48Z</updated>

		<summary type="html">&lt;p&gt;James: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Applications_Environment/gromacs&amp;diff=126</id>
		<title>Applications Environment/gromacs</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Applications_Environment/gromacs&amp;diff=126"/>
		<updated>2022-10-27T19:33:39Z</updated>

		<summary type="html">&lt;p&gt;James: Text replacement - &amp;quot;[pP][bB][sS]&amp;quot; to &amp;quot;SLURM&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;h1&amp;gt;&amp;lt;b&amp;gt;GROMACS&amp;lt;/b&amp;gt;&amp;lt;/h1&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
__TOC__&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;font color=red&amp;gt;&amp;lt;b&amp;gt;Description:&amp;lt;/b&amp;gt;&amp;lt;/font color&amp;gt; &amp;lt;b&amp;gt;&amp;lt;i&amp;gt;GROMACS&amp;lt;/i&amp;gt;&amp;lt;/b&amp;gt; is a highly optimized, fast, Molecular Dynamics (MD) simulation package primarily used for research on proteins, lipids, and polymers. The package can also be applied to a wide variety of chemical and biological research questions. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;font color=red&amp;gt;&amp;lt;b&amp;gt;Additional Notes:&amp;lt;/b&amp;gt;&amp;lt;/font color&amp;gt;  &amp;lt;b&amp;gt;&amp;lt;i&amp;gt;GROMACS 5.X&amp;lt;/i&amp;gt;&amp;lt;/b&amp;gt; represents the newest version of GROMACS and has been optimized for both parallel MPI and GPU performance.  This version is installed on PENZIAS and is recommended. GROMACS 4.X is also installed. There are significant differences between versions of GROMACS, which may affect the simulation process.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;font color=red&amp;gt;&amp;lt;b&amp;gt;Availability:&amp;lt;/b&amp;gt;&amp;lt;/font color&amp;gt; PENZIAS and APPEL&lt;br /&gt;
::&amp;lt;font color=black&amp;gt;&amp;lt;b&amp;gt;MPI:&amp;lt;/b&amp;gt;&amp;lt;/font color&amp;gt; Yes&lt;br /&gt;
::&amp;lt;font color=black&amp;gt;&amp;lt;b&amp;gt;SMP:&amp;lt;/b&amp;gt;&amp;lt;/font color&amp;gt; No&lt;br /&gt;
::&amp;lt;font color=black&amp;gt;&amp;lt;b&amp;gt;GPU:&amp;lt;/b&amp;gt;&amp;lt;/font color&amp;gt; Yes on PENZIAS, No on APPEL&lt;br /&gt;
&lt;br /&gt;
&amp;lt;font color=red&amp;gt;&amp;lt;b&amp;gt;Module file:&amp;lt;/b&amp;gt;&amp;lt;/font color&amp;gt; gromacs&lt;br /&gt;
&lt;br /&gt;
&amp;lt;font color=red&amp;gt;&amp;lt;b&amp;gt;Citation:&amp;lt;/b&amp;gt;&amp;lt;/font color&amp;gt; Include in published paper the following citation regarding use of &amp;lt;b&amp;gt;&amp;lt;i&amp;gt;GROMACS&amp;lt;/i&amp;gt;&amp;lt;/b&amp;gt;:&lt;br /&gt;
1. Berendsen, et al. (1995) Comp. Phys. Comm. 91: 43-56, Lindahl, et al. (2001) J. Mol. Model. 7: 306-317, Van der Spoel, et al. (2005) J. Comput. Chem. 26: 1701-1718, Hess, et al. (2008) J. Chem. Theory Comput. 4: 435-447, Pronk, et al. (2013) Bioinformatics 29 845-854.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;font color=red&amp;gt;&amp;lt;b&amp;gt;Documentation:&amp;lt;/b&amp;gt;&amp;lt;/font color&amp;gt; http://www.gromacs.org/Documentation&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;font color=red&amp;gt;&amp;lt;b&amp;gt;Tutorials:&amp;lt;/b&amp;gt;&amp;lt;/font color&amp;gt;  http://www.gromacs.org/Documentation/Tutorials&lt;br /&gt;
&lt;br /&gt;
&amp;lt;font color=red&amp;gt;&amp;lt;b&amp;gt;Related Packages:&amp;lt;/b&amp;gt;&amp;lt;/font color&amp;gt;&lt;br /&gt;
::LAMMPS&lt;br /&gt;
::NWChem&lt;br /&gt;
::TINKER  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;font color=red&amp;gt;&amp;lt;b&amp;gt;General notes of use: &amp;lt;/b&amp;gt;&amp;lt;/font color&amp;gt;&lt;br /&gt;
The GROMACS package comes with a set of tools which can be used to prepare the molecular system for simulation. The genbox tool is no longer available in GROMACS 5.X. Also, the tools in GROMACS 5.X cannot be called directly (as they were with GROMACS 4.X), but only via a wrapper called gmx (i.e. gmx pdb2gmx). Users should consult the tutorial(s)/documentation specific to the particular version. There are also significant changes in the algorithms used in GROMACS 5.X vs. those in GROMACS 4.X.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;font color=red&amp;gt;&amp;lt;b&amp;gt;GROMACS parallelization schemes:	&amp;lt;/b&amp;gt;&amp;lt;/font color&amp;gt;&lt;br /&gt;
Version 5.X on PENZIAS utilizes MPI parallelism, OpenMP parallelism and the GPU-direct (P2P) mechanism to run simulations across several GPUs on different nodes. GROMACS 4.X installed on ANDY utilizes OpenMP (in-node, SMP) and MPI parallelism. The GPU-enabled 4.X version on PENZIAS utilizes MPI and OpenMP parallelism as well, but also uses GPUs as accelerators. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;font color=red&amp;gt;&amp;lt;b&amp;gt;Use of GPUs: &amp;lt;/b&amp;gt;&amp;lt;/font color&amp;gt;&lt;br /&gt;
In the GPU-enabled version of GROMACS, the Particle Mesh Ewald (PME) calculations are always done on the CPU, while particle-particle (PP) interactions are done on the GPU. As such, the optimal ratio between the number of CPUs and the number of GPUs depends on the type of molecular system simulated. For the x86_64 architecture the recommended number of atoms per CPU core is typically about 100 to 150. As a rule of thumb, for a computer with K20 GPUs the ratio between the number of CPUs and GPUs should be at least 4:1, but the optimal number depends on the molecular system simulated.  &lt;br /&gt;
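A back-of-the-envelope sizing sketch based on the rule of thumb above; the atom count and per-core figure below are made-up illustrative numbers, not measured values:&lt;br /&gt;

```shell
# Rule-of-thumb sizing: ~100-150 atoms per CPU core, and at least
# 4 CPU cores per K20 GPU. All numbers below are illustrative only.
atoms=48000
atoms_per_core=120
cores=$(( atoms / atoms_per_core ))   # 48000 / 120 = 400 cores
max_gpus=$(( cores / 4 ))             # keep the CPU:GPU ratio >= 4:1
echo "cores=$cores max_gpus=$max_gpus"
```

For this hypothetical 48,000-atom system the sketch suggests about 400 cores and at most 100 GPUs; in practice the optimum should be found by benchmarking the actual system.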
&lt;br /&gt;
&amp;lt;font color=red&amp;gt;&amp;lt;b&amp;gt;Use: &amp;lt;/b&amp;gt;&amp;lt;/font color&amp;gt;&lt;br /&gt;
&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt; &lt;br /&gt;
&lt;br /&gt;
PENZIAS:&lt;br /&gt;
 module load gromacs/&amp;lt;font color=red&amp;gt;x.x.x&amp;lt;/font color&amp;gt;&lt;br /&gt;
&lt;br /&gt;
ANDY:&lt;br /&gt;
 module load gromacs&lt;br /&gt;
 &amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For meaning and proper choice of values in &amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&amp;lt;font color= red&amp;gt;&amp;lt;b&amp;gt;&amp;lt;chunks&amp;gt;&amp;lt;/b&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font color&amp;gt; and &amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&amp;lt;font color=red&amp;gt;&amp;lt;b&amp;gt;&amp;lt;cpus&amp;gt;&amp;lt;/b&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font color&amp;gt; fields please read the section [[Submitting_Jobs|“Writing a job submit script.”]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;font color=red&amp;gt;&amp;lt;b&amp;gt;Example:&amp;lt;/b&amp;gt;&amp;lt;/font color&amp;gt;  The following example SLURM scripts are described below: (1) a general script template; (2) Penzias: (a) 4 CPUs and 1 GPU, (b) 4 CPUs and 2 GPUs, (c) 2 MPI threads per GPU; and (3) a submit script for Andy.&lt;br /&gt;
==General submit script template==&lt;br /&gt;
&lt;br /&gt;
The sample submit script should be modified based upon the system (PENZIAS or ANDY) and the desired CPU/GPU usage.&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash &lt;br /&gt;
 #SBATCH --job-name water_job &lt;br /&gt;
 #SBATCH --partition production &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;font color=red&amp;gt;&amp;lt;chunks&amp;gt;&amp;lt;/font color&amp;gt;&lt;br /&gt;
 #SBATCH --ntasks=&amp;lt;font color=red&amp;gt;&amp;lt;cpus&amp;gt;&amp;lt;/font color&amp;gt;&lt;br /&gt;
 #SBATCH --gres=gpu:&amp;lt;font color=red&amp;gt;&amp;lt;gpus&amp;gt;&amp;lt;/font color&amp;gt;&lt;br /&gt;
 #SBATCH --mem=&amp;lt;font color=red&amp;gt;&amp;lt;memory_mb&amp;gt;&amp;lt;/font color&amp;gt; &lt;br /&gt;
    &lt;br /&gt;
 &lt;br /&gt;
 # You must explicitly change to your working directory in SLURM &lt;br /&gt;
 &lt;br /&gt;
 cd $SLURM_SUBMIT_DIR  &lt;br /&gt;
 &lt;br /&gt;
 mpirun -np &amp;lt;font color=red&amp;gt;&amp;lt;chunks*cpus&amp;gt;&amp;lt;/font color&amp;gt; gmx mdrun -s topol.tpr -o md_para.trr -c md_para.gro -e md_para.edr -g md_para.log -gpu_id 0 &amp;gt; water.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
&lt;br /&gt;
==PENZIAS==&lt;br /&gt;
=====A. Example Submit Script: 4 CPUs and 1 GPU=====&lt;br /&gt;
In the script below, the SLURM scheduler is asked to schedule 4 CPUs and 1 GPU for a job. &lt;br /&gt;
&lt;br /&gt;
On PENZIAS, there are 2 GPU-enabled versions of GROMACS: GROMACS 5.X and GROMACS 4.X.  The 5.X version uses a more advanced communication protocol between GPUs and has improved CPU performance.  The following SLURM script can be used as a model of how to run an MPI-parallel GROMACS 5.X job with GPUs on PENZIAS. First, load the module for GROMACS with the command: &lt;br /&gt;
 module load gromacs/5.0.4&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash &lt;br /&gt;
 #SBATCH --job-name water_job &lt;br /&gt;
 #SBATCH --partition production &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;font color=red&amp;gt;&amp;lt;chunks&amp;gt;&amp;lt;/font color&amp;gt;&lt;br /&gt;
 #SBATCH --ntasks=&amp;lt;font color=red&amp;gt;&amp;lt;cpus&amp;gt;&amp;lt;/font color&amp;gt;&lt;br /&gt;
 #SBATCH --gres=gpu:1&lt;br /&gt;
 #SBATCH --mem=3660mb   &lt;br /&gt;
 &lt;br /&gt;
 # You must explicitly change to your working directory in SLURM &lt;br /&gt;
 &lt;br /&gt;
 cd $SLURM_SUBMIT_DIR  &lt;br /&gt;
 &lt;br /&gt;
 mpirun -np 4 gmx mdrun -s topol.tpr -o md_para.trr -c md_para.gro -e md_para.edr -g md_para.log -gpu_id 0 &amp;gt; water.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
&lt;br /&gt;
=====B. Example Submit Script: 4 CPUs and 2 GPUs=====&lt;br /&gt;
On PENZIAS, each virtual node has 2 GPUs. The following script shows how to use 2 GPUs: &lt;br /&gt;
 &lt;br /&gt;
 #!/bin/bash &lt;br /&gt;
 #SBATCH --job-name water_job &lt;br /&gt;
 #SBATCH --partition production &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;font color=red&amp;gt;&amp;lt;chunks&amp;gt;&amp;lt;/font color&amp;gt;&lt;br /&gt;
 #SBATCH --ntasks=&amp;lt;font color=red&amp;gt;&amp;lt;cpus&amp;gt;&amp;lt;/font color&amp;gt;&lt;br /&gt;
 #SBATCH --gres=gpu:2&lt;br /&gt;
 #SBATCH --mem=3660 &lt;br /&gt;
 &lt;br /&gt;
 cd $SLURM_SUBMIT_DIR   &lt;br /&gt;
 &lt;br /&gt;
 mpirun -np 32 gmx mdrun -s topol.tpr -o md_para.trr -c md_para.gro -e md_para.edr -g md_para.log -gpu_id 01 &amp;gt; water.out 2&amp;gt;&amp;amp;1 &lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
The above script will allocate 32 CPUs and 8 GPUs spread across several nodes; note that &#039;--gres&#039; requests GPUs per node, not per job.&lt;br /&gt;
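Because &#039;--gres&#039; counts GPUs per node, the job-wide GPU total is the node count times the per-node request. The following sketch only illustrates that arithmetic; the node count of 4 is a hypothetical stand-in for the value SLURM reports in SLURM_JOB_NUM_NODES inside a running job.&lt;br /&gt;

```shell
# Hypothetical illustration: total GPUs = nodes x GPUs-per-node.
# Inside a real job, read the node count from $SLURM_JOB_NUM_NODES instead.
nodes=4          # assumed node count (stand-in for SLURM_JOB_NUM_NODES)
gpus_per_node=2  # matches a per-node request of --gres=gpu:2
total_gpus=$((nodes * gpus_per_node))
echo "total GPUs for the job: ${total_gpus}"
```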
&lt;br /&gt;
=====C. Example Submit Script: 2 MPI threads per GPU=====&lt;br /&gt;
For some molecular systems it is beneficial to run two MPI threads per GPU. On PENZIAS, that can be achieved with a small change to the -gpu_id field of the above script (i.e., using 0011):&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash &lt;br /&gt;
 #SBATCH --job-name water_job &lt;br /&gt;
 #SBATCH --partition production &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;font color=red&amp;gt;&amp;lt;chunks&amp;gt;&amp;lt;/font color&amp;gt;&lt;br /&gt;
 #SBATCH --ntasks=&amp;lt;font color=red&amp;gt;&amp;lt;cpus&amp;gt;&amp;lt;/font color&amp;gt;&lt;br /&gt;
 #SBATCH --gres=gpu:2&lt;br /&gt;
 #SBATCH --mem=3660 &lt;br /&gt;
 &lt;br /&gt;
 cd $SLURM_SUBMIT_DIR   &lt;br /&gt;
 &lt;br /&gt;
 mpirun -np 32 gmx mdrun -s topol.tpr -o md_para.trr -c md_para.gro -e md_para.edr -g md_para.log -gpu_id 0011 &amp;gt; water.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
&lt;br /&gt;
==ANDY==&lt;br /&gt;
The following SLURM script can be used as a model of how to run MPI-only simulations with GROMACS 4.X on ANDY. First, load the module for GROMACS with the command:&lt;br /&gt;
 module load gromacs &lt;br /&gt;
&lt;br /&gt;
The SLURM script is as follows:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash &lt;br /&gt;
 #SBATCH --job-name GRMX_32bit &lt;br /&gt;
 #SBATCH --partition production_qdr &lt;br /&gt;
 #SBATCH --nodes=16&lt;br /&gt;
 #SBATCH --ntasks=16&lt;br /&gt;
 #SBATCH --mem=2880 &lt;br /&gt;
   &lt;br /&gt;
 &lt;br /&gt;
 # You must explicitly change to your working directory in SLURM &lt;br /&gt;
 &lt;br /&gt;
 cd $SLURM_SUBMIT_DIR   &lt;br /&gt;
  &lt;br /&gt;
 &lt;br /&gt;
 mpirun -np 16 mdrun_mpi -px -pf -s md_para.tpr -o md_para.trr -c md_para.gro -e md_para.edr  -g md_para.log &amp;gt; GRMX_32bit.out 2&amp;gt;&amp;amp;1 &lt;br /&gt;
&lt;br /&gt;
In the above example, the topology file (.tpr), trajectory file (.trr), coordinate file (.gro), etc. are all prepared with GROMACS tools (not shown here).&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=MRBAYES&amp;diff=125</id>
		<title>MRBAYES</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=MRBAYES&amp;diff=125"/>
		<updated>2022-10-27T19:33:01Z</updated>

		<summary type="html">&lt;p&gt;James: Text replacement - &amp;quot;[pP][bB][sS]&amp;quot; to &amp;quot;SLURM&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;MrBayes is installed on ANDY and PENZIAS (GPU-enabled version). In order to &lt;br /&gt;
set up the environment required to run MrBayes, the corresponding module needs to be loaded first. This is done with &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load mrbayes&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Running MrBayes is a two-step process that first requires the creation of the&lt;br /&gt;
NEXUS-formatted MrBayes input file and then the SLURM script to run it.  MrBayes can be&lt;br /&gt;
run in serial, MPI-parallel, or GPU-accelerated mode (on PENZIAS only).&lt;br /&gt;
&lt;br /&gt;
Here is a NEXUS input file (&#039;&#039;&#039;primates.nex&#039;&#039;&#039;) that includes both a DATA block and a MRBAYES block.&lt;br /&gt;
The MRBAYES block simply contains the MrBayes runtime commands terminated with a semi-colon.&lt;br /&gt;
The example below shows 12 mitochondrial DNA sequences of primates and yields at least 1,000&lt;br /&gt;
samples from the posterior probability distribution. If you need more detail on generating&lt;br /&gt;
the NEXUS file or on MrBayes in general, please check the MrBayes Wiki here [http://mrbayes.sourceforge.net]&lt;br /&gt;
and the online manual here [http://mrbayes.scs.fsu.edu/wiki/index.php/Main_Page on-line manual].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#NEXUS&lt;br /&gt;
&lt;br /&gt;
begin data;&lt;br /&gt;
dimensions ntax=12 nchar=898;&lt;br /&gt;
format datatype=dna interleave=no gap=-;&lt;br /&gt;
matrix&lt;br /&gt;
Tarsius_syrichta	AAGTTTCATTGGAGCCACCACTCTTATAATTGCCCATGGCCTCACCTCCTCCCTATTATTTTGCCTAGCAAATACAAACTACGAACGAGTCCACAGTCGAACAATAGCACTAGCCCGTGGCCTTCAAACCCTATTACCTCTTGCAGCAACATGATGACTCCTCGCCAGCTTAACCAACCTGGCCCTTCCCCCAACAATTAATTTAATCGGTGAACTGTCCGTAATAATAGCAGCATTTTCATGGTCACACCTAACTATTATCTTAGTAGGCCTTAACACCCTTATCACCGCCCTATATTCCCTATATATACTAATCATAACTCAACGAGGAAAATACACATATCATATCAACAATATCATGCCCCCTTTCACCCGAGAAAATACATTAATAATCATACACCTATTTCCCTTAATCCTACTATCTACCAACCCCAAAGTAATTATAGGAACCATGTACTGTAAATATAGTTTAAACAAAACATTAGATTGTGAGTCTAATAATAGAAGCCCAAAGATTTCTTATTTACCAAGAAAGTA-TGCAAGAACTGCTAACTCATGCCTCCATATATAACAATGTGGCTTTCTT-ACTTTTAAAGGATAGAAGTAATCCATCGGTCTTAGGAACCGAAAA-ATTGGTGCAACTCCAAATAAAAGTAATAAATTTATTTTCATCCTCCATTTTACTATCACTTACACTCTTAATTACCCCATTTATTATTACAACAACTAAAAAATATGAAACACATGCATACCCTTACTACGTAAAAAACTCTATCGCCTGCGCATTTATAACAAGCCTAGTCCCAATGCTCATATTTCTATACACAAATCAAGAAATAATCATTTCCAACTGACATTGAATAACGATTCATACTATCAAATTATGCCTAAGCTT&lt;br /&gt;
Lemur_catta		AAGCTTCATAGGAGCAACCATTCTAATAATCGCACATGGCCTTACATCATCCATATTATTCTGTCTAGCCAACTCTAACTACGAACGAATCCATAGCCGTACAATACTACTAGCACGAGGGATCCAAACCATTCTCCCTCTTATAGCCACCTGATGACTACTCGCCAGCCTAACTAACCTAGCCCTACCCACCTCTATCAATTTAATTGGCGAACTATTCGTCACTATAGCATCCTTCTCATGATCAAACATTACAATTATCTTAATAGGCTTAAATATGCTCATCACCGCTCTCTATTCCCTCTATATATTAACTACTACACAACGAGGAAAACTCACATATCATTCGCACAACCTAAACCCATCCTTTACACGAGAAAACACCCTTATATCCATACACATACTCCCCCTTCTCCTATTTACCTTAAACCCCAAAATTATTCTAGGACCCACGTACTGTAAATATAGTTTAAA-AAAACACTAGATTGTGAATCCAGAAATAGAAGCTCAAAC-CTTCTTATTTACCGAGAAAGTAATGTATGAACTGCTAACTCTGCACTCCGTATATAAAAATACGGCTATCTCAACTTTTAAAGGATAGAAGTAATCCATTGGCCTTAGGAGCCAAAAA-ATTGGTGCAACTCCAAATAAAAGTAATAAATCTATTATCCTCTTTCACCCTTGTCACACTGATTATCCTAACTTTACCTATCATTATAAACGTTACAAACATATACAAAAACTACCCCTATGCACCATACGTAAAATCTTCTATTGCATGTGCCTTCATCACTAGCCTCATCCCAACTATATTATTTATCTCCTCAGGACAAGAAACAATCATTTCCAACTGACATTGAATAACAATCCAAACCCTAAAACTATCTATTAGCTT&lt;br /&gt;
Homo_sapiens		AAGCTTCACCGGCGCAGTCATTCTCATAATCGCCCACGGGCTTACATCCTCATTACTATTCTGCCTAGCAAACTCAAACTACGAACGCACTCACAGTCGCATCATAATCCTCTCTCAAGGACTTCAAACTCTACTCCCACTAATAGCTTTTTGATGACTTCTAGCAAGCCTCGCTAACCTCGCCTTACCCCCCACTATTAACCTACTGGGAGAACTCTCTGTGCTAGTAACCACGTTCTCCTGATCAAATATCACTCTCCTACTTACAGGACTCAACATACTAGTCACAGCCCTATACTCCCTCTACATATTTACCACAACACAATGGGGCTCACTCACCCACCACATTAACAACATAAAACCCTCATTCACACGAGAAAACACCCTCATGTTCATACACCTATCCCCCATTCTCCTCCTATCCCTCAACCCCGACATCATTACCGGGTTTTCCTCTTGTAAATATAGTTTAACCAAAACATCAGATTGTGAATCTGACAACAGAGGCTTA-CGACCCCTTATTTACCGAGAAAGCT-CACAAGAACTGCTAACTCATGCCCCCATGTCTAACAACATGGCTTTCTCAACTTTTAAAGGATAACAGCTATCCATTGGTCTTAGGCCCCAAAAATTTTGGTGCAACTCCAAATAAAAGTAATAACCATGCACACTACTATAACCACCCTAACCCTGACTTCCCTAATTCCCCCCATCCTTACCACCCTCGTTAACCCTAACAAAAAAAACTCATACCCCCATTATGTAAAATCCATTGTCGCATCCACCTTTATTATCAGTCTCTTCCCCACAACAATATTCATGTGCCTAGACCAAGAAGTTATTATCTCGAACTGACACTGAGCCACAACCCAAACAACCCAGCTCTCCCTAAGCTT&lt;br /&gt;
Pan	  		AAGCTTCACCGGCGCAATTATCCTCATAATCGCCCACGGACTTACATCCTCATTATTATTCTGCCTAGCAAACTCAAATTATGAACGCACCCACAGTCGCATCATAATTCTCTCCCAAGGACTTCAAACTCTACTCCCACTAATAGCCTTTTGATGACTCCTAGCAAGCCTCGCTAACCTCGCCCTACCCCCTACCATTAATCTCCTAGGGGAACTCTCCGTGCTAGTAACCTCATTCTCCTGATCAAATACCACTCTCCTACTCACAGGATTCAACATACTAATCACAGCCCTGTACTCCCTCTACATGTTTACCACAACACAATGAGGCTCACTCACCCACCACATTAATAACATAAAGCCCTCATTCACACGAGAAAATACTCTCATATTTTTACACCTATCCCCCATCCTCCTTCTATCCCTCAATCCTGATATCATCACTGGATTCACCTCCTGTAAATATAGTTTAACCAAAACATCAGATTGTGAATCTGACAACAGAGGCTCA-CGACCCCTTATTTACCGAGAAAGCT-TATAAGAACTGCTAATTCATATCCCCATGCCTGACAACATGGCTTTCTCAACTTTTAAAGGATAACAGCCATCCGTTGGTCTTAGGCCCCAAAAATTTTGGTGCAACTCCAAATAAAAGTAATAACCATGTATACTACCATAACCACCTTAACCCTAACTCCCTTAATTCTCCCCATCCTCACCACCCTCATTAACCCTAACAAAAAAAACTCATATCCCCATTATGTGAAATCCATTATCGCGTCCACCTTTATCATTAGCCTTTTCCCCACAACAATATTCATATGCCTAGACCAAGAAGCTATTATCTCAAACTGGCACTGAGCAACAACCCAAACAACCCAGCTCTCCCTAAGCTT&lt;br /&gt;
Gorilla   		AAGCTTCACCGGCGCAGTTGTTCTTATAATTGCCCACGGACTTACATCATCATTATTATTCTGCCTAGCAAACTCAAACTACGAACGAACCCACAGCCGCATCATAATTCTCTCTCAAGGACTCCAAACCCTACTCCCACTAATAGCCCTTTGATGACTTCTGGCAAGCCTCGCCAACCTCGCCTTACCCCCCACCATTAACCTACTAGGAGAGCTCTCCGTACTAGTAACCACATTCTCCTGATCAAACACCACCCTTTTACTTACAGGATCTAACATACTAATTACAGCCCTGTACTCCCTTTATATATTTACCACAACACAATGAGGCCCACTCACACACCACATCACCAACATAAAACCCTCATTTACACGAGAAAACATCCTCATATTCATGCACCTATCCCCCATCCTCCTCCTATCCCTCAACCCCGATATTATCACCGGGTTCACCTCCTGTAAATATAGTTTAACCAAAACATCAGATTGTGAATCTGATAACAGAGGCTCA-CAACCCCTTATTTACCGAGAAAGCT-CGTAAGAGCTGCTAACTCATACCCCCGTGCTTGACAACATGGCTTTCTCAACTTTTAAAGGATAACAGCTATCCATTGGTCTTAGGACCCAAAAATTTTGGTGCAACTCCAAATAAAAGTAATAACTATGTACGCTACCATAACCACCTTAGCCCTAACTTCCTTAATTCCCCCTATCCTTACCACCTTCATCAATCCTAACAAAAAAAGCTCATACCCCCATTACGTAAAATCTATCGTCGCATCCACCTTTATCATCAGCCTCTTCCCCACAACAATATTTCTATGCCTAGACCAAGAAGCTATTATCTCAAGCTGACACTGAGCAACAACCCAAACAATTCAACTCTCCCTAAGCTT&lt;br /&gt;
Pongo     		AAGCTTCACCGGCGCAACCACCCTCATGATTGCCCATGGACTCACATCCTCCCTACTGTTCTGCCTAGCAAACTCAAACTACGAACGAACCCACAGCCGCATCATAATCCTCTCTCAAGGCCTTCAAACTCTACTCCCCCTAATAGCCCTCTGATGACTTCTAGCAAGCCTCACTAACCTTGCCCTACCACCCACCATCAACCTTCTAGGAGAACTCTCCGTACTAATAGCCATATTCTCTTGATCTAACATCACCATCCTACTAACAGGACTCAACATACTAATCACAACCCTATACTCTCTCTATATATTCACCACAACACAACGAGGTACACCCACACACCACATCAACAACATAAAACCTTCTTTCACACGCGAAAATACCCTCATGCTCATACACCTATCCCCCATCCTCCTCTTATCCCTCAACCCCAGCATCATCGCTGGGTTCGCCTACTGTAAATATAGTTTAACCAAAACATTAGATTGTGAATCTAATAATAGGGCCCCA-CAACCCCTTATTTACCGAGAAAGCT-CACAAGAACTGCTAACTCTCACT-CCATGTGTGACAACATGGCTTTCTCAGCTTTTAAAGGATAACAGCTATCCCTTGGTCTTAGGATCCAAAAATTTTGGTGCAACTCCAAATAAAAGTAACAGCCATGTTTACCACCATAACTGCCCTCACCTTAACTTCCCTAATCCCCCCCATTACCGCTACCCTCATTAACCCCAACAAAAAAAACCCATACCCCCACTATGTAAAAACGGCCATCGCATCCGCCTTTACTATCAGCCTTATCCCAACAACAATATTTATCTGCCTAGGACAAGAAACCATCGTCACAAACTGATGCTGAACAACCACCCAGACACTACAACTCTCACTAAGCTT&lt;br /&gt;
Hylobates 		AAGCTTTACAGGTGCAACCGTCCTCATAATCGCCCACGGACTAACCTCTTCCCTGCTATTCTGCCTTGCAAACTCAAACTACGAACGAACTCACAGCCGCATCATAATCCTATCTCGAGGGCTCCAAGCCTTACTCCCACTGATAGCCTTCTGATGACTCGCAGCAAGCCTCGCTAACCTCGCCCTACCCCCCACTATTAACCTCCTAGGTGAACTCTTCGTACTAATGGCCTCCTTCTCCTGGGCAAACACTACTATTACACTCACCGGGCTCAACGTACTAATCACGGCCCTATACTCCCTTTACATATTTATCATAACACAACGAGGCACACTTACACACCACATTAAAAACATAAAACCCTCACTCACACGAGAAAACATATTAATACTTATGCACCTCTTCCCCCTCCTCCTCCTAACCCTCAACCCTAACATCATTACTGGCTTTACTCCCTGTAAACATAGTTTAATCAAAACATTAGATTGTGAATCTAACAATAGAGGCTCG-AAACCTCTTGCTTACCGAGAAAGCC-CACAAGAACTGCTAACTCACTATCCCATGTATGACAACATGGCTTTCTCAACTTTTAAAGGATAACAGCTATCCATTGGTCTTAGGACCCAAAAATTTTGGTGCAACTCCAAATAAAAGTAATAGCAATGTACACCACCATAGCCATTCTAACGCTAACCTCCCTAATTCCCCCCATTACAGCCACCCTTATTAACCCCAATAAAAAGAACTTATACCCGCACTACGTAAAAATGACCATTGCCTCTACCTTTATAATCAGCCTATTTCCCACAATAATATTCATGTGCACAGACCAAGAAACCATTATTTCAAACTGACACTGAACTGCAACCCAAACGCTAGAACTCTCCCTAAGCTT&lt;br /&gt;
Macaca_fuscata		AAGCTTTTCCGGCGCAACCATCCTTATGATCGCTCACGGACTCACCTCTTCCATATATTTCTGCCTAGCCAATTCAAACTATGAACGCACTCACAACCGTACCATACTACTGTCCCGAGGACTTCAAATCCTACTTCCACTAACAGCCTTTTGATGATTAACAGCAAGCCTTACTAACCTTGCCCTACCCCCCACTATCAATCTACTAGGTGAACTCTTTGTAATCGCAACCTCATTCTCCTGATCCCATATCACCATTATGCTAACAGGACTTAACATATTAATTACGGCCCTCTACTCTCTCCACATATTCACTACAACACAACGAGGAACACTCACACATCACATAATCAACATAAAGCCCCCCTTCACACGAGAAAACACATTAATATTCATACACCTCGCTCCAATTATCCTTCTATCCCTCAACCCCAACATCATCCTGGGGTTTACCTCCTGTAGATATAGTTTAACTAAAACACTAGATTGTGAATCTAACCATAGAGACTCA-CCACCTCTTATTTACCGAGAAAACT-CGCAAGGACTGCTAACCCATGTACCCGTACCTAAAATTACGGTTTTCTCAACTTTTAAAGGATAACAGCTATCCATTGACCTTAGGAGTCAAAAACATTGGTGCAACTCCAAATAAAAGTAATAATCATGCACACCCCCATCATTATAACAACCCTTATCTCCCTAACTCTCCCAATTTTTGCCACCCTCATCAACCCTTACAAAAAACGTCCATACCCAGATTACGTAAAAACAACCGTAATATATGCTTTCATCATCAGCCTCCCCTCAACAACTTTATTCATCTTCTCAAACCAAGAAACAACCATTTGGAGCTGACATTGAATAATGACCCAAACACTAGACCTAACGCTAAGCTT&lt;br /&gt;
M_mulatta		AAGCTTTTCTGGCGCAACCATCCTCATGATTGCTCACGGACTCACCTCTTCCATATATTTCTGCCTAGCCAATTCAAACTATGAACGCACTCACAACCGTACCATACTACTGTCCCGGGGACTTCAAATCCTACTTCCACTAACAGCTTTCTGATGATTAACAGCAAGCCTTACTAACCTTGCCCTACCCCCCACTATCAACCTACTAGGTGAACTCTTTGTAATCGCGACCTCATTCTCCTGGTCCCATATCACCATTATATTAACAGGATTTAACATACTAATTACGGCCCTCTACTCCCTCCACATATTCACCACAACACAACGAGGAGCACTCACACATCACATAATCAACATAAAACCCCCCTTCACACGAGAAAACATATTAATATTCATACACCTCGCTCCAATCATCCTCCTATCTCTCAACCCCAACATCATCCTGGGGTTTACTTCCTGTAGATATAGTTTAACTAAAACATTAGATTGTGAATCTAACCATAGAGACTTA-CCACCTCTTATTTACCGAGAAAACT-CGCGAGGACTGCTAACCCATGTATCCGTACCTAAAATTACGGTTTTCTCAACTTTTAAAGGATAACAGCTATCCATTGACCTTAGGAGTCAAAAATATTGGTGCAACTCCAAATAAAAGTAATAATCATGCACACCCCTATCATAATAACAACCCTTATCTCCCTAACTCTCCCAATTTTTGCCACCCTCATCAACCCTTACAAAAAACGTCCATACCCAGATTACGTAAAAACAACCGTAATATATGCTTTCATCATCAGCCTCCCCTCAACAACTTTATTCATCTTCTCAAACCAAGAAACAACCATTTGAAGCTGACATTGAATAATAACCCAAACACTAGACCTAACACTAAGCTT&lt;br /&gt;
M_fascicularis		AAGCTTCTCCGGCGCAACCACCCTTATAATCGCCCACGGGCTCACCTCTTCCATGTATTTCTGCTTGGCCAATTCAAACTATGAGCGCACTCATAACCGTACCATACTACTATCCCGAGGACTTCAAATTCTACTTCCATTGACAGCCTTCTGATGACTCACAGCAAGCCTTACTAACCTTGCCCTACCCCCCACTATTAATCTACTAGGCGAACTCTTTGTAATCACAACTTCATTTTCCTGATCCCATATCACCATTGTGTTAACGGGCCTTAATATACTAATCACAGCCCTCTACTCTCTCCACATGTTCATTACAGTACAACGAGGAACACTCACACACCACATAATCAATATAAAACCCCCCTTCACACGAGAAAACATATTAATATTCATACACCTCGCTCCAATTATCCTTCTATCTCTCAACCCCAACATCATCCTGGGGTTTACCTCCTGTAAATATAGTTTAACTAAAACATTAGATTGTGAATCTAACTATAGAGGCCTA-CCACTTCTTATTTACCGAGAAAACT-CGCAAGGACTGCTAATCCATGCCTCCGTACTTAAAACTACGGTTTCCTCAACTTTTAAAGGATAACAGCTATCCATTGACCTTAGGAGTCAAAAACATTGGTGCAACTCCAAATAAAAGTAATAATCATGCACACCCCCATCATAATAACAACCCTCATCTCCCTGACCCTTCCAATTTTTGCCACCCTCACCAACCCCTATAAAAAACGTTCATACCCAGACTACGTAAAAACAACCGTAATATATGCTTTTATTACCAGTCTCCCCTCAACAACCCTATTCATCCTCTCAAACCAAGAAACAACCATTTGGAGTTGACATTGAATAACAACCCAAACATTAGACCTAACACTAAGCTT&lt;br /&gt;
M_sylvanus		AAGCTTCTCCGGTGCAACTATCCTTATAGTTGCCCATGGACTCACCTCTTCCATATACTTCTGCTTGGCCAACTCAAACTACGAACGCACCCACAGCCGCATCATACTACTATCCCGAGGACTCCAAATCCTACTCCCACTAACAGCCTTCTGATGATTCACAGCAAGCCTTACTAATCTTGCTCTACCCTCCACTATTAATCTACTGGGCGAACTCTTCGTAATCGCAACCTCATTTTCCTGATCCCACATCACCATCATACTAACAGGACTGAACATACTAATTACAGCCCTCTACTCTCTTCACATATTCACCACAACACAACGAGGAGCGCTCACACACCACATAATTAACATAAAACCACCTTTCACACGAGAAAACATATTAATACTCATACACCTCGCTCCAATTATTCTTCTATCTCTTAACCCCAACATCATTCTAGGATTTACTTCCTGTAAATATAGTTTAATTAAAACATTAGACTGTGAATCTAACTATAGAAGCTTA-CCACTTCTTATTTACCGAGAAAACT-TGCAAGGACCGCTAATCCACACCTCCGTACTTAAAACTACGGTTTTCTCAACTTTTAAAGGATAACAGCTATCCATTGGCCTTAGGAGTCAAAAATATTGGTGCAACTCCAAATAAAAGTAATAATCATGTATACCCCCATCATAATAACAACTCTCATCTCCCTAACTCTTCCAATTTTCGCTACCCTTATCAACCCCAACAAAAAACACCTATATCCAAACTACGTAAAAACAGCCGTAATATATGCTTTCATTACCAGCCTCTCTTCAACAACTTTATATATATTCTTAAACCAAGAAACAATCATCTGAAGCTGGCACTGAATAATAACCCAAACACTAAGCCTAACATTAAGCTT&lt;br /&gt;
Saimiri_sciureus	AAGCTTCACCGGCGCAATGATCCTAATAATCGCTCACGGGTTTACTTCGTCTATGCTATTCTGCCTAGCAAACTCAAATTACGAACGAATTCACAGCCGAACAATAACATTTACTCGAGGGCTCCAAACACTATTCCCGCTTATAGGCCTCTGATGACTCCTAGCAAATCTCGCTAACCTCGCCCTACCCACAGCTATTAATCTAGTAGGAGAATTACTCACAATCGTATCTTCCTTCTCTTGATCCAACTTTACTATTATATTCACAGGACTTAATATACTAATTACAGCACTCTACTCACTTCATATGTATGCCTCTACACAGCGAGGTCCACTTACATACAGCACCAGCAATATAAAACCAATATTTACACGAGAAAATACGCTAATATTTATACATATAACACCAATCCTCCTCCTTACCTTGAGCCCCAAGGTAATTATAGGACCCTCACCTTGTAATTATAGTTTAGCTAAAACATTAGATTGTGAATCTAATAATAGAAGAATA-TAACTTCTTAATTACCGAGAAAGTG-CGCAAGAACTGCTAATTCATGCTCCCAAGACTAACAACTTGGCTTCCTCAACTTTTAAAGGATAGTAGTTATCCATTGGTCTTAGGAGCCAAAAACATTGGTGCAACTCCAAATAAAAGTAATA---ATACACTTCTCCATCACTCTAATAACACTAATTAGCCTACTAGCGCCAATCCTAGCTACCCTCATTAACCCTAACAAAAGCACACTATACCCGTACTACGTAAAACTAGCCATCATCTACGCCCTCATTACCAGTACCTTATCTATAATATTCTTTATCCTTACAGGCCAAGAATCAATAATTTCAAACTGACACTGAATAACTATCCAAACCATCAAACTATCCCTAAGCTT&lt;br /&gt;
;&lt;br /&gt;
end;&lt;br /&gt;
&lt;br /&gt;
begin mrbayes; &lt;br /&gt;
    set autoclose=yes nowarn=yes; &lt;br /&gt;
    lset nst=6 rates=gamma; &lt;br /&gt;
    mcmc nruns=1 ngen=10000 samplefreq=10; &lt;br /&gt;
end;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A SLURM batch script must be created to run your job.  The first script below shows an MPI-parallel run&lt;br /&gt;
of the above &#039;.nex&#039; input file.  This script selects 4 processors (cores) and allows SLURM to put them on any&lt;br /&gt;
compute node.  Note, that when running any parallel program one must be cognizant of the scaling &lt;br /&gt;
properties of its parallel algorithm; in other words, how much does a given job&#039;s running time drop&lt;br /&gt;
as one doubles the number of processors used.  All parallel programs arrive at a point of diminishing returns&lt;br /&gt;
that depends on the algorithm, the size of the problem being solved, and the performance features of the&lt;br /&gt;
system it is being run on.  We might have chosen to run this job on 8, 16, or 32 processors (cores),&lt;br /&gt;
but would only do so if the improvement in performance scales.  Improvements of less than 25% after a&lt;br /&gt;
doubling indicate that a reasonable maximum number of processors has been reached under that particular&lt;br /&gt;
set of circumstances.&lt;br /&gt;
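The doubling rule above can be checked from two wall-clock timings. This sketch uses hypothetical timings (100 s on 4 cores, 80 s on 8 cores), not measured data:&lt;br /&gt;

```shell
# Hypothetical timings for the same job at two core counts (not real data).
t_small=100   # seconds on N cores (assumed)
t_large=80    # seconds on 2N cores (assumed)
# speedup from doubling, and percent improvement relative to the smaller run
speedup=$(awk -v a="$t_small" -v b="$t_large" 'BEGIN { printf "%.2f", a / b }')
improvement=$(awk -v a="$t_small" -v b="$t_large" 'BEGIN { printf "%.0f", 100 * (a - b) / a }')
echo "speedup ${speedup}x, improvement ${improvement}%"
```

Here the doubling buys only a 20% improvement, under the 25% threshold, so the smaller core count is a reasonable maximum in this hypothetical case.&lt;br /&gt;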
&lt;br /&gt;
Here is the 4 processor MPI parallel SLURM batch script:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name MRBAYES_mpi&lt;br /&gt;
#SBATCH --nodes=4&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
#SBATCH --mem=1920&lt;br /&gt;
&lt;br /&gt;
# Find out name of master execution host (compute node)&lt;br /&gt;
echo -n &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; SBATCH Master compute node is: &amp;quot;&lt;br /&gt;
hostname&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to the working directory in SLURM&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
# Use &#039;mpirun&#039; and point to the MPI parallel executable to run&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; Begin MRBAYES MPI Run ...&amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
mpirun -np 4 /share/apps/mrbayes/default/bin/mb ./primates.nex &amp;gt; primates.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; End   MRBAYES MPI Run ...&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This script can be dropped into a file (say &#039;mrbayes_mpi.job&#039;) on ANDY and&lt;br /&gt;
run with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sbatch mrbayes_mpi.job&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This test case should take no more than a couple of minutes to run and will produce SLURM output and&lt;br /&gt;
error files beginning with the job name &#039;MRBAYES_mpi&#039;.  Other MrBayes specific outputs will also be&lt;br /&gt;
produced.  Details on the meaning of the SLURM script are covered above in this Wiki&#039;s SLURM section.  The&lt;br /&gt;
most important lines are the &#039;#SBATCH --nodes&#039;, &#039;--ntasks&#039;, and &#039;--mem&#039; requests.  These&lt;br /&gt;
instruct SLURM to select 4 resource &#039;chunks&#039;, each with 1 processor (core) and 1,920 MB of memory,&lt;br /&gt;
for the job (on ANDY as much as 2,880 MB might have been selected; on PENZIAS this number is 3,860 MB).&lt;br /&gt;
SLURM is then free to place this job wherever the least-used resources are found.&lt;br /&gt;
The master compute node that it finally selects to run your job will be printed in the SLURM output file by the &#039;hostname&#039; command. &lt;br /&gt;
As this is a parallel job, other compute nodes may also be called into service to complete this job.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center also provides a serial version of MrBayes.  A SLURM batch script for running the &lt;br /&gt;
serial version is easy to prepare from the above by making a few changes.  Here is a listing of the&lt;br /&gt;
differences between the above MPI script and the serial script:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
3,4c3,4&lt;br /&gt;
&amp;lt; #SBATCH --job-name MRBAYES_mpi&lt;br /&gt;
&amp;lt; #SBATCH --nodes=4&lt;br /&gt;
---&lt;br /&gt;
&amp;gt; #SBATCH --job-name MRBAYES_serial&lt;br /&gt;
&amp;gt; #SBATCH --nodes=1&lt;br /&gt;
16c16&lt;br /&gt;
&amp;lt; echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; Begin MRBAYES MPI Run ...&amp;quot;&lt;br /&gt;
---&lt;br /&gt;
&amp;gt; echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; Begin MRBAYES Serial Run ...&amp;quot;&lt;br /&gt;
18c18&lt;br /&gt;
&amp;lt; mpirun -np 4 /share/apps/mrbayes/default/bin/mb ./primates.nex&lt;br /&gt;
---&lt;br /&gt;
&amp;gt; /share/apps/mrbayes/default/bin/mb-serial ./primates.nex&lt;br /&gt;
20c20&lt;br /&gt;
&amp;lt; echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; End   MRBAYES MPI Run ...&amp;quot;&lt;br /&gt;
---&lt;br /&gt;
&amp;gt; echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; End   MRBAYES Serial Run ...&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Finally, it is possible to run MrBayes in GPU-accelerated mode on PENZIAS (only).  This is an experimental&lt;br /&gt;
version of the code and users are cautioned to check their results and note their performance to &lt;br /&gt;
be sure they are getting accurate answers in shorter time periods.   Nothing is worse in HPC than&lt;br /&gt;
going in the wrong direction, more slowly (this principle applies to NYC Taxi rides as well).  Here&lt;br /&gt;
is yet another script that will run the GPU-accelerated version of MrBayes (again, on PENZIAS only).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name MRBAYES_gpu&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --gres=gpu:1 &lt;br /&gt;
#SBATCH --constraint=kepler&lt;br /&gt;
&lt;br /&gt;
# Find out name of master execution host (compute node)&lt;br /&gt;
echo -n &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; SLURM Master compute node is: &amp;quot;&lt;br /&gt;
hostname&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to the working directory in SLURM&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
# Point to the GPU parallel executable to run&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; Begin MRBAYES GPU Run ...&amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
/share/apps/mrbayes/default/bin/mb-gpu ./primates.nex&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; End   MRBAYES GPU Run ...&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There are several differences worth pointing out.  First, the resource request lines ask for more than just&lt;br /&gt;
a processor and some memory: they also request a GPU (--gres=gpu:1) of a particular flavor (kepler).&lt;br /&gt;
Second, the name of the executable is now &#039;mb-gpu&#039;, which selects the GPU-accelerated&lt;br /&gt;
version of the code.&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=124</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=124"/>
		<updated>2022-10-27T19:32:41Z</updated>

		<summary type="html">&lt;p&gt;James: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:CUNY-HPCC-HEADER-LOGO.jpg]]&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
[[Image:hpcc-panorama3.png]]&lt;br /&gt;
&lt;br /&gt;
The City University of New York (CUNY) High Performance Computing Center (HPCC) is located on the&lt;br /&gt;
campus of the College of Staten Island, 2800 Victory Boulevard, Staten Island, New York 10314.  The HPCC&#039;s&lt;br /&gt;
goals are to: &lt;br /&gt;
&lt;br /&gt;
:*Support the scientific computing needs of CUNY faculty, their collaborators at other universities, their public and private sector partners, and CUNY students and research staff;&lt;br /&gt;
:*Create opportunities for the CUNY research community to develop new partnerships with the government and private sectors; and&lt;br /&gt;
:*Leverage the HPC Center capabilities to acquire additional research resources for its faculty and graduate students in existing and major new programs.&lt;br /&gt;
&lt;br /&gt;
==Organization of systems and data storage (architecture)==&lt;br /&gt;
&lt;br /&gt;
All user data and project data are kept on the Data Storage and Management System (&#039;&#039;&#039;DSMS&#039;&#039;&#039;), which is mounted only on the login node(s) of all servers. Consequently, no jobs can be started directly from &#039;&#039;&#039;DSMS&#039;&#039;&#039; storage.  Instead, all jobs must be submitted from a separate (fast but small) &#039;&#039;&#039;/scratch&#039;&#039;&#039; file system mounted on all computational nodes and on all login nodes.  As the name suggests, the &#039;&#039;&#039;/scratch&#039;&#039;&#039; file system is neither the home directory for accounts nor a place for long-term data preservation.  Users must use the &amp;quot;staging&amp;quot; procedure described below to ensure preservation of their data, codes, and parameter files. The figure below is a schematic of the environment.   &lt;br /&gt;
&lt;br /&gt;
Upon registering with the HPCC, every user will get 2 directories:&lt;br /&gt;
&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – this is temporary workspace on the HPC systems&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – space for “home directory”, i.e., storage space on the DSMS for program, scripts, and data&lt;br /&gt;
&lt;br /&gt;
:•	In some instances a user will also have use of disk space on the DSMS in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;projectid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; (IRods). &lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;/global/u/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory has a quota (see below for details), while &#039;&#039;&#039;/scratch/&amp;lt;userid&amp;gt;&#039;&#039;&#039; does not. However, the &#039;&#039;&#039;/scratch&#039;&#039;&#039; space is cleaned up following the rules described below. There are no guarantees of any kind that files in &#039;&#039;&#039;/scratch&#039;&#039;&#039; will survive hardware crashes or clean-up.  Access to all HPCC resources is provided by a bastion host called &#039;&#039;&#039;Chizen&#039;&#039;&#039;.  The Data Transfer Node, called &#039;&#039;&#039;Cea&#039;&#039;&#039;, allows file transfer from/to remote sites directly to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; or to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;.&lt;br /&gt;
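The staging pattern these two directories imply can be sketched as below. The user id &#039;jdoe&#039; and the project paths are hypothetical, and the copy commands are only echoed so the sketch is safe to run anywhere:&lt;br /&gt;

```shell
# Staging sketch: keep permanent copies under /global/u, run jobs from /scratch.
# "jdoe" and "myproject" are illustrative names, not a real account or project.
USERID="jdoe"
SRC="/global/u/${USERID}/myproject"    # quota-limited DSMS home (permanent)
WORK="/scratch/${USERID}/myproject"    # fast scratch space (may be cleaned up)
echo "stage in : cp -r ${SRC}/inputs ${WORK}/"   # before submitting the job
echo "stage out: cp -r ${WORK}/results ${SRC}/"  # after the job finishes
```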
&lt;br /&gt;
                                   [[Image:HPCC_Chart.png]]&lt;br /&gt;
&lt;br /&gt;
==HPC systems==&lt;br /&gt;
&lt;br /&gt;
The HPC Center operates a variety of architectures in order to support complex and demanding workflows.  The deployed systems include distributed memory (also referred to as “cluster”) computers, symmetric multiprocessor (also referred to as SMP) systems, and shared memory (also referred to as NUMA) machines.  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Computational Systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SMP&#039;&#039;&#039; servers have several processors (working under a single operating system) that &amp;quot;share everything&amp;quot;: all CPU cores access a common memory block via a shared bus or data path. SMP servers support all combinations of memory vs. CPU (up to the limits of the particular computer). SMP servers are commonly used to run sequential or thread-parallel (e.g., OpenMP) jobs, and they may or may not have GPUs. Currently, the HPCC operates 3 SMP servers named &#039;&#039;&#039;Math&#039;&#039;&#039;, &#039;&#039;&#039;Cryo&#039;&#039;&#039;, and &#039;&#039;&#039;Karle&#039;&#039;&#039;. &#039;&#039;&#039;Karle&#039;&#039;&#039; has no GPU and is used for visualization, visual analytics, and interactive MATLAB/Mathematica jobs. &#039;&#039;&#039;Math&#039;&#039;&#039; is a condominium server without a GPU as well. &#039;&#039;&#039;Cryo&#039;&#039;&#039; (a CPU+GPU server) is a specialized server designed to support large-scale multi-core, multi-GPU jobs. &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;cluster&#039;&#039;&#039; is a single system comprising a set of SMP servers interconnected with a high-performance network. Specific software coordinates programs on and across those servers in order to perform computationally intensive tasks. Each SMP member of the cluster is called a node. All nodes run independent copies of the same operating system (OS). Some or all of the nodes may incorporate GPUs.  The main cluster at the HPCC is a hybrid (CPU+GPU) cluster called &#039;&#039;&#039;Penzias&#039;&#039;&#039;.  Sixty-six (66) of the Penzias nodes have 2 x K20m GPUs, while the 3 fat nodes (nodes with a large number of CPU cores and a large amount of memory) do not have GPUs.   In addition, the HPCC operates the cluster &#039;&#039;&#039;Herbert&#039;&#039;&#039;, dedicated only to education. &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;distributed shared memory&#039;&#039;&#039; computer is a tightly coupled system in which the memory is physically distributed but logically unified as a single block. It resembles an SMP, but the possible number of CPU cores and amount of memory are far beyond the limitations of an SMP.  Because the memory is distributed, access times across the address space are non-uniform; hence this architecture is called Non-Uniform Memory Access (NUMA).  Similarly to SMPs, &#039;&#039;&#039;NUMA&#039;&#039;&#039; systems are typically used for applications such as data mining and decision support systems, in which processing can be parceled out to a number of processors that collectively work on common data. The HPCC operates a &#039;&#039;&#039;NUMA&#039;&#039;&#039; server called &#039;&#039;&#039;Appel&#039;&#039;&#039;.  This server does not have a GPU. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039; Infrastructure systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
o	The Master Head Node (&#039;&#039;&#039;MHN&#039;&#039;&#039;) is a redundant login node from which all jobs on all servers start. This server is not directly accessible from outside the CSI campus. &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Chizen&#039;&#039;&#039; is a redundant gateway server which provides access to the protected HPCC domain.&lt;br /&gt;
&lt;br /&gt;
o	 &#039;&#039;&#039;Cea&#039;&#039;&#039; is a file transfer node allowing transfer of files between users’ computers and the /scratch space or /global/u/&amp;lt;userid&amp;gt;. &#039;&#039;&#039;Cea&#039;&#039;&#039; is accessible directly (not only via &#039;&#039;&#039;Chizen&#039;&#039;&#039;), but allows only a limited set of shell commands.  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Table 1&#039;&#039;&#039; below provides a quick summary of the attributes of each of the systems available at the HPC Center.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!System&lt;br /&gt;
!Type&lt;br /&gt;
!Type of Jobs&lt;br /&gt;
!Nodes&lt;br /&gt;
!Cores/node &amp;amp; GPU&lt;br /&gt;
!Mem/node&lt;br /&gt;
!Mem/core&lt;br /&gt;
!Multi-core Processor&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Penzias&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Hybrid Cluster&lt;br /&gt;
|Sequential &amp;amp; Parallel jobs w/wo GPU&lt;br /&gt;
|66&lt;br /&gt;
|16 &lt;br /&gt;
2xK20m GPU, PCIe&lt;br /&gt;
|64 GB&lt;br /&gt;
|4 GB&lt;br /&gt;
|Sandy Bridge EP, 2.20 GHz&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Sequential &amp;amp; Parallel jobs&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |1&lt;br /&gt;
|24&lt;br /&gt;
|1500 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Haswell, 2.30 GHz&lt;br /&gt;
|-&lt;br /&gt;
|36&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|-&lt;br /&gt;
|24&lt;br /&gt;
|768 GB&lt;br /&gt;
|32 GB&lt;br /&gt;
|-&lt;br /&gt;
|Appel&lt;br /&gt;
|NUMA&lt;br /&gt;
|Massive Parallel, sequential&lt;br /&gt;
|1&lt;br /&gt;
|384&lt;br /&gt;
|11 TB&lt;br /&gt;
|28 GB&lt;br /&gt;
|Ivy Bridge, 3 GHz&lt;br /&gt;
|-&lt;br /&gt;
|Cryo&lt;br /&gt;
|SMP&lt;br /&gt;
|Sequential and Parallel jobs, with GPU&lt;br /&gt;
|1&lt;br /&gt;
|40 &lt;br /&gt;
8xV100 (32GB) GPU, SXM&lt;br /&gt;
|1500 GB&lt;br /&gt;
|37 GB&lt;br /&gt;
|Skylake, 2.40 GHz&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Blue Moon&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Hybrid Cluster&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Sequential and Parallel jobs w/wo GPU&lt;br /&gt;
|24&lt;br /&gt;
|32&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |192 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |6 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Skylake, 2.10 GHz&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|32 &lt;br /&gt;
2xV100(16GB) GPU, PCIe&lt;br /&gt;
|-&lt;br /&gt;
|Karle&lt;br /&gt;
|SMP&lt;br /&gt;
|Visualization, MATLAB/Mathematica&lt;br /&gt;
|1&lt;br /&gt;
|36*&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|Haswell, 2.30 GHz&lt;br /&gt;
|-&lt;br /&gt;
|Chizen&lt;br /&gt;
|Gateway&lt;br /&gt;
|2&lt;br /&gt;
| colspan=&amp;quot;5&amp;quot; rowspan=&amp;quot;2&amp;quot; |NA&lt;br /&gt;
|-&lt;br /&gt;
|MHN&lt;br /&gt;
|Login Nodes&lt;br /&gt;
|2&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Partitions and jobs==&lt;br /&gt;
The only way to submit job(s) to the HPCC servers is through the SLURM batch system. Every job, regardless of its type (interactive, batch, serial, parallel, etc.), must be submitted via SLURM, which allocates the requested resources on the proper server and starts the job(s) according to a predefined, strict fair-share policy. Computational resources (CPU cores, memory, GPUs) are organized into partitions. The main partition is called production. This is a routing partition which distributes jobs to several sub-partitions depending on each job&#039;s requirements; thus a serial job submitted to &#039;&#039;&#039;production&#039;&#039;&#039; will land in the &#039;&#039;&#039;partsequential&#039;&#039;&#039; partition. PBS Pro scripts should never be used, and all existing PBS scripts must be converted to SLURM before use. The table below shows the limits of the partitions.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Partition&lt;br /&gt;
!Max cores/job&lt;br /&gt;
!Max jobs/user&lt;br /&gt;
!Total cores/group&lt;br /&gt;
!Time limits&lt;br /&gt;
|-&lt;br /&gt;
|production&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|-&lt;br /&gt;
|partedu&lt;br /&gt;
|16&lt;br /&gt;
|2&lt;br /&gt;
|216&lt;br /&gt;
|72 Hours&lt;br /&gt;
|-&lt;br /&gt;
|partcryo&lt;br /&gt;
|40&lt;br /&gt;
|40&lt;br /&gt;
|40&lt;br /&gt;
|240 Hours&lt;br /&gt;
|-&lt;br /&gt;
|partmath&lt;br /&gt;
|128&lt;br /&gt;
|128&lt;br /&gt;
|128&lt;br /&gt;
|240 Hours&lt;br /&gt;
|-&lt;br /&gt;
|partmatlab&lt;br /&gt;
|1972&lt;br /&gt;
|50&lt;br /&gt;
|1972&lt;br /&gt;
|240 Hours&lt;br /&gt;
|-&lt;br /&gt;
|partdev&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|4 Hours&lt;br /&gt;
|}&lt;br /&gt;
o	&#039;&#039;&#039;production&#039;&#039;&#039; is the main partition, with resources assigned across all servers (except Math and Cryo). It is a routing partition, so jobs are placed in the proper sub-partition automatically. Users may submit sequential, thread-parallel or distributed-parallel jobs, with or without GPUs.&lt;br /&gt;
 &lt;br /&gt;
o	&#039;&#039;&#039;partedu&#039;&#039;&#039; is a partition reserved for education. Its assigned resources are on the educational server Herbert. Partedu is accessible only to students (graduate and/or undergraduate) and professors who are registered for a class supported by HPCC. Access to this partition is limited to the duration of the class. &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;partcryo&#039;&#039;&#039; is the partition used to start jobs on the Cryo server. Users whose projects require and/or benefit from the availability of 8 GPUs interconnected via the SXM interface (not PCIe) must apply for access to this partition at hpchelp@csi.cuny.edu. &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;partmatlab&#039;&#039;&#039; allows users to run MATLAB&#039;s distributed Parallel Server across the main cluster. Note, however, that Parallel Computing Toolbox programs can also be submitted via the production partition, but only as thread-parallel jobs. &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;partdev&#039;&#039;&#039; is dedicated to development. All HPCC users have access to this partition, whose assigned resources are one computational node with 16 cores, 64 GB of memory and 2 GPUs (K20m). This partition has a time limit of 4 hours.&lt;br /&gt;
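&lt;br /&gt;
The submission workflow above can be sketched as a minimal batch script for the routing partition. The snippet below is an illustrative sketch, not an official HPCC template: the program name and resource values are placeholders, and the final line only greps the directives as a sanity check before the file is handed to &#039;sbatch&#039;.&lt;br /&gt;

```shell
#!/bin/sh
# Sketch of a minimal batch script for the 'production' routing partition.
# Values are placeholders; keep requests within the partition limits table.
{
  echo '#!/bin/bash'
  echo '#SBATCH --partition production'  # routing partition picks the sub-partition
  echo '#SBATCH --job-name example'
  echo '#SBATCH --ntasks=16'             # well under the 128 cores/job cap
  echo '#SBATCH --time=240:00:00'        # the production time limit
  echo 'cd $SLURM_SUBMIT_DIR'
  echo 'srun ./my_program'               # './my_program' is a placeholder
} > myjob.sh

# quick sanity check before running 'sbatch myjob.sh'
grep -c 'SBATCH' myjob.sh
```

If the generated file looks right, it would be submitted with &#039;sbatch myjob.sh&#039;.&lt;br /&gt;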
&lt;br /&gt;
== Hours of Operation ==&lt;br /&gt;
The second and fourth Tuesday mornings of each month, from 8:00 AM to 12:00 PM, are normally reserved (but not always used) for scheduled maintenance.  Please plan accordingly.  &amp;lt;br/&amp;gt;&lt;br /&gt;
Unplanned maintenance to remedy system related problems may be scheduled as needed.  Reasonable attempts will be made to inform users running on those systems when these needs arise.&lt;br /&gt;
&lt;br /&gt;
== User Support ==&lt;br /&gt;
Users are encouraged to read this Wiki carefully.  In particular, the sections on compiling and running&lt;br /&gt;
parallel programs, and the section on the SLURM batch queueing system will give you the essential&lt;br /&gt;
knowledge needed to use the CUNY HPC Center systems.  We have strived to maintain the most uniform&lt;br /&gt;
user applications environment possible across the Center&#039;s systems to ease the transfer of applications&lt;br /&gt;
and run scripts among them.  Still, there are some differences, particularly with the SGI (ANDY) and Cray (SALK)&lt;br /&gt;
systems.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY&lt;br /&gt;
community in parallel programming techniques, HPC computing architecture, and the essentials of using our&lt;br /&gt;
systems. Please follow our mailings on the subject and feel free to inquire about such courses.  We regularly&lt;br /&gt;
schedule training visits and classes at the various CUNY campuses.  Please let us know if such a training visit&lt;br /&gt;
is of interest.  In the past, topics have included an overview of parallel programming, GPU programming and&lt;br /&gt;
architecture, using the evolutionary biology software at the HPC Center,  the SLURM queueing system at the&lt;br /&gt;
CUNY HPC Center, Mixed GPU-MPI and OpenMP programming, etc.  Staff has also presented guest lectures&lt;br /&gt;
at formal classes throughout the CUNY campuses.  &lt;br /&gt;
&lt;br /&gt;
Users with further questions or requiring immediate assistance in use of the systems should create a ticket using their HPC account login at:&lt;br /&gt;
&lt;br /&gt;
   [https://hpchelp.csi.cuny.edu hpchelp.csi.cuny.edu]&lt;br /&gt;
&lt;br /&gt;
If you have problems accessing your account and cannot login to the ticketing service, please send an email to:&lt;br /&gt;
&lt;br /&gt;
  [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu] &lt;br /&gt;
&lt;br /&gt;
== Warnings and modes of operation ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. hpchelp@csi.cuny.edu is for questions and account-related communication &#039;&#039;&#039;only&#039;&#039;&#039; and does not accept tickets. For tickets, please use the ticketing system mentioned above. This ensures that the staff member with the most appropriate skill set and job-related responsibility will respond to your questions. During the business week you should expect a response within 48 hours, quite often the same day. During the weekend you may not get any response. &lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;&#039;E-mails to hpchelp@csi.cuny.edu must have a valid CUNY e-mail as the reply address.&#039;&#039;&#039; Messages originating from public mailers (Google, Hotmail, etc.) are filtered out.&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;&#039;Do not send questions to individual CUNY HPC Center staff members directly.&#039;&#039;&#039;  These will be returned to the sender with a polite request to submit a ticket or email the Helpline.  This applies to replies to initial questions as well.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff members are focused on providing high quality support to its user community, but compared&lt;br /&gt;
to other HPC Centers of similar size &#039;&#039;&#039;our staff is extremely lean&#039;&#039;&#039;.  Please make full use of the tools that we have provided (especially&lt;br /&gt;
the Wiki), and feel free to offer suggestions for improved service.  We hope and expect your experience in using&lt;br /&gt;
our systems will be predictably good and productive.&lt;br /&gt;
&lt;br /&gt;
== User Manual ==&lt;br /&gt;
&lt;br /&gt;
An old version of the user manual can be downloaded at: http://cunyhpc.csi.cuny.edu/publications/User_Manual.pdf.  Note that this manual provides PBS batch scripts as examples. CUNY-HPCC currently uses SLURM, so users must check the brief SLURM manual distributed with new accounts or ask CUNY-HPCC for a copy.&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=NAMD&amp;diff=123</id>
		<title>NAMD</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=NAMD&amp;diff=123"/>
		<updated>2022-10-27T19:32:36Z</updated>

		<summary type="html">&lt;p&gt;James: Text replacement - &amp;quot;[pP][bB][sS]&amp;quot; to &amp;quot;SLURM&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The main server for molecular dynamics calculations is PENZIAS, which supports both GPU and non-GPU versions of NAMD.&lt;br /&gt;
However, MPI-only (no GPU support) parallel versions of NAMD are also installed on SALK and ANDY. &lt;br /&gt;
&lt;br /&gt;
In order to use the code, always load the NAMD module first. &lt;br /&gt;
The following line loads the application environment for NAMD on PENZIAS. No GPU is requested in the SLURM script below. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load namd&lt;br /&gt;
&amp;lt;/pre&amp;gt;  &lt;br /&gt;
&lt;br /&gt;
A batch submit script for NAMD that runs the CPU-only version on ANDY and PENZIAS using &#039;mpirun&#039; &lt;br /&gt;
on 16 processors, 4 per compute node, follows. Please note that on PENZIAS the partition is called production.  &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production_qdr&lt;br /&gt;
#SBATCH --job-name NAMD_MPI&lt;br /&gt;
#SBATCH --nodes=4&lt;br /&gt;
#SBATCH --ntasks=16&lt;br /&gt;
#SBATCH --ntasks-per-node=4&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Find out name of master execution host (compute node)&lt;br /&gt;
echo -n &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; SLURM Master compute node is: &amp;quot;&lt;br /&gt;
hostname&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to the working directory in SLURM&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
# Use &#039;mpirun&#039; and point to MPI parallel executable to run &lt;br /&gt;
&lt;br /&gt;
mpirun -np 16 namd2  ./hivrt.conf &amp;gt; hivrt_mpi.out&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; End   Non-Threaded NAMD MPI Parallel Run ...&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In order to use &#039;&#039;&#039;GPU enabled&#039;&#039;&#039; NAMD versions users must use PENZIAS.  First load the proper module file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load namd/2.10&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a similar job that instead uses 4 CPU cores for the bonded interactions and an additional 4 GPUs for the non-bonded&lt;br /&gt;
interactions, the following script could be used. Please note that this setup is valid &#039;&#039;&#039;only&#039;&#039;&#039; on PENZIAS, since it is the only &lt;br /&gt;
server which has GPUs.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name NAMD_GPU&lt;br /&gt;
#SBATCH --nodes=2&lt;br /&gt;
#SBATCH --ntasks=4&lt;br /&gt;
#SBATCH --ntasks-per-node=2&lt;br /&gt;
#SBATCH --gres=gpu:2&lt;br /&gt;
&lt;br /&gt;
# Find out name of master execution host (compute node)&lt;br /&gt;
echo -n &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; SLURM Master compute node is: &amp;quot;&lt;br /&gt;
hostname&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to the working directory in SLURM&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
# Use &#039;mpirun&#039; and point to the Non-Threaded MPI parallel executable to run&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; Begin Non-Threaded NAMD MPI-GPU Parall Run ...&amp;quot;&lt;br /&gt;
mpirun -np 4 namd2  +idlepoll +devices 0,1,0,1   ./hivrt.conf &amp;gt; hivrt_gpu.out&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; End    Non-Threaded NAMD MPI-GPU Parall Run ...&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Job submission on SALK via SLURM is somewhat different.  First there is &#039;&#039;&#039;no module file.&#039;&#039;&#039;  The script below shows how to run 16 processor (core) job &lt;br /&gt;
on SALK:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name NAMD_MPI&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=16&lt;br /&gt;
#SBATCH --mem=2048&lt;br /&gt;
#SBATCH --output=NAMD_MPI.out&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Find out name of master execution host (compute node)&lt;br /&gt;
echo -n &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; SLURM Master compute node is: &amp;quot;&lt;br /&gt;
hostname&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to the working directory in SLURM&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Use &#039;aprun&#039; and point to the Non-Threaded MPI parallel executable to run&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; Begin NAMD Non-Threaded MPI Run ...&amp;quot;&lt;br /&gt;
aprun -n 16 -N 16 -cc cpu /share/apps/namd/default/CRAY-XT-g++/namd2  ./hivrt.conf &amp;gt; hivrt.out&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; End   NAMD Non-Threaded MPI Run ...&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The most important difference to note is that on SALK the &#039;mpirun&#039; command is&lt;br /&gt;
replaced with Cray&#039;s &#039;aprun&#039; command.  The &#039;aprun&#039; command is used to start all&lt;br /&gt;
jobs on SALK&#039;s compute nodes and mediates the interaction between the SLURM script&#039;s&lt;br /&gt;
resource requests and the ALPS resource manager on the Cray.  SALK users should familiarize&lt;br /&gt;
themselves with &#039;aprun&#039; and its options by reading &#039;man aprun&#039; on SALK.  Users cannot&lt;br /&gt;
request more resources on their &#039;aprun&#039; command-lines than are defined by the SLURM&lt;br /&gt;
script&#039;s resource request lines.  There is useful discussion elsewhere on the Wiki about&lt;br /&gt;
the interaction between SLURM and ALPS as mediated by the &#039;aprun&#039; command and the &lt;br /&gt;
error message generated when there is a mismatch.&lt;br /&gt;
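&lt;br /&gt;
The arithmetic behind that mismatch can be checked by hand before submitting. The sketch below is illustrative only (the numbers are hypothetical, not read from a real script): it simply verifies that the total PEs on the &#039;aprun&#039; line (&#039;-n&#039;) fit within the nodes requested by the #SBATCH lines times the per-node PE count (&#039;-N&#039;).&lt;br /&gt;

```shell
#!/bin/sh
# Hypothetical request values; on SALK take them from your own script.
SBATCH_NODES=1        # from the '#SBATCH --nodes=...' line
APRUN_PER_NODE=16     # 'aprun -N' (PEs per node)
APRUN_TOTAL=16        # 'aprun -n' (total PEs)

# the SLURM allocation provides at most nodes * PEs-per-node slots
MAX=$((SBATCH_NODES * APRUN_PER_NODE))
if [ "$APRUN_TOTAL" -le "$MAX" ]; then
    echo "ok: aprun -n $APRUN_TOTAL fits the $MAX PEs SLURM reserved"
else
    echo "mismatch: aprun asks for more PEs than SLURM reserved"
    exit 1
fi
```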
&lt;br /&gt;
For any of these jobs to run, all the required auxiliary files must be present in the directory&lt;br /&gt;
from which the job is run.&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=RAXML&amp;diff=122</id>
		<title>RAXML</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=RAXML&amp;diff=122"/>
		<updated>2022-10-27T19:32:33Z</updated>

		<summary type="html">&lt;p&gt;James: Text replacement - &amp;quot;[pP][bB][sS]&amp;quot; to &amp;quot;SLURM&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Examples of running both parallel and serial jobs are presented below. More information can be found here [http://www.exelixis-lab.org]&lt;br /&gt;
&lt;br /&gt;
To run RAxML first a PHYLIP file of aligned DNA or amino-acid sequences similar to the one shown&lt;br /&gt;
here must be created.  This file, &#039;alg.phy&#039;, is in interleaved format: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
5 60&lt;br /&gt;
Tax1        CCATCTCACGGTCGGTACGATACACCTGCTTTTGGCAG&lt;br /&gt;
Tax2        CCATCTCACGGTCAGTAAGATACACCTGCTTTTGGCGG&lt;br /&gt;
Tax3        CCATCTCCCGCTCAGTAAGATACCCCTGCTGTTGGCGG&lt;br /&gt;
Tax4        TCATCTCATGGTCAATAAGATACTCCTGCTTTTGGCGG&lt;br /&gt;
Tax5        CCATCTCACGGTCGGTAAGATACACCTGCTTTTGGCGG&lt;br /&gt;
&lt;br /&gt;
GAAATGGTCAATATTACAAGGT&lt;br /&gt;
GAAATGGTCAACATTAAAAGAT&lt;br /&gt;
GAAATCGTCAATATTAAAAGGT&lt;br /&gt;
GAAATGGTCAATCTTAAAAGGT&lt;br /&gt;
GAAATGGTCAATATTAAAAGGT&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For more detail about PHYLIP-formatted files, please look at the RAxML manual&lt;br /&gt;
here [http://sco.h-its.org/exelixis/oldPage/RAxML-Manual.7.0.4.pdf] at the web site&lt;br /&gt;
referenced above.  There is also a tutorial here [http://sco.h-its.org/exelixis/hands-On.html]&lt;br /&gt;
&lt;br /&gt;
To include all required environment variables and the path to the RAxML executable, run&lt;br /&gt;
the module load command (the modules utility is discussed in detail above):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load raxml&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Next create a SLURM batch script.  Below is an example script that will run the serial version&lt;br /&gt;
of RAxML.  The program options &#039;&#039;&#039;-m&#039;&#039;&#039;,&#039;&#039;&#039;-n&#039;&#039;&#039;,&#039;&#039;&#039;-s&#039;&#039;&#039; are all required.  In order, they specify&lt;br /&gt;
the substitution model (-m), the output file name (-n), and the sequence file name (-s).&lt;br /&gt;
Additional options are discussed in the manual.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name RAXML_serial&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --mem=2880&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Find out name of master execution host (compute node)&lt;br /&gt;
echo -n &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; SLURM Master compute node is: &amp;quot;&lt;br /&gt;
hostname&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to the working directory in SLURM&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
# Just point to the serial executable to run&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; Begin RAXML Serial Run ...&amp;quot;&lt;br /&gt;
raxmlHPC -y -m GTRCAT -n TEST1 -p 12345 -s alg.phy &amp;gt; raxml_ser.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; End   RAXML Serial Run ...&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This script can be dropped into a file (say raxml_serial.job) and submitted to SLURM with&lt;br /&gt;
the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sbatch raxml_serial.job&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
RAxML produces the following output files:&lt;br /&gt;
:#Parsimony starting tree is written to &#039;&#039;&#039;RAxML_parsimonyTree.TEST1&#039;&#039;&#039;.&lt;br /&gt;
:#Final tree is written to &#039;&#039;&#039;RAxML_result.TEST1&#039;&#039;&#039;.&lt;br /&gt;
:#Execution Log File is written to &#039;&#039;&#039;RAxML_log.TEST1&#039;&#039;&#039;.&lt;br /&gt;
:#Execution information file is written to &#039;&#039;&#039;RAxML_info.TEST1&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
RAxML is also available in an MPI-parallel version called raxmlHPC-MPI.  The MPI-parallelized version&lt;br /&gt;
can be run on all types of clusters to perform rapid parallel bootstraps, or multiple inferences on &lt;br /&gt;
the original alignment. The MPI version is intended for large production runs (i.e. 100 or 1,000 bootstraps).&lt;br /&gt;
You can also perform multiple inferences on larger datasets in parallel to find a best-known ML tree&lt;br /&gt;
for your dataset.  Finally, the novel rapid BS algorithm and the associated ML search have also been&lt;br /&gt;
parallelized with MPI. &lt;br /&gt;
&lt;br /&gt;
The following MPI script selects 4 processors (cores) and allows SLURM to place them on any&lt;br /&gt;
compute node.  Note that when running any parallel program one must be cognizant of the scaling &lt;br /&gt;
properties of its parallel algorithm; in other words, how much does a given job&#039;s run time drop&lt;br /&gt;
as one doubles the number of processors used.  All parallel programs arrive at a point of diminishing returns&lt;br /&gt;
that depends on the algorithm, the size of the problem being solved, and the performance characteristics of the &lt;br /&gt;
system on which the job is run.  We might have chosen to run this job on 8, 16, or 32 processors (cores),&lt;br /&gt;
but would only do so if the performance continued to scale.  An improvement of less than 25% after a&lt;br /&gt;
doubling is an indication that a reasonable maximum number of processors has been reached under that particular set&lt;br /&gt;
of circumstances.&lt;br /&gt;
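&lt;br /&gt;
The 25% rule of thumb just described can be applied mechanically to a pair of timings. The wall-clock times below are hypothetical, purely for illustration:&lt;br /&gt;

```shell
#!/bin/sh
# Hypothetical wall-clock times before and after doubling the core count.
t_before=1000   # seconds on, say, 8 cores
t_after=850     # seconds on 16 cores

# percentage improvement gained by the doubling
gain=$(awk -v a="$t_before" -v b="$t_after" 'BEGIN { printf "%d", 100*(a-b)/a }')
echo "improvement from doubling: ${gain}%"
if [ "$gain" -lt 25 ]; then
    echo "under 25%: the smaller core count is a reasonable maximum"
fi
```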
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name RAXML_mpi&lt;br /&gt;
#SBATCH --ntasks=4&lt;br /&gt;
#SBATCH --mem-per-cpu=2880&lt;br /&gt;
&lt;br /&gt;
# Find out name of master execution host (compute node)&lt;br /&gt;
echo -n &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; SLURM Master compute node is: &amp;quot;&lt;br /&gt;
hostname&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to the working directory in SLURM&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
# Use &#039;mpirun&#039; and point to the MPI parallel executable to run&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; Begin RAXML MPI Run ...&amp;quot;&lt;br /&gt;
mpirun -np 4 raxmlHPC-MPI -m GTRCAT -n TEST2 -s alg.phy -N 4 &amp;gt; raxml_mpi.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; End   RAXML MPI Run ...&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This test case should take no more than a minute to run and will produce SLURM output and error&lt;br /&gt;
files beginning with the job name &#039;RAXML_mpi&#039;.  Other RAxML-specific outputs will also be produced.&lt;br /&gt;
Details on the meaning of the SLURM script are covered above in this Wiki&#039;s SLURM section.  The most important&lt;br /&gt;
lines are &#039;#SBATCH --ntasks=4&#039; and &#039;#SBATCH --mem-per-cpu=2880&#039;, which instruct SLURM to allocate&lt;br /&gt;
4 processors (cores) for the job, each with 2,880 MB of memory, and to place them&lt;br /&gt;
wherever the least-used resources are found (i.e. freely).  &lt;br /&gt;
&lt;br /&gt;
The master compute node that it finally selects to run your job will be printed in the SLURM output file by&lt;br /&gt;
the &#039;hostname&#039; command.  As this is a parallel job, other compute nodes may also be called into service to&lt;br /&gt;
complete this job.  Note that the name of the parallel executable is &#039;raxmlHPC-MPI&#039; and that in this parallel&lt;br /&gt;
run we complete four inferences (-N 4). The expression &#039;2&amp;gt;&amp;amp;1&#039; combines Unix standard output from the&lt;br /&gt;
program with Unix standard error.  Users should always explicitly specify the name of the application&#039;s output file&lt;br /&gt;
in this way to ensure that it is written directly into the user&#039;s working directory which has much more disk&lt;br /&gt;
space than the SLURM spool directory on /var.&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=121</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=121"/>
		<updated>2022-10-27T19:32:33Z</updated>

		<summary type="html">&lt;p&gt;James: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:CUNY-HPCC-HEADER-LOGO.png]]&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
[[Image:hpcc-panorama3.png]]&lt;br /&gt;
&lt;br /&gt;
The City University of New York (CUNY) High Performance Computing Center (HPCC) is located on the&lt;br /&gt;
campus of the College of Staten Island, 2800 Victory Boulevard, Staten Island, New York 10314.  HPCC&lt;br /&gt;
goals are to: &lt;br /&gt;
&lt;br /&gt;
:*Support the scientific computing needs of CUNY faculty, their collaborators at other universities, and their public and private sector partners, and CUNY students and research staff.&lt;br /&gt;
:*Create opportunities for the CUNY research community to develop new partnerships with the government and private sectors; and&lt;br /&gt;
:*Leverage the HPC Center capabilities to acquire additional research resources for its faculty and graduate students in existing and major new programs.&lt;br /&gt;
&lt;br /&gt;
==Organization of systems and data storage (architecture)==&lt;br /&gt;
&lt;br /&gt;
All user data and project data are kept on the Data Storage and Management System (&#039;&#039;&#039;DSMS&#039;&#039;&#039;), which is mounted only on the login node(s) of all servers. Consequently, no jobs can be started directly from &#039;&#039;&#039;DSMS&#039;&#039;&#039; storage.  Instead, all jobs must be submitted from a separate (fast but small) &#039;&#039;&#039;/scratch&#039;&#039;&#039; file system mounted on all computational nodes and on all login nodes.  As the name suggests, the &#039;&#039;&#039;/scratch&#039;&#039;&#039; file system is neither a home directory for accounts nor a place for long-term data preservation.  Users must use the &amp;quot;staging&amp;quot; procedure described below to ensure preservation of their data, codes and parameter files. The figure below is a schematic of the environment.   &lt;br /&gt;
&lt;br /&gt;
Upon registering with HPCC, every user will get 2 directories:&lt;br /&gt;
&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – this is temporary workspace on the HPC systems&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – space for “home directory”, i.e., storage space on the DSMS for program, scripts, and data&lt;br /&gt;
&lt;br /&gt;
:•	In some instances a user will also have use of disk space on the DSMS in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;projectid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; (IRods). &lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;/global/u/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory has a quota (see below for details) while &#039;&#039;&#039;/scratch/&amp;lt;userid&amp;gt;&#039;&#039;&#039; does not. However, the &#039;&#039;&#039;/scratch&#039;&#039;&#039; space is cleaned up following the rules described below, and there are no guarantees of any kind that files in &#039;&#039;&#039;/scratch&#039;&#039;&#039; will be preserved through hardware crashes or clean-ups.  Access to all HPCC resources is provided by a bastion host called &#039;&#039;&#039;chizen&#039;&#039;&#039;.  The Data Transfer Node called &#039;&#039;&#039;Cea&#039;&#039;&#039; allows file transfer from/to remote sites directly to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; or to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;.&lt;br /&gt;
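&lt;br /&gt;
The staging pattern implied above (copy inputs from the quota-controlled DSMS home to /scratch, run the job there, then copy results back) can be sketched as follows. This is an illustrative sketch only: to stay runnable anywhere it builds stand-in directories under the current directory, while on the real systems the two roots are /global/u/&amp;lt;userid&amp;gt; and /scratch/&amp;lt;userid&amp;gt;.&lt;br /&gt;

```shell
#!/bin/sh
# Sketch of the stage-in / stage-out pattern. GLOBAL and SCRATCH are
# stand-ins built under the current directory so the commands can be
# tried anywhere; on the real systems they are /global/u and /scratch.
uid="${USER:-someuser}"
GLOBAL="$PWD/demo/global/u/$uid"    # quota-controlled long-term storage
SCRATCH="$PWD/demo/scratch/$uid"    # fast, no quota, cleaned periodically

mkdir -p "$GLOBAL/project1" "$SCRATCH"
echo "input data" > "$GLOBAL/project1/input.dat"

# stage in: copy job inputs from DSMS storage to scratch before submitting
cp -r "$GLOBAL/project1" "$SCRATCH/"

# ... run the job from $SCRATCH/project1 via SLURM ...

# stage out: copy results back, since /scratch offers no preservation
cp -r "$SCRATCH/project1" "$GLOBAL/"
ls "$SCRATCH/project1"
```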
&lt;br /&gt;
                                   [[Image:HPCC_Chart.png]]&lt;br /&gt;
&lt;br /&gt;
==HPC systems==&lt;br /&gt;
&lt;br /&gt;
The HPC Center operates a variety of architectures in order to support complex and demanding workflows.  The deployed systems include distributed-memory computers (also referred to as &amp;quot;clusters&amp;quot;), symmetric multiprocessors (also referred to as SMP) and distributed shared-memory machines (also referred to as NUMA machines).  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Computational Systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SMP&#039;&#039;&#039; servers have several processors (working under a single operating system) which &amp;quot;share everything&amp;quot;: all CPU cores access a common memory block via a shared bus or data path. SMP servers support all combinations of memory vs. CPU (up to the limits of the particular computer). SMP servers are commonly used to run sequential or thread-parallel (e.g. OpenMP) jobs, and they may or may not have GPUs. Currently, HPCC operates 3 SMP servers named &#039;&#039;&#039;Math, Cryo&#039;&#039;&#039; and &#039;&#039;&#039;Karle&#039;&#039;&#039;. Karle does not have GPUs and is used for visualization, visual analytics and interactive MATLAB/Mathematica jobs. &#039;&#039;&#039;Math&#039;&#039;&#039; is a condominium server, also without GPUs. Cryo (a CPU+GPU server) is a specialized server designed to support large-scale multi-core, multi-GPU jobs. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Cluster&#039;&#039;&#039; is defined as a single system comprising a set of SMP servers interconnected with a high-performance network. Specific software coordinates programs on and/or across those servers in order to perform computationally intensive tasks. Each SMP member of the cluster is called a node. All nodes run independent copies of the same operating system (OS). Some or all of the nodes may incorporate GPUs. The main cluster at HPCC is a hybrid (CPU+GPU) cluster called &#039;&#039;&#039;Penzias&#039;&#039;&#039;. Sixty-six (66) of the Penzias nodes have 2 x K20m GPUs, while the 3 fat nodes (nodes with a large number of CPU cores and a large amount of memory) do not have GPUs. In addition, HPCC operates the cluster &#039;&#039;&#039;Herbert&#039;&#039;&#039;, which is dedicated solely to education. &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;distributed shared memory&#039;&#039;&#039; computer is a tightly coupled server in which the memory is physically distributed but logically unified as a single block. The system resembles an SMP, but the possible number of cpu cores and amount of memory is far beyond the limitations of SMP.  Because the memory is distributed, access times across the address space are non-uniform; hence this architecture is called Non Uniform Memory Access (NUMA) architecture.  Similarly to SMP, &#039;&#039;&#039;NUMA&#039;&#039;&#039; systems are typically used for applications such as data mining and decision support systems in which processing can be parceled out to a number of processors that collectively work on common data. HPCC operates a &#039;&#039;&#039;NUMA&#039;&#039;&#039; server called &#039;&#039;&#039;Appel&#039;&#039;&#039;.  This server does not have GPU. &lt;br /&gt;
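&lt;br /&gt;
On any of these Linux systems the cpu and memory topology can be inspected directly from the command line, for example (&#039;numactl&#039; may not be installed on every node):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
nproc                  # number of cpu-cores visible to the OS&lt;br /&gt;
lscpu | grep -i numa   # NUMA node count and per-node cpu lists&lt;br /&gt;
numactl --hardware     # per-node memory sizes on NUMA systems&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;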
&lt;br /&gt;
&#039;&#039; Infrastructure systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
o	Master Head Node (&#039;&#039;&#039;MHN&#039;&#039;&#039;) is a redundant login node from which all jobs on all servers start. This server is not directly accessible from outside the CSI campus. &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Chizen&#039;&#039;&#039; is a redundant gateway server which provides access to the protected HPCC domain.&lt;br /&gt;
&lt;br /&gt;
o	 &#039;&#039;&#039;Cea&#039;&#039;&#039; is a file transfer node allowing transfer of files between users’ computers and /scratch space or /global/u/&amp;lt;userid&amp;gt;. &#039;&#039;&#039;Cea&#039;&#039;&#039; is accessible directly (not only via &#039;&#039;&#039;Chizen&#039;&#039;&#039;), but allows only a limited set of shell commands.  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Table 1&#039;&#039;&#039; below provides a quick summary of the attributes of each of the systems available at the HPC Center.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!System&lt;br /&gt;
!Type&lt;br /&gt;
!Type of Jobs&lt;br /&gt;
!Nodes&lt;br /&gt;
!Cores/node &amp;amp; GPU&lt;br /&gt;
!Mem/node&lt;br /&gt;
!Mem/core&lt;br /&gt;
!Multi-core Processor&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Penzias&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Hybrid Cluster&lt;br /&gt;
|Sequential &amp;amp; Parallel jobs w/wo GPU&lt;br /&gt;
|66&lt;br /&gt;
|16 &lt;br /&gt;
2xK20m GPU, PCIe&lt;br /&gt;
|64 GB&lt;br /&gt;
|4 GB&lt;br /&gt;
|Sandy Bridge, EP 2.20 GHz&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Sequential &amp;amp; Parallel jobs&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |1&lt;br /&gt;
|24&lt;br /&gt;
|1500 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Haswell, 2.30 GHz&lt;br /&gt;
|-&lt;br /&gt;
|36&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|-&lt;br /&gt;
|24&lt;br /&gt;
|768 GB&lt;br /&gt;
|32 GB&lt;br /&gt;
|-&lt;br /&gt;
|Appel&lt;br /&gt;
|NUMA&lt;br /&gt;
|Massive Parallel, sequential&lt;br /&gt;
|1&lt;br /&gt;
|384&lt;br /&gt;
|11 TB&lt;br /&gt;
|28 GB&lt;br /&gt;
|Ivy Bridge, 3 GHz&lt;br /&gt;
|-&lt;br /&gt;
|Cryo&lt;br /&gt;
|SMP&lt;br /&gt;
|Sequential and Parallel jobs, with GPU&lt;br /&gt;
|1&lt;br /&gt;
|40&lt;br /&gt;
8xV100 (32GB) GPU, SXM&lt;br /&gt;
|1500 GB&lt;br /&gt;
|37 GB&lt;br /&gt;
|Skylake, 2.40 GHz&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Blue Moon&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Hybrid Cluster&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Sequential and Parallel jobs w/wo GPU&lt;br /&gt;
|24&lt;br /&gt;
|32&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |192 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |6 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Skylake, 2.10 GHz&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|32 &lt;br /&gt;
2xV100(16GB) GPU, PCIe&lt;br /&gt;
|-&lt;br /&gt;
|Karle&lt;br /&gt;
|SMP&lt;br /&gt;
|Visualization, MATLAB/Mathematica&lt;br /&gt;
|1&lt;br /&gt;
|36*&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|Haswell, 2.30 GHz&lt;br /&gt;
|-&lt;br /&gt;
|Chizen&lt;br /&gt;
|Gateway&lt;br /&gt;
|2&lt;br /&gt;
| colspan=&amp;quot;5&amp;quot; rowspan=&amp;quot;2&amp;quot; |NA&lt;br /&gt;
|-&lt;br /&gt;
|MHN&lt;br /&gt;
|Login Nodes&lt;br /&gt;
|2&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Partitions and jobs==&lt;br /&gt;
The only way to submit job(s) to HPCC servers is through the SLURM batch system.  Every job, regardless of its type (interactive, batch, serial, parallel, etc.), must be submitted via SLURM, which allocates the requested resources on the proper server and starts the job(s) according to a predefined strict fair share policy. Computational resources (cpu-cores, memory, GPU) are organized in partitions. The main partition is called &#039;&#039;&#039;production&#039;&#039;&#039;. It is a routing partition which distributes jobs to several sub-partitions depending on the job’s requirements. Thus a serial job submitted to &#039;&#039;&#039;production&#039;&#039;&#039; will land in the &#039;&#039;&#039;partsequential&#039;&#039;&#039; partition.  No PBS Pro scripts should ever be used, and all existing PBS scripts must be converted to SLURM before use. The table below shows the limitations of the partitions.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Partition&lt;br /&gt;
!Max cores/job&lt;br /&gt;
!Max jobs/user&lt;br /&gt;
!Total cores/group&lt;br /&gt;
!Time limits&lt;br /&gt;
|-&lt;br /&gt;
|production&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|-&lt;br /&gt;
|partedu&lt;br /&gt;
|16&lt;br /&gt;
|2&lt;br /&gt;
|216&lt;br /&gt;
|72 Hours&lt;br /&gt;
|-&lt;br /&gt;
|partcryo&lt;br /&gt;
|40&lt;br /&gt;
|40&lt;br /&gt;
|40&lt;br /&gt;
|240 Hours&lt;br /&gt;
|-&lt;br /&gt;
|partmath&lt;br /&gt;
|128&lt;br /&gt;
|128&lt;br /&gt;
|128&lt;br /&gt;
|240 Hours&lt;br /&gt;
|-&lt;br /&gt;
|partmatlab&lt;br /&gt;
|1972&lt;br /&gt;
|50&lt;br /&gt;
|1972&lt;br /&gt;
|240 Hours&lt;br /&gt;
|-&lt;br /&gt;
|partdev&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|4 Hours&lt;br /&gt;
|}&lt;br /&gt;
o	&#039;&#039;&#039;production&#039;&#039;&#039; is the main partition, with assigned resources across all servers (except Math and Cryo). It is a routing partition, so jobs will be placed in the proper sub-partition automatically. Users may submit sequential, thread parallel or distributed parallel jobs with or without GPU.&lt;br /&gt;
 &lt;br /&gt;
o	&#039;&#039;&#039;partedu&#039;&#039;&#039; partition is for education only. Its assigned resources are on the educational server Herbert. Partedu is accessible only to students (graduate and/or undergraduate) and their professors who are registered for a class supported by HPCC. Access to this partition is limited to the duration of the class. &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;partcryo&#039;&#039;&#039; is the partition used to start jobs on the Cryo server. Users whose projects require and/or benefit from the availability of 8 GPUs interconnected via the SXM interface (not PCIe) must apply for access to this partition at hpchelp@csi.cuny.edu. &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;partmatlab&#039;&#039;&#039; partition allows users to run MATLAB&#039;s Distributed Parallel Server across the main cluster. Note, however, that Parallel Toolbox programs can be submitted via the production partition, but only as thread parallel jobs. &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;partdev&#039;&#039;&#039; is dedicated to development. All HPCC users have access to this partition, with assigned resources of one computational node with 16 cores, 64 GB of memory and 2 GPU (K20m). This partition has a time limit of 4 hours.&lt;br /&gt;
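&lt;br /&gt;
To illustrate, a minimal batch script for the &#039;&#039;&#039;partdev&#039;&#039;&#039; partition might look as follows (a sketch only; the job name and executable are placeholders, and resources should be adjusted to actual needs):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition=partdev&lt;br /&gt;
#SBATCH --job-name=mytest&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --time=01:00:00&lt;br /&gt;
&lt;br /&gt;
./my_program&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Such a script would be submitted with &#039;sbatch myscript.sh&#039;.&lt;br /&gt;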
&lt;br /&gt;
== Hours of Operation ==&lt;br /&gt;
The second and fourth Tuesday mornings of the month, from 8:00 AM to 12 PM, are normally reserved (but not always used) for scheduled maintenance.  Please plan accordingly.  &amp;lt;br/&amp;gt;&lt;br /&gt;
Unplanned maintenance to remedy system related problems may be scheduled as needed.  Reasonable attempts will be made to inform users running on those systems when these needs arise.&lt;br /&gt;
&lt;br /&gt;
== User Support ==&lt;br /&gt;
Users are encouraged to read this Wiki carefully.  In particular, the sections on compiling and running&lt;br /&gt;
parallel programs, and the section on the SLURM batch queueing system will give you the essential&lt;br /&gt;
knowledge needed to use the CUNY HPC Center systems.  We have strived to maintain the most uniform&lt;br /&gt;
user applications environment possible across the Center&#039;s systems to ease the transfer of applications&lt;br /&gt;
and run scripts among them.  Still, there are some differences, particularly with the SGI (ANDY) and Cray (SALK)&lt;br /&gt;
systems.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY&lt;br /&gt;
community in parallel programming techniques, HPC computing architecture, and the essentials of using our&lt;br /&gt;
systems. Please follow our mailings on the subject and feel free to inquire about such courses.  We regularly&lt;br /&gt;
schedule training visits and classes at the various CUNY campuses.  Please let us know if such a training visit&lt;br /&gt;
is of interest.  In the past, topics have included an overview of parallel programming, GPU programming and&lt;br /&gt;
architecture, using the evolutionary biology software at the HPC Center,  the SLURM queueing system at the&lt;br /&gt;
CUNY HPC Center, Mixed GPU-MPI and OpenMP programming, etc.  Staff has also presented guest lectures&lt;br /&gt;
at formal classes throughout the CUNY campuses.  &lt;br /&gt;
&lt;br /&gt;
Users with further questions or requiring immediate assistance in use of the systems should create a ticket using their HPC account login at:&lt;br /&gt;
&lt;br /&gt;
   [https://hpchelp.csi.cuny.edu hpchelp.csi.cuny.edu]&lt;br /&gt;
&lt;br /&gt;
If you have problems accessing your account and cannot login to the ticketing service, please send an email to:&lt;br /&gt;
&lt;br /&gt;
  [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu] &lt;br /&gt;
&lt;br /&gt;
== Warnings and modes of operation ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. hpchelp@csi.cuny.edu is for questions and account help communication &#039;&#039;&#039;only&#039;&#039;&#039; and does not accept tickets. For tickets please use the ticketing system mentioned above. This ensures that the staff member with the most appropriate skill set and job-related responsibility will respond to your questions. During the business week you should expect a response within 48 hours, quite often even the same day. During the weekend you may not get any response. &lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;&#039;E-mails to hpchelp@csi.cuny.edu must have a valid CUNY e-mail as the reply address.&#039;&#039;&#039; Messages originating from public mailers (Google, Hotmail, etc.) are filtered out.&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;&#039;Do not send questions to individual CUNY HPC Center staff members directly.&#039;&#039;&#039;  These will be returned to the sender with a polite request to submit a ticket or email the Helpline.  This applies to replies to initial questions as well.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff members are focused on providing high quality support to its user community, but compared&lt;br /&gt;
to other HPC Centers of similar size &#039;&#039;&#039;our staff is extremely lean&#039;&#039;&#039;.  Please make full use of the tools that we have provided (especially&lt;br /&gt;
the Wiki), and feel free to offer suggestions for improved service.  We hope and expect your experience in using&lt;br /&gt;
our systems will be predictably good and productive.&lt;br /&gt;
&lt;br /&gt;
== User Manual ==&lt;br /&gt;
&lt;br /&gt;
An old version of the user manual can be downloaded at: http://cunyhpc.csi.cuny.edu/publications/User_Manual.pdf.  Note that this manual provides PBS batch scripts as examples. Currently CUNY-HPCC uses SLURM, so users must check the brief SLURM manual distributed with new accounts or ask CUNY-HPCC for a copy.&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=WRF&amp;diff=120</id>
		<title>WRF</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=WRF&amp;diff=120"/>
		<updated>2022-10-27T19:32:07Z</updated>

		<summary type="html">&lt;p&gt;James: Text replacement - &amp;quot;[pP][bB][sS]&amp;quot; to &amp;quot;SLURM&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;There are two distinct WRF development trees and versions, one for production forecasting and another for research&lt;br /&gt;
and development. NCAR&#039;s experimental, advanced research version, called ARW (Advanced Research WRF) features very&lt;br /&gt;
high resolution and is being used to explore ways of improving the accuracy of hurricane tracking, hurricane intensity,&lt;br /&gt;
and rainfall forecasts, among a host of other meteorological questions.  It is ARW along with its pre- and post-&lt;br /&gt;
processing modules (WPS and WPP), and the MET and GRaDS display tools that are supported here at the CUNY HPC Center.&lt;br /&gt;
ARW is supported on both the CUNY HPC Center SGI (ANDY) and Cray (SALK). The CUNY HPC Center build includes the NCAR Command Language (NCL)&lt;br /&gt;
tools on both SALK and ANDY.&lt;br /&gt;
&lt;br /&gt;
A complete start-to-finished use of ARW requires a significant number of steps in pre-processing, parallel production modeling,&lt;br /&gt;
and post-processing and display.  There are several alternative paths that can be taken through each stage.  In particular, ARW&lt;br /&gt;
itself offers users the ability to process either real or idealized weather data.  Completing one type of simulation or the other&lt;br /&gt;
requires different steps and even different user-compiled versions of the ARW executable.  To help our users familiarize themselves&lt;br /&gt;
with running ARW at the CUNY HPC Center, the steps required to complete a start-to-finish, real-case forecast are presented below.&lt;br /&gt;
For more complete coverage, the CUNY HPC Center recommends that new users study the detailed description of the ARW package&lt;br /&gt;
and how to use it at the University Corporation for Atmospheric Research (UCAR) website&lt;br /&gt;
here [http://www.mmm.ucar.edu/wrf/OnLineTutorial/Basics/index.html]. &lt;br /&gt;
&lt;br /&gt;
==== WRF Pre-Processing with WPS ====&lt;br /&gt;
The WPS part of the WRF package is responsible for mapping time-equals-zero simulation&lt;br /&gt;
input data onto the simulation domain&#039;s terrain.  This process involves the execution of the&lt;br /&gt;
preprocessing applications geogrid.exe, ungrib.exe, and metgrid.exe.  Each of these applications&lt;br /&gt;
reads its input parameters from the &#039;namelist.wps&#039; input specifications file.  &lt;br /&gt;
&lt;br /&gt;
NOTE: In general, these steps do not take much processing time; however, in some cases they may. &lt;br /&gt;
When users discover that pre-processing steps are running longer than five minutes as interactive&lt;br /&gt;
jobs on the head node of either ANDY or SALK, they should instead be run as batch jobs.  HPC Center&lt;br /&gt;
staff may decide to kill such long-running interactive pre-processing steps if they are slowing head node&lt;br /&gt;
performance.&lt;br /&gt;
&lt;br /&gt;
In the example presented here, we will run a weather simulation based on input data provided from January&lt;br /&gt;
of 2000 for the eastern United States.  These steps should work both on ANDY and SALK with minor differences&lt;br /&gt;
as noted.  To begin this example, create a working WPS directory and copy the test case namelist file into it.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mkdir -p $HOME/wrftest/wps&lt;br /&gt;
cd $HOME/wrftest/wps&lt;br /&gt;
cp /share/apps/wrf/default/WPS/namelist.wps .&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Next, you should edit the &#039;namelist.wps&#039; to point to the sample data made available in the &lt;br /&gt;
WRF installation tree.  This involves making sure that the &#039;geog_data_path&#039; assignment in&lt;br /&gt;
the geogrid section of the namelist file points to the sample data tree. From an editor &lt;br /&gt;
make the following assignment:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
geog_data_path = &#039;/share/apps/wrf/default/WPS_DATA/geog&#039;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Once this is completed, you must symbolically link or copy the geogrid data table directory&lt;br /&gt;
to your working directory ($HOME/wrftest/wps here).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ln -sf /share/apps/wrf/default/WPS/geogrid ./geogrid&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now, you can run &#039;geogrid.exe&#039;, the geogrid executable, which defines the simulation domains and&lt;br /&gt;
interpolates the various terrestrial data sets between the model&#039;s grid lines. The global environment&lt;br /&gt;
on ANDY has been set to include the path to all the WRF-related executables including &#039;geogrid.exe&#039;.&lt;br /&gt;
On SALK, you must load the WRF module (&#039;module load wrf&#039;) first to set the environment. The geogrid&lt;br /&gt;
executable is an MPI parallel program which could be run in parallel as part of a SLURM batch script to&lt;br /&gt;
complete the combined WRF preprocessing and execution steps, but often it runs only a short while&lt;br /&gt;
and can be run interactively on ANDY&#039;s head node before submitting a full WRF batch job. &lt;br /&gt;
&lt;br /&gt;
First you will have to load the WRF module with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load wrf&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once this is done from the $HOME/wrftest/wps working directory run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
geogrid.exe &amp;gt; geogrid.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On Salk (Cray system) you will have to run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 aprun -n 1 geogrid.exe &amp;gt; geogrid.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that &#039;geogrid.exe&#039; is an MPI program and can be run in parallel.  Long running WRF pre-processing&lt;br /&gt;
jobs should be run either with more cores per node interactively as above (with -n 8, or -n 16) or as&lt;br /&gt;
complete SLURM batch jobs, so that SALK&#039;s interactive nodes are not held by long running jobs.&lt;br /&gt;
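&lt;br /&gt;
For longer pre-processing runs, a batch submission along the following lines keeps the interactive nodes free (a sketch only; the task count is illustrative and the &#039;aprun&#039; launcher applies to SALK):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=geogrid&lt;br /&gt;
#SBATCH --ntasks=8&lt;br /&gt;
&lt;br /&gt;
module load wrf&lt;br /&gt;
aprun -n 8 geogrid.exe &amp;gt; geogrid.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;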
&lt;br /&gt;
Two domain files should be produced (geo_em.d01.nc  geo_em.d02.nc) for this basic test case,&lt;br /&gt;
as well as a log and output file which indicates success at the end with:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!&lt;br /&gt;
!  Successful completion of geogrid.        !&lt;br /&gt;
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The next required preprocessing step is to run &#039;ungrib.exe&#039;, the ungrib executable. The purpose of&lt;br /&gt;
ungrib is to unpack &#039;GRIB&#039; (&#039;GRIB1&#039; and &#039;GRIB2&#039;) meteorological data and pack it into an intermediate&lt;br /&gt;
file format usable by &#039;metgrid.exe&#039; in the final preprocessing step.&lt;br /&gt;
&lt;br /&gt;
The data for the January 2000 simulation being documented here has already been downloaded&lt;br /&gt;
and placed in the WRF installation tree in /share/apps/wrf/default/WPS_DATA.  Before running &#039;ungrib.exe&#039;,&lt;br /&gt;
the WRF installation &#039;Vtable&#039; file must first be symbolically linked into the working directory with:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ln -sf /share/apps/wrf/default/WPS/ungrib/Variable_Tables/Vtable.AWIP Vtable&lt;br /&gt;
$ls&lt;br /&gt;
geo_em.d01.nc  geo_em.d02.nc  geogrid  geogrid.log  namelist.wps  Vtable&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Vtable file specifies which fields to unpack from the GRIB files. The Vtables list the fields and their&lt;br /&gt;
GRIB codes that must be unpacked. For this test case the required Vtable file has already been defined,&lt;br /&gt;
but users may have to construct a custom Vtable file for their data.  &lt;br /&gt;
&lt;br /&gt;
Next, the GRIB files themselves must also be symbolically linked into the working directory.  WRF&lt;br /&gt;
provides a script to do this.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$link_grib.csh /share/apps/wrf/default/WPS_DATA/JAN00/2000012&lt;br /&gt;
$ls&lt;br /&gt;
geo_em.d01.nc  geogrid      GRIBFILE.AAA  GRIBFILE.AAC  GRIBFILE.AAE  GRIBFILE.AAG  GRIBFILE.AAI  GRIBFILE.AAK  GRIBFILE.AAM  namelist.wps&lt;br /&gt;
geo_em.d02.nc  geogrid.log  GRIBFILE.AAB  GRIBFILE.AAD  GRIBFILE.AAF  GRIBFILE.AAH  GRIBFILE.AAJ  GRIBFILE.AAL  GRIBFILE.AAN  Vtable&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note &#039;ls&#039; shows that the &#039;GRIB&#039; files are now present.  &lt;br /&gt;
&lt;br /&gt;
Next, more edits to the &#039;namelist.wps&#039; file are required--one to set the start and end dates&lt;br /&gt;
for the simulation to our January 2000 time frame, and the second to set the number of&lt;br /&gt;
simulation seconds to complete (21600 / 3600 = 6.0 hours in this case). Edit the &#039;namelist.wps&#039;&lt;br /&gt;
file by setting the following in the shared section of the file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 start_date = &#039;2000-01-24_12:00:00&#039;,&#039;2000-01-24_12:00:00&#039;,&lt;br /&gt;
 end_date   = &#039;2000-01-25_12:00:00&#039;,&#039;2000-01-25_12:00:00&#039;,&lt;br /&gt;
interval_seconds = 21600&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you can run &#039;ungrib.exe&#039; to create the intermediate files required by &#039;metgrid.exe&#039;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ungrib.exe &amp;gt; ungrib.out&lt;br /&gt;
$ls&lt;br /&gt;
FILE:2000-01-24_12  FILE:2000-01-25_06  geo_em.d02.nc  GRIBFILE.AAA  GRIBFILE.AAD  GRIBFILE.AAG  GRIBFILE.AAJ  GRIBFILE.AAM  ungrib.log&lt;br /&gt;
FILE:2000-01-24_18  FILE:2000-01-25_12  geogrid        GRIBFILE.AAB  GRIBFILE.AAE  GRIBFILE.AAH  GRIBFILE.AAK  GRIBFILE.AAN  ungrib.out&lt;br /&gt;
FILE:2000-01-25_00  geo_em.d01.nc       geogrid.log    GRIBFILE.AAC  GRIBFILE.AAF  GRIBFILE.AAI  GRIBFILE.AAL  namelist.wps  Vtable&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that &#039;ungrib.exe&#039;, unlike the other pre-processing tools mentioned here, is NOT an MPI&lt;br /&gt;
parallel program and for larger WRF jobs can run for a fairly long time.  Long running &#039;ungrib.exe&#039; pre-processing jobs should be run as complete SLURM batch jobs, so that SALK&#039;s interactive nodes are not held for hours at a time.&lt;br /&gt;
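&lt;br /&gt;
Because &#039;ungrib.exe&#039; is serial, a single-task batch job suffices; a sketch (adjust the time limit to the data volume):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=ungrib&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --time=04:00:00&lt;br /&gt;
&lt;br /&gt;
module load wrf&lt;br /&gt;
ungrib.exe &amp;gt; ungrib.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;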
&lt;br /&gt;
After a successful &#039;ungrib.exe&#039; run you should get the familiar message at the end of the output file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!&lt;br /&gt;
!  Successful completion of ungrib.         !&lt;br /&gt;
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Like geogrid, the metgrid executable, &#039;metgrid.exe&#039; needs to be able to find its table&lt;br /&gt;
directory in the preprocessing working directory.   The metgrid table directory may&lt;br /&gt;
either be copied or symbolically linked into the working directory location.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ln -sf /share/apps/wrf/default/WPS/metgrid ./metgrid&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Finally, all the files required for a successful run of &#039;metgrid.exe&#039; have been provided.  Like &#039;geogrid.exe&#039;, &#039;metgrid.exe&#039;&lt;br /&gt;
is an MPI parallel program that could be run in SLURM batch mode, but often runs for only a short time and can be run on&lt;br /&gt;
ANDY&#039;s head node, as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$metgrid.exe &amp;gt; metgrid.out &lt;br /&gt;
$ls&lt;br /&gt;
FILE:2000-01-24_12  geogrid       GRIBFILE.AAF  GRIBFILE.AAM                       met_em.d02.2000-01-24_12:00:00.nc  metgrid.out&lt;br /&gt;
FILE:2000-01-24_18  geogrid.log   GRIBFILE.AAG  GRIBFILE.AAN                       met_em.d02.2000-01-24_18:00:00.nc  namelist.wps&lt;br /&gt;
FILE:2000-01-25_00  GRIBFILE.AAA  GRIBFILE.AAH  met_em.d01.2000-01-24_12:00:00.nc  met_em.d02.2000-01-25_00:00:00.nc  ungrib.log&lt;br /&gt;
FILE:2000-01-25_06  GRIBFILE.AAB  GRIBFILE.AAI  met_em.d01.2000-01-24_18:00:00.nc  met_em.d02.2000-01-25_06:00:00.nc  ungrib.out&lt;br /&gt;
FILE:2000-01-25_12  GRIBFILE.AAC  GRIBFILE.AAJ  met_em.d01.2000-01-25_00:00:00.nc  met_em.d02.2000-01-25_12:00:00.nc  Vtable&lt;br /&gt;
geo_em.d01.nc       GRIBFILE.AAD  GRIBFILE.AAK  met_em.d01.2000-01-25_06:00:00.nc  metgrid&lt;br /&gt;
geo_em.d02.nc       GRIBFILE.AAE  GRIBFILE.AAL  met_em.d01.2000-01-25_12:00:00.nc  metgrid.log&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you are on SALK (Cray XE6), you will have to run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 aprun -n 1 metgrid.exe &amp;gt; metgrid.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that &#039;metgrid.exe&#039; is an MPI program and can be run in parallel.  Long running WRF pre-processing&lt;br /&gt;
jobs should be run either with more cores per node interactively as above (with -n 8, or -n 16) or as&lt;br /&gt;
complete SLURM batch jobs, so that SALK&#039;s interactive nodes are not held by long running jobs.&lt;br /&gt;
&lt;br /&gt;
Successful runs will produce an output file that includes:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!&lt;br /&gt;
!  Successful completion of metgrid.  !&lt;br /&gt;
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that the met files required by WRF are now present (see the &#039;ls&#039; output above).  At this point,&lt;br /&gt;
the preprocessing phase of this WRF sample run is complete.  We can move on to actually running&lt;br /&gt;
this real (not ideal) WRF test case using the SLURM batch scheduler in MPI parallel mode.&lt;br /&gt;
&lt;br /&gt;
==== Running a WRF Real Case in Parallel Using SLURM ====&lt;br /&gt;
Our frame of reference now turns to running &#039;real.exe&#039; and &#039;wrf.exe&#039; in parallel on ANDY&lt;br /&gt;
or SALK via SLURM.  As you perhaps noticed in walking through the preprocessing steps&lt;br /&gt;
above, the preprocessing files are all installed in their own subdirectory (WPS) under&lt;br /&gt;
the WRF installation tree root (/share/apps/wrf/default).  The same is true for the&lt;br /&gt;
files to run WRF.  They reside under the WRF install root in the &#039;WRFV3&#039; subdirectory.&lt;br /&gt;
&lt;br /&gt;
Within this &#039;WRFV3&#039; directory, the &#039;run&#039; subdirectory contains all the common files needed&lt;br /&gt;
for a &#039;wrf.exe&#039; run except the &#039;met&#039; files that were just created in the preprocessing&lt;br /&gt;
section above and those that are produced by &#039;real.exe&#039; which is run before &#039;wrf.exe&#039;&lt;br /&gt;
in real-data weather forecasts.&lt;br /&gt;
&lt;br /&gt;
Note that the ARW version of WRF allows one to produce a number of different&lt;br /&gt;
executables depending on the type of run that is needed.  Here, we are relying&lt;br /&gt;
on the fact that the &#039;em_real&#039; version of the code has already been built.  Currently,&lt;br /&gt;
the CUNY HPC Center has only compiled this version of WRF.  Other versions can be&lt;br /&gt;
compiled upon request.  The subdirectory &#039;test&#039; underneath the &#039;WRFV3&#039; directory contains&lt;br /&gt;
additional subdirectories for each type of WRF build (em_real, em_fire, em_hill2d_x, etc.).  &lt;br /&gt;
&lt;br /&gt;
To complete an MPI parallel run of this WRF real data case, a &#039;wrfv3/run&#039; working&lt;br /&gt;
directory for your run should be created, and it must be filled with the required &lt;br /&gt;
files from the installation root&#039;s &#039;run&#039; directory, as follows:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$cd $HOME/wrftest&lt;br /&gt;
$mkdir -p wrfv3/run&lt;br /&gt;
$cd wrfv3/run&lt;br /&gt;
$cp /share/apps/wrf/default/WRFV3/run/* .&lt;br /&gt;
$rm *.exe&lt;br /&gt;
$&lt;br /&gt;
$ls&lt;br /&gt;
CAM_ABS_DATA       ETAMPNEW_DATA.expanded_rain      LANDUSE.TBL            ozone_lat.formatted   RRTM_DATA_DBL      SOILPARM.TBL  URBPARM_UZE.TBL&lt;br /&gt;
CAM_AEROPT_DATA    ETAMPNEW_DATA.expanded_rain_DBL  MPTABLE.TBL            ozone_plev.formatted  RRTMG_LW_DATA      tr49t67       VEGPARM.TBL&lt;br /&gt;
co2_trans          GENPARM.TBL                      namelist.input         README.namelist       RRTMG_LW_DATA_DBL  tr49t85&lt;br /&gt;
ETAMPNEW_DATA      grib2map.tbl                     namelist.input.backup  README.tslist         RRTMG_SW_DATA      tr67t85&lt;br /&gt;
ETAMPNEW_DATA_DBL  gribmap.txt                      ozone.formatted        RRTM_DATA             RRTMG_SW_DATA_DBL  URBPARM.TBL&lt;br /&gt;
$&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that the &#039;*.exe&#039; files were removed in the sequence above after the copy because&lt;br /&gt;
they are already pointed to by ANDY&#039;s and SALK&#039;s system PATH variable.&lt;br /&gt;
&lt;br /&gt;
Next, the &#039;met&#039; files produced during the preprocessing phase above need to be copied&lt;br /&gt;
or symbolically linked into the &#039;wrfv3/run&#039; directory.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$&lt;br /&gt;
$pwd&lt;br /&gt;
/home/guest/wrftest/wrfv3/run&lt;br /&gt;
$&lt;br /&gt;
$cp ../../wps/met_em* .&lt;br /&gt;
$ls&lt;br /&gt;
CAM_ABS_DATA                     grib2map.tbl                       namelist.input         RRTM_DATA_DBL      tr67t85&lt;br /&gt;
CAM_AEROPT_DATA                  gribmap.txt                        namelist.input.backup  RRTMG_LW_DATA      URBPARM.TBL&lt;br /&gt;
co2_trans                        LANDUSE.TBL                        ozone.formatted        RRTMG_LW_DATA_DBL  URBPARM_UZE.TBL&lt;br /&gt;
ETAMPNEW_DATA                    met_em.d01.2000-01-24_12:00:00.nc  ozone_lat.formatted    RRTMG_SW_DATA      VEGPARM.TBL&lt;br /&gt;
ETAMPNEW_DATA_DBL                met_em.d01.2000-01-25_12:00:00.nc  ozone_plev.formatted   RRTMG_SW_DATA_DBL&lt;br /&gt;
ETAMPNEW_DATA.expanded_rain      met_em.d02.2000-01-24_12:00:00.nc  README.namelist        SOILPARM.TBL&lt;br /&gt;
ETAMPNEW_DATA.expanded_rain_DBL  met_em.d02.2000-01-25_12:00:00.nc  README.tslist          tr49t67&lt;br /&gt;
GENPARM.TBL                      MPTABLE.TBL                        RRTM_DATA              tr49t85&lt;br /&gt;
$&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The user may need to edit the WRF &#039;namelist.input&#039; file listed above to craft the&lt;br /&gt;
exact job they wish to run.  The default namelist file copied into our working directory&lt;br /&gt;
is in large part what is needed for this test run, but we will reduce the total simulation time&lt;br /&gt;
(for the weather model, not the job) from 12 hours to 1 hour by setting the &#039;run_hours&#039;&lt;br /&gt;
variable to 1.&lt;br /&gt;
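&lt;br /&gt;
For example, the relevant entries in the &amp;amp;time_control section of &#039;namelist.input&#039; would then read (standard ARW namelist variable names; other entries are left at their defaults):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;amp;time_control&lt;br /&gt;
 run_days    = 0,&lt;br /&gt;
 run_hours   = 1,&lt;br /&gt;
 run_minutes = 0,&lt;br /&gt;
 run_seconds = 0,&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;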
&lt;br /&gt;
At this point we are ready to submit a SLURM job.  The SLURM batch script below first runs &#039;real.exe&#039; &lt;br /&gt;
which creates the WRF input files &#039;wrfbdy_d01&#039; and  &#039;wrfinput_d01&#039;, and then runs &#039;wrf.exe&#039; itself.&lt;br /&gt;
Both executables are MPI parallel programs, and here they are both run on 16 processors.  Here is&lt;br /&gt;
the &#039;wrftest.job&#039; SLURM script that will run on ANDY:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production_gdr&lt;br /&gt;
#SBATCH --job-name wrf_realem &lt;br /&gt;
#SBATCH --ntasks=16&lt;br /&gt;
#SBATCH --mem-per-cpu=2880&lt;br /&gt;
&lt;br /&gt;
# Find out name of master execution host (compute node)&lt;br /&gt;
echo -n &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; SLURM Master compute node is: &amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
hostname&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Find out the contents of the SLURM node list which names the nodes&lt;br /&gt;
# allocated by SLURM&lt;br /&gt;
echo -n &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; SLURM Node list contains: &amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
echo $SLURM_JOB_NODELIST&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to the working directory in SLURM&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
# Just point to the pre-processing executable to run&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; Running REAL.exe executable ...&amp;quot;&lt;br /&gt;
mpirun -np 16 /share/apps/wrf/default/WRFV3/run/real.exe&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; Running WRF.exe executable ...&amp;quot;&lt;br /&gt;
mpirun -np 16 /share/apps/wrf/default/WRFV3/run/wrf.exe&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; Finished WRF test run ...&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The full path to each executable is used for illustrative purposes, but both binaries (real.exe&lt;br /&gt;
and wrf.exe) are in the WRF install tree run directory and would be picked up from the system &lt;br /&gt;
PATH environment variable without the full path.  This job requests 16 MPI tasks,&lt;br /&gt;
each with 2880 MBytes of memory.  This job asks to be run on the QDR&lt;br /&gt;
InfiniBand (faster interconnect) side of the ANDY system.   Details on the use and meaning&lt;br /&gt;
of the SLURM option section of the job are available elsewhere in the CUNY HPC Wiki.&lt;br /&gt;
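Whether the short name would resolve through PATH can be checked from an interactive shell with 'command -v'; a small sketch (on a machine without WRF installed it simply reports the miss):

```shell
# Report where real.exe would be found via PATH, if anywhere
if command -v real.exe >/dev/null; then
    echo "real.exe found at: $(command -v real.exe)"
else
    echo "real.exe not on PATH; use the full path in the job script"
fi
```
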
&lt;br /&gt;
To submit the job type:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sbatch wrftest.job&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A slightly different version of the script is required to run the same job on SALK&lt;br /&gt;
(the Cray):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name wrf_realem &lt;br /&gt;
#SBATCH --ntasks=16&lt;br /&gt;
#SBATCH --output=wrf_test16_O1.out&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Find out name of master execution host (compute node)&lt;br /&gt;
echo -n &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; SLURM Master compute node is: &amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
hostname&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Find out the contents of the SLURM node list which names the nodes&lt;br /&gt;
# allocated by SLURM&lt;br /&gt;
echo -n &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; SLURM Node list contains: &amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
echo $SLURM_JOB_NODELIST&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to the working directory in SLURM&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
# Tune some MPICH parameters on the Cray&lt;br /&gt;
export MALLOC_MMAP_MAX=0&lt;br /&gt;
export MALLOC_TRIM_THRESHOLD=536870912&lt;br /&gt;
export MPICH_RANK_ORDER=3&lt;br /&gt;
&lt;br /&gt;
# Just point to the pre-processing executable to run&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; Running REAL.exe executable ...&amp;quot;&lt;br /&gt;
aprun -n 16  /share/apps/wrf/default/WRFV3/run/real.exe&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; Running WRF.exe executable ...&amp;quot;&lt;br /&gt;
aprun -n 16  /share/apps/wrf/default/WRFV3/run/wrf.exe&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; Finished WRF test run ...&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A successful run on either ANDY or SALK will produce an &#039;rsl.out&#039; and &#039;rsl.error&#039; file for&lt;br /&gt;
each processor on which the job ran, so for this test case there will be 16 of each such file.&lt;br /&gt;
The &#039;rsl.out&#039; files record the run settings requested in the namelist file and then time-stamp&lt;br /&gt;
the progress the job is making until the total simulation time is completed.  The tail&lt;br /&gt;
end of an &#039;rsl.out&#039; file from a successful run should look like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
:&lt;br /&gt;
:&lt;br /&gt;
Timing for main: time 2000-01-24_12:45:00 on domain   1:    0.06060 elapsed seconds.&lt;br /&gt;
Timing for main: time 2000-01-24_12:48:00 on domain   1:    0.06300 elapsed seconds.&lt;br /&gt;
Timing for main: time 2000-01-24_12:51:00 on domain   1:    0.06090 elapsed seconds.&lt;br /&gt;
Timing for main: time 2000-01-24_12:54:00 on domain   1:    0.06340 elapsed seconds.&lt;br /&gt;
Timing for main: time 2000-01-24_12:57:00 on domain   1:    0.06120 elapsed seconds.&lt;br /&gt;
Timing for main: time 2000-01-24_13:00:00 on domain   1:    0.06330 elapsed seconds.&lt;br /&gt;
 d01 2000-01-24_13:00:00 wrf: SUCCESS COMPLETE WRF&lt;br /&gt;
taskid: 0 hostname: gpute-2&lt;br /&gt;
taskid: 0 hostname: gpute-2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
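A quick way to confirm that every rank finished is to count the 'rsl.out' files containing the success message; the sketch below fabricates two sample files standing in for a 2-task run (a real 16-task run would have 16):

```shell
# Fabricate two sample rsl.out files as stand-ins for a 2-task run (illustrative only)
printf 'Timing for main: time 2000-01-24_13:00:00 on domain   1:    0.06330 elapsed seconds.\nd01 2000-01-24_13:00:00 wrf: SUCCESS COMPLETE WRF\n' > rsl.out.0000
printf 'Timing for main: time 2000-01-24_13:00:00 on domain   1:    0.06330 elapsed seconds.\nd01 2000-01-24_13:00:00 wrf: SUCCESS COMPLETE WRF\n' > rsl.out.0001
# Count the ranks reporting success; it should equal the number of MPI tasks
grep -l 'SUCCESS COMPLETE WRF' rsl.out.* | wc -l
```

If the count comes back smaller than the task count, inspect the corresponding 'rsl.error' files.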
&lt;br /&gt;
==== Post-Processing and Displaying WRF Results ====&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=LSDYNA&amp;diff=118</id>
		<title>LSDYNA</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=LSDYNA&amp;diff=118"/>
		<updated>2022-10-27T19:31:36Z</updated>

		<summary type="html">&lt;p&gt;James: Text replacement - &amp;quot;[pP][bB][sS]&amp;quot; to &amp;quot;SLURM&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Both 32-bit and 64-bit executables in both serial and parallel versions are provided.  The MPI&lt;br /&gt;
parallel versions use OpenMPI as their MPI parallel library, the HPC Center&#039;s default version of&lt;br /&gt;
MPI.  The serial executable can also be run in OpenMP (not to be confused with OpenMPI) node-&lt;br /&gt;
local SMP-parallel mode.  The names of the executable files in the &#039;/share/apps/lsdyna/default/bin&#039;&lt;br /&gt;
directory are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ls-dyna_32.exe  ls-dyna_64.exe  ls-dyna_mpp32.exe  ls-dyna_mpp64.exe&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Those with the string &#039;mpp&#039; in the name are the MPI distributed parallel versions of the code. The&lt;br /&gt;
integer (32 or 64) designates the precision of the build.  In the examples below, depending on the&lt;br /&gt;
type of script being submitted (serial or parallel, 32- or 64-bit), a different executable will be &lt;br /&gt;
chosen.   The scaling properties of LS-DYNA in parallel mode are limited, and users should not&lt;br /&gt;
carelessly submit parallel jobs requesting large numbers of cores without understanding how&lt;br /&gt;
their job will scale.  A large 128 core job that runs only 5% faster than a 64 core job is a waste&lt;br /&gt;
of resources.  Please examine the scaling properties of your particular job before scaling up.&lt;br /&gt;
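One way to make that judgment concrete is to compute the speedup and parallel efficiency from two trial runs; a minimal sketch with made-up elapsed times chosen to match the 5% figure above:

```shell
# Made-up elapsed times (seconds) for the same job at two core counts
t64=1000     # 64-core trial run
t128=952     # 128-core trial run, only about 5% faster
# Speedup from doubling the cores, and the efficiency of that doubling
awk -v a="$t64" -v b="$t128" \
    'BEGIN { s = a / b; printf "speedup=%.2f efficiency=%.0f%%\n", s, 100 * s / 2 }'
# prints: speedup=1.05 efficiency=53%
```

An efficiency near 50% when doubling the core count, as here, means the additional 64 cores are largely wasted.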
&lt;br /&gt;
As is the case with most long running applications run at the CUNY HPC Center, whether parallel&lt;br /&gt;
or serial, LS-DYNA jobs are run using a SLURM batch job submission script.  Here we provide some&lt;br /&gt;
example scripts for both serial and parallel execution.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note that before using this script you will need to set up the environment for LS-DYNA. On Andy &lt;br /&gt;
&amp;quot;[[Main_Page#Modules.2C_Managing_Your_CUNY_HPC_Center_Environment|modules]]&amp;quot; is used to manage environments. Setting up LS-DYNA is done with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load ls-dyna&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
First, here is an example serial execution script&lt;br /&gt;
(called, say, &#039;airbag.job&#039;) run at 64 bits using the LS-DYNA &#039;airbag&#039; example (&#039;airbag.deploy.k&#039;) from&lt;br /&gt;
the examples directory above as the input.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production_qdr&lt;br /&gt;
#SBATCH --job-name ls-dyna_serial&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --mem=2880&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Find out name of master execution host (compute node)&lt;br /&gt;
echo -n &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; SLURM Master compute node is: &amp;quot;&lt;br /&gt;
hostname&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to the working directory in SLURM&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
# Point to the execution directory to run&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; Begin LS-DYNA Serial Run ...&amp;quot;&lt;br /&gt;
ls-dyna_64.exe i=airbag.deploy.k memory=2000m&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; End   LS-DYNA Serial Run ...&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Details on the SLURM options at the head of this script file are discussed below,&lt;br /&gt;
but in summary &#039;--partition production_qdr&#039; selects the partition in which the job will&lt;br /&gt;
be run, &#039;--job-name ls-dyna_serial&#039; sets this job&#039;s name, and &#039;nodes=1 ntasks=1 mem=2880&#039;&lt;br /&gt;
requests 1 node with 1 task and 2880 MBytes of memory, allowing SLURM to place the&lt;br /&gt;
resources needed for the job anywhere on the ANDY system.&lt;br /&gt;
&lt;br /&gt;
The LS-DYNA command line sets the input file to be used and the amount of in-core&lt;br /&gt;
memory that is available to the job.  Note that this executable does NOT include the&lt;br /&gt;
string &#039;mpp&#039; which means that it is not the MPI executable.  Users can copy the &#039;airbag.deploy.k&#039;&lt;br /&gt;
file from the examples directory and cut-and-paste this script to run this job.  It &lt;br /&gt;
takes a relatively short time to run.  The SLURM command for submitting the job would be:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sbatch airbag.job&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is a SLURM script that runs a 16 processor (core) MPI job.  This script is&lt;br /&gt;
set to run the TopCrunch &#039;3cars&#039; benchmark which is relatively long-running using&lt;br /&gt;
MPI on 16 processors.  There are a few important differences in this script.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production_qdr&lt;br /&gt;
#SBATCH --job-name ls-dyna_mpi&lt;br /&gt;
#SBATCH --ntasks=16&lt;br /&gt;
#SBATCH --mem=2880&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Find out name of master execution host (compute node)&lt;br /&gt;
echo -n &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; SLURM Master compute node is: &amp;quot;&lt;br /&gt;
hostname&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to the working directory in SLURM&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
# Point to the execution directory to run&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; Begin LS-DYNA MPI Parallel Run ...&amp;quot;&lt;br /&gt;
mpirun -np 16 ls-dyna_mpp64.exe i=3cars_shell2_150ms.k ncpu=16 memory=2000m&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; End   LS-DYNA MPI Parallel Run ...&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Focusing on the differences in this script relative to the serial SLURM script above: first, the&lt;br /&gt;
&#039;--ntasks=16&#039; line requests not 1 task but 16, along with 2880 MBytes of memory.  This&lt;br /&gt;
provides the necessary resources to run our 16 processor MPI-parallel job.   Next, the LS-DYNA command&lt;br /&gt;
line is different.  The LS-DYNA MPI-parallel executable is used (ls-dyna_mpp64.exe), and it is run with &lt;br /&gt;
the help of the OpenMPI job submission command &#039;mpirun&#039;, which sets the number of processors&lt;br /&gt;
and the location of those processors on the system.  The actual LS-DYNA key words also add the&lt;br /&gt;
string &#039;ncpu=16&#039; to instruct LS-DYNA that this is to be a parallel run.&lt;br /&gt;
&lt;br /&gt;
Running in parallel on 16 cores in 64-bit mode on ANDY, the &#039;3cars&#039; case takes about 9181 seconds&lt;br /&gt;
of elapsed time to complete.   If the user would like to run this job, they can grab the input files out&lt;br /&gt;
of the directory &#039;/share/apps/lsdyna/default/examples/3cars&#039; on ANDY and use the above script.&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
</feed>