<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.csi.cuny.edu/cunyhpc/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Alex</id>
	<title>HPCC Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.csi.cuny.edu/cunyhpc/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Alex"/>
	<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php/Special:Contributions/Alex"/>
	<updated>2026-04-24T05:39:37Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.38.4</generator>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=1002</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=1002"/>
		<updated>2026-04-19T16:36:27Z</updated>

		<summary type="html">&lt;p&gt;Alex: /* Condo */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
[[Image:hpcc-panorama3.png]]&lt;br /&gt;
&lt;br /&gt;
== Mission of CUNY-HPCC ==&lt;br /&gt;
The City University of New York (CUNY) High Performance Computing Center (HPCC) serves as a pivotal research and educational hub for the university. The center is situated on the campus of the College of Staten Island at 2800 Victory Boulevard, Staten Island, New York 10314. Its primary objective is to enhance educational opportunities and foster scientific research and discovery within the university. This is achieved through the management of state-of-the-art computing infrastructure and the provision of comprehensive research support services. Notably, CUNY-HPCC offers domain-specific expertise in various aspects of computationally intensive research. Furthermore, CUNY’s membership in the Empire AI (EAI) consortium positions CUNY-HPCC as a stepping stone for CUNY researchers seeking access to EAI advanced facilities.&lt;br /&gt;
&lt;br /&gt;
== Empire AI and CUNY-HPCC ==&lt;br /&gt;
The Empire AI Consortium comprises the &#039;&#039;&#039;CUNY Graduate Center, Columbia University, Cornell University, Icahn School of Medicine, New York University, Rochester Institute of Technology, Rensselaer Polytechnic Institute, the State University of New York, University at Buffalo, and University of Rochester.&#039;&#039;&#039; CUNY-HPCC provides support and maintains tickets for all CUNY users with allocation on EAI. Additionally, CUNY-HPCC serves as a stepping stone for CUNY researchers as it operates (on a smaller scale) architectures (including nodes with Hopper) similar to EAI, including extended “Alpha” servers and new “Beta” computers. The latter will consist of 288 B200 GPUs and recently added RTX 6000 Pro nodes. The anticipated cost for EAI is $0.50 per unit (SU), which will provide CUNY PIs with a rate that is significantly lower than a typical AWS rate. One SU corresponds to one hour of H100 compute, and one hour of B200 compute corresponds to two SU. In comparison, the CUNY-HPCC recovery costs for public servers are $0.015 per CPU hour (1 unit) and $0.09 per GPU hour (6 units). For further details, please refer to the section on HPCC access plans.     &lt;br /&gt;
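For example (illustrative arithmetic based on the rates above), 100 hours of H100 compute corresponds to 100 SU, or about $50 at the anticipated $0.50 per SU, while 100 hours of B200 compute corresponds to 200 SU, or about $100.&lt;br /&gt;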
&lt;br /&gt;
== CUNY-HPCC services ==&lt;br /&gt;
     &lt;br /&gt;
CUNY-HPCC offers a professionally maintained, modern computational environment and architectures, along with advanced storage and fast interconnects. CUNY-HPCC serves the following purposes:     &lt;br /&gt;
&lt;br /&gt;
* Supports research computing at CUNY, benefiting faculty, their collaborators at other universities, and their public and private sector partners. It also supports CUNY students and research staff.&lt;br /&gt;
* Provides state-of-the-art computing resources and comprehensive research support services, including expertise and full support for users with allocation on EMPIRE-AI.&lt;br /&gt;
* Creates opportunities for the CUNY research community to establish new partnerships with the government and private sectors.&lt;br /&gt;
* Utilizes HPCC capabilities to acquire additional research resources for its faculty and graduate students in existing and major new programs.&lt;br /&gt;
* Maintains tickets for all CUNY users with allocation on EAI.&lt;br /&gt;
&lt;br /&gt;
== Organization of systems and data storage (architecture) ==&lt;br /&gt;
All user and project data are kept on the Parallel File System Storage (PFSS). It holds both the user home directories, which are mounted only on the login node(s) of all servers, and a dedicated partition called &#039;&#039;&#039;/scratch&#039;&#039;&#039; (see below). The main features of the &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition are: &#039;&#039;&#039;1.&#039;&#039;&#039; it is mounted on all computational nodes and on all login nodes; &#039;&#039;&#039;2.&#039;&#039;&#039; it is fast; &#039;&#039;&#039;3.&#039;&#039;&#039; it is temporary space and is &#039;&#039;&#039;not a home directory&#039;&#039;&#039; for accounts, nor can it be used for long-term data preservation. Users must use the &amp;quot;staging&amp;quot; procedure described below to ensure preservation of their data, codes, and parameter files. The figure below is a schematic of the environment.&lt;br /&gt;
&lt;br /&gt;
Upon registering with HPCC, every user receives two directories:&lt;br /&gt;
&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – temporary workspace on the HPC systems. Currently scratch resides on the same file system as /global/u.&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – space for the “home directory”, i.e., storage space on the DSMS for programs, scripts, and data.&lt;br /&gt;
&lt;br /&gt;
:•	In some instances a user will also have use of disk space on the DSMS in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;projectid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; (iRODS).&lt;br /&gt;
[[File:HPCC_structure.png|center|frameless|900x900px]]&lt;br /&gt;
The &#039;&#039;&#039;/global/u/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory has a quota (see below for details), while &#039;&#039;&#039;/scratch/&amp;lt;userid&amp;gt;&#039;&#039;&#039; does not. However, the &#039;&#039;&#039;/scratch&#039;&#039;&#039; space is cleaned up following the rules described below. There are no guarantees of any kind that files in &#039;&#039;&#039;/scratch&#039;&#039;&#039; will be preserved through hardware crashes or cleanup. Access to all HPCC resources is provided through a bastion host called &#039;&#039;&#039;Chizen&#039;&#039;&#039;. The Data Transfer Node, called &#039;&#039;&#039;Cea&#039;&#039;&#039;, allows file transfer from/to remote sites directly to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; or to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;.&lt;br /&gt;
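&lt;br /&gt;
As an illustration only (paths, the directory name &amp;quot;project&amp;quot;, and the executable are placeholders; the authoritative steps are the staging procedure referenced above), a typical job stages its inputs from the home directory to &#039;&#039;&#039;/scratch&#039;&#039;&#039;, runs there, and copies the results back:&lt;br /&gt;
&lt;br /&gt;
 # hedged sketch of the staging pattern; &amp;lt;userid&amp;gt; and all file names are hypothetical&lt;br /&gt;
 mkdir -p /scratch/&amp;lt;userid&amp;gt;/project&lt;br /&gt;
 cp /global/u/&amp;lt;userid&amp;gt;/project/input.dat /scratch/&amp;lt;userid&amp;gt;/project/&lt;br /&gt;
 cd /scratch/&amp;lt;userid&amp;gt;/project&lt;br /&gt;
 ./my_program input.dat &amp;gt; output.log&lt;br /&gt;
 cp output.log /global/u/&amp;lt;userid&amp;gt;/project/   # stage out: /scratch is cleaned up periodically&lt;br /&gt;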
&lt;br /&gt;
==Computing architectures at HPCC==&lt;br /&gt;
&lt;br /&gt;
The HPC Center employs a diverse range of architectures to accommodate intricate and demanding workflows. All computational resources of various types are consolidated into a single hybrid cluster known as Arrow. This cluster comprises symmetric multiprocessor (SMP) nodes with and without GPUs, distributed shared memory (NUMA) node(s), high-memory nodes, and advanced SMP nodes featuring multiple GPUs. The number of GPUs per node varies between two and eight, as do the GPU interface and GPU family. For example, the basic GPU nodes are equipped with two Tesla K20m GPUs connected via the PCIe interface, while the most advanced nodes support eight Ampere A100 GPUs connected via the SXM interface.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Overview of Computational architectures&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SMP&#039;&#039;&#039; servers have several processors (working under a single operating system) which &amp;quot;share everything&amp;quot;: all CPU cores access a common memory block via a shared bus or data path. SMP servers support any combination of memory vs. CPU (up to the limits of the particular computer). SMP servers are commonly used to run sequential or thread-parallel (e.g. OpenMP) jobs, and they may or may not have GPUs.&lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;cluster&#039;&#039;&#039; is a single system comprising a set of servers interconnected with a high-performance network. Specific software coordinates programs on and across those servers in order to perform computationally intensive tasks. The most common cluster type consists of several identical SMP servers connected via a fast interconnect. Each SMP member of the cluster is called a &#039;&#039;&#039;node&#039;&#039;&#039;. All nodes run independent copies of the same operating system (OS). Some or all of the nodes may incorporate GPUs.&lt;br /&gt;
&lt;br /&gt;
Hybrid clusters combine nodes of different architectures. For instance, the main CUNY-HPCC machine is a hybrid cluster called &#039;&#039;&#039;Arrow&#039;&#039;&#039;. Sixty-two (62) of its nodes are identical GPU-enabled SMP servers, each with 2 x K20m GPUs; 3 are SMP nodes with extended memory (fat nodes); one node is a distributed shared memory node (NUMA, see below); and 2 are fat SMP servers especially designed to support 8 NVIDIA GPUs per node, connected via the SXM interface. In addition, HPCC operates the cluster &#039;&#039;&#039;Herbert&#039;&#039;&#039;, dedicated only to education.&lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;distributed shared memory&#039;&#039;&#039; computer is a tightly coupled server in which the memory is physically distributed but logically unified as a single block. The system resembles an SMP, but the possible number of CPU cores and amount of memory are far beyond the limitations of SMP. Because the memory is distributed, access times across the address space are non-uniform; thus this architecture is called Non-Uniform Memory Access (NUMA). Similarly to SMP, &#039;&#039;&#039;NUMA&#039;&#039;&#039; systems are typically used for applications such as data mining and decision support systems, in which processing can be parceled out to a number of processors that collectively work on common data. HPCC operates the &#039;&#039;&#039;NUMA&#039;&#039;&#039; node of Arrow, named &#039;&#039;&#039;Appel&#039;&#039;&#039;. This node does not have GPUs.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039; Infrastructure systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
o	The Master Head Node (&#039;&#039;&#039;MHN/Arrow&#039;&#039;&#039;) is a redundant login node from which all jobs on all servers start. This server is not directly accessible from outside the CSI campus. Note that the main server and its login nodes share the same name, Arrow; thus users can access the Arrow login nodes using either the name Arrow or MHN.&lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Chizen&#039;&#039;&#039; is a redundant gateway server which provides access to protected HPCC domain.&lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Cea&#039;&#039;&#039; is a file transfer node allowing transfer of files between users’ computers and the /scratch space or /global/u/&amp;lt;userid&amp;gt;. &#039;&#039;&#039;Cea&#039;&#039;&#039; is accessible directly (not only via &#039;&#039;&#039;Chizen&#039;&#039;&#039;), but allows only a limited set of shell commands.&lt;br /&gt;
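&lt;br /&gt;
For illustration only (the actual Cea hostname is not listed here; &amp;lt;cea-hostname&amp;gt;, &amp;lt;userid&amp;gt;, and the file names are placeholders), transfers from a user’s own computer might look like:&lt;br /&gt;
&lt;br /&gt;
 # hedged sketch: copy data to scratch, then fetch a file from the home directory&lt;br /&gt;
 scp results.tar.gz &amp;lt;userid&amp;gt;@&amp;lt;cea-hostname&amp;gt;:/scratch/&amp;lt;userid&amp;gt;/&lt;br /&gt;
 scp &amp;lt;userid&amp;gt;@&amp;lt;cea-hostname&amp;gt;:/global/u/&amp;lt;userid&amp;gt;/data.csv .&lt;br /&gt;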
&lt;br /&gt;
&#039;&#039;&#039;Table 1&#039;&#039;&#039; below provides a quick summary of the attributes of each of the sub-clusters of the main HPC Center machine, called Arrow.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Master Head Node&lt;br /&gt;
!Sub System&lt;br /&gt;
!Tier&lt;br /&gt;
!Type&lt;br /&gt;
!Type of Jobs&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores&lt;br /&gt;
!GPUs&lt;br /&gt;
!Mem/node&lt;br /&gt;
!Mem/core&lt;br /&gt;
!Chip Type&lt;br /&gt;
!GPU Type and Interface&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;17&amp;quot; |&#039;&#039;&#039;&amp;lt;big&amp;gt;Arrow&amp;lt;/big&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Penzias&lt;br /&gt;
| rowspan=&amp;quot;10&amp;quot; |Advanced&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Hybrid Cluster&lt;br /&gt;
|Sequential &amp;amp; Parallel jobs w/wo GPU&lt;br /&gt;
|66&lt;br /&gt;
|16 &lt;br /&gt;
|2&lt;br /&gt;
|64 GB&lt;br /&gt;
|4 GB&lt;br /&gt;
|SB, EP 2.20 GHz&lt;br /&gt;
|K20m GPU, PCIe v2&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Sequential &amp;amp; Parallel jobs&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |1&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|1500 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|36&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|32 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Appel&lt;br /&gt;
|NUMA&lt;br /&gt;
|Massive Parallel, sequential, OpenMP&lt;br /&gt;
|1&lt;br /&gt;
|384&lt;br /&gt;
| -&lt;br /&gt;
|11 TB&lt;br /&gt;
|28 GB&lt;br /&gt;
|IB, 3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Cryo&lt;br /&gt;
|SMP&lt;br /&gt;
|Sequential and Parallel jobs, with GPU&lt;br /&gt;
|1&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|1500 GB&lt;br /&gt;
|37 GB&lt;br /&gt;
|SL, 2.40 GHz&lt;br /&gt;
|V100 (32GB) GPU, SXM&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Blue Moon&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Hybrid Cluster&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Sequential and Parallel jobs w/wo GPU&lt;br /&gt;
|24&lt;br /&gt;
|32&lt;br /&gt;
| -&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |192 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |6 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SL, 2.10 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|32 &lt;br /&gt;
|2&lt;br /&gt;
|V100(16GB) GPU, PCIe&lt;br /&gt;
|-&lt;br /&gt;
|Karle&lt;br /&gt;
|SMP&lt;br /&gt;
|Visualization, MATLAB/Mathematica&lt;br /&gt;
|1&lt;br /&gt;
|36*&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Chizen&lt;br /&gt;
|Gateway&lt;br /&gt;
|No jobs allowed&lt;br /&gt;
| colspan=&amp;quot;7&amp;quot; | -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CFD&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot; |Parallel, Seq, OpenMP &lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|768 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 4.8 GHz&lt;br /&gt;
|A40, PCIe, v4 &lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |PHYS&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|640 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4 GHz&lt;br /&gt;
|L40, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CHEM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.0 GHz&lt;br /&gt;
|A100/40, SXM&lt;br /&gt;
|-&lt;br /&gt;
|ASRC&lt;br /&gt;
|Condo&lt;br /&gt;
|SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|}&lt;br /&gt;
Note: SB = Intel(R) Sandy Bridge, HL = Intel(R) Haswell, IB = Intel(R) Ivy Bridge, SL = Intel(R) Xeon(R) Gold, ER = AMD(R) EPYC Rome, EM = AMD(R) EPYC Milan, EG = AMD(R) EPYC Genoa&lt;br /&gt;
&lt;br /&gt;
= Recovery of  operational costs =&lt;br /&gt;
CUNY-HPCC, a not-for-profit core research facility affiliated with CUNY, is dedicated to supporting a wide range of research endeavors that necessitate advanced computational resources. &amp;lt;u&amp;gt;Notably, CUNY-HPCC’s operations are not directly or indirectly funded by CUNY or the College of Staten Island (CSI). Consequently, CUNY-HPCC employs a cost recovery model that exclusively recoups operational expenses, without generating any profit for the HPCC.&amp;lt;/u&amp;gt; The recovered costs are meticulously calculated using comprehensive documentation of actual operational expenditures and are designed to achieve a break-even point for all CUNY users. This methodology is approved by CUNY-RF and is employed in other CUNY research facilities. The cost recovery charging schema is based on unit-hour usage, encompassing both CPU and GPU units. Definitions for these units are provided in the accompanying table. &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Definitions of unit-hour&lt;br /&gt;
!Type of resource&lt;br /&gt;
!Unit-hour &lt;br /&gt;
!For V100, A30, A40 or L40&lt;br /&gt;
!For A100&lt;br /&gt;
|-&lt;br /&gt;
|CPU unit &lt;br /&gt;
|1 cpu core/hour&lt;br /&gt;
| --&lt;br /&gt;
| --&lt;br /&gt;
|-&lt;br /&gt;
|GPU unit &lt;br /&gt;
|(4 cpu cores + 1 GPU thread )/hour&lt;br /&gt;
|4 cpu cores + 1 GPU&lt;br /&gt;
|4 cpu cores and 1/7 A100 &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Compute on public resources ===&lt;br /&gt;
The cost recovery model for public (non-condominium) servers offers the following options:&lt;br /&gt;
&lt;br /&gt;
# Minimal Access Plan (MAP)&lt;br /&gt;
# Compute on Demand (CODP)&lt;br /&gt;
# Lease a node(s)&lt;br /&gt;
&lt;br /&gt;
==== Minimal Access Plan (MAP) ====&lt;br /&gt;
Colleges may participate in any of the Minimal Access Plan tiers. The tiered pricing structure is as follows:&lt;br /&gt;
&lt;br /&gt;
- Tier A: $5,000 per year&lt;br /&gt;
&lt;br /&gt;
- Tier B: $10,000 per year&lt;br /&gt;
&lt;br /&gt;
- Tier C: $25,000 per year&lt;br /&gt;
&lt;br /&gt;
Within each tier, the cost is &#039;&#039;&#039;$0.015 per CPU hour and $0.09 per GPU hour.&#039;&#039;&#039; It is important to note that the Minimum Access Plan (MAP) tiers A, B, and C are &#039;&#039;&#039;not all-inclusive.&#039;&#039;&#039; Therefore, even if a college pays for a higher tier, it does not guarantee unlimited use by all of that college’s employees, faculty, and students throughout the year.&lt;br /&gt;
&lt;br /&gt;
# The MAP plan is indirectly linked to the number of hours used. Consequently, the definition of “up to 12 users for B tier” should not be interpreted as “all users up to 12 per college receive unlimited access to HPCC resources.”&lt;br /&gt;
# The number of users per tier is determined by statistical analysis of resource usage and of the average duration of a job across all CUNY institutions. This means that if the usage by a college’s users exceeds the number of hours encoded in the MAP fee, the additional hours will be charged at the preferred rate of $0.015 per CPU hour and $0.09 per GPU hour.&lt;br /&gt;
# Furthermore, the &amp;lt;u&amp;gt;MAP fee may fully cover the expenses for individuals from a given college for a year,&amp;lt;/u&amp;gt; but it may also not. This depends on the actual usage and type of resources, as well as the number of additional individuals from the same college who appear during the year. It is crucial to understand that allocation on HPCC is not restricted; our focus is solely on the completion of tasks. &lt;br /&gt;
# The free 11,520 CPU hours and 1,440 GPU hours are allocated per PI account and project and are available only for MAP-B and MAP-C (see below). These free hours are intended to facilitate project establishment. Consequently, the PI can ask members of the group to use this time and explore new research opportunities. For instance, upon creating an account, a PI will receive free hours; if the PI hires graduate students, they may share these free hours as long as they work on the same project.&lt;br /&gt;
# It is important to note that each GPU requires at least four CPU cores to operate. Therefore, if a user requests one GPU, the request equates to one GPU plus four CPU cores, which is equivalent to $0.15 per hour for that unit (units are explained above). Note that not all GPUs support virtualization, so a unit may include the whole GPU depending on the GPU type used. A worked example of this arithmetic is given below the list.&lt;br /&gt;
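&lt;br /&gt;
The figures in the example tables below are consistent with a simple additive estimate (this formula is inferred from the listed examples, not an official statement): hourly cost = (CPU cores × CPU-hour rate) + (GPUs × GPU-hour rate). For instance, under MAP, 4 cores + 1 GPU cost 4 × $0.015 + 1 × $0.09 = $0.15 per hour, and 16 cores + 2 GPUs cost 16 × $0.015 + 2 × $0.09 = $0.42 per hour.&lt;br /&gt;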
&lt;br /&gt;
==== Compute on demand (CODP) ====&lt;br /&gt;
Users from colleges that do not participate in MAP (A,B,C) are charged &#039;&#039;&#039;$0.018 per CPU hour and $0.11 per GPU hour.&#039;&#039;&#039; There is no free time associated with CODP.  &lt;br /&gt;
&lt;br /&gt;
==== Lease a public node(s) ====&lt;br /&gt;
Users may lease node(s) for a project. This ensures them 100% access with no time or job limitations on the leased resource. The minimum lease time is 30 days (one month). Longer leases (more than 90 days) receive a 10% discount. &#039;&#039;&#039;MAP users&#039;&#039;&#039; are charged between $172 and $950 per month (see below), depending on the type of node. &#039;&#039;&#039;Non-MAP&#039;&#039;&#039; users are charged between $249.58 and $1,399 per month, depending on the type of node. Please see below for details and examples.&lt;br /&gt;
&lt;br /&gt;
=== Condo ===&lt;br /&gt;
Condo users only pay for infrastructure support.  The annual fee depends on the type of node and ranges from $1,540 to $4,520 per year.   &lt;br /&gt;
&lt;br /&gt;
==== Lease on condo node(s) ====&lt;br /&gt;
Users may lease condo nodes for a project, ensuring them 100% access and eliminating any time or job limitations on the leased resource. The minimum lease duration &#039;&#039;&#039;is 30 days (one month).&#039;&#039;&#039; Longer leases (more than 90 days) receive a 10% discount. The lease of a condo node is contingent upon the owner of the required node agreeing to lease it for a specific period. Users interested in leasing a condo node of a particular type must contact the HPCC director for options. Lease fees vary depending on the type of node and are currently between $230 and $1,100 per month (note that prices are reviewed once every six months).&lt;br /&gt;
&lt;br /&gt;
=== Storage ===&lt;br /&gt;
Storage costs are $60 per TB per year, backup costs are $45 per TB per year, and archive costs are $35 per TB per year. The first 50 GB of scratch storage are free. Prices are calculated at the end of each month.&lt;br /&gt;
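For example (illustrative arithmetic based on the rates above), keeping 2 TB on storage with backup for one year would cost 2 × ($60 + $45) = $210.&lt;br /&gt;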
&lt;br /&gt;
# Additionally, note that &#039;&#039;&#039;file transfers from/to HPCC are free&#039;&#039;&#039;, but it is important to consider the CUNY network speed, which is significantly slower than modern standards. This will impact the time required to download large data sets. For large data sets, HPCC uses and recommends secure parallel transfer via Globus.&lt;br /&gt;
# All services associated with data and storage provided by HPCC are free for CUNY users.&lt;br /&gt;
&lt;br /&gt;
=== HPCC access plans details and examples  ===&lt;br /&gt;
a.     &#039;&#039;&#039;Minimum access (MAP):&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Minimum Access is designed to provide extensive support for research activities across various colleges, foster collaboration between institutions, facilitate the establishment of new research projects, and serve as a testing ground for innovative studies. MAP accounts operate under a stringent fair share policy, which determines the actual waiting time for job allocation in a queue based on the resources utilized by that account in previous cycles. Furthermore, all jobs are subject to strict time constraints. Consequently, extended jobs necessitate the implementation of checkpoints.&lt;br /&gt;
&lt;br /&gt;
The MAP offers three tiers of access:&lt;br /&gt;
&lt;br /&gt;
· A: The Basic tier incurs a yearly fee of $5,000. It is tailored to support users from colleges with limited research activities. The fee covers infrastructure expenses associated with one to two users from these colleges.&lt;br /&gt;
&lt;br /&gt;
· B: The Medium tier incurs a yearly fee of $15,000. It covers infrastructure expenses for up to twelve users from these colleges. Additionally, every account under the Medium tier receives complimentary 11,520 CPU hours and 1,440 GPU hours upon account creation.&lt;br /&gt;
&lt;br /&gt;
· C: The Advanced tier incurs a yearly fee of $25,000. It covers infrastructure expenses for all users from these colleges. Furthermore, every new account from this tier receives complimentary 11,520 CPU hours and 1,440 GPU hours upon account creation.&lt;br /&gt;
&lt;br /&gt;
MAP users are charged per CPU/GPU hour at the low rate of &#039;&#039;&#039;&amp;lt;u&amp;gt;$0.015 per CPU hour and $0.09 per GPU hour&amp;lt;/u&amp;gt;.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for MAP users example&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.015/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.24/hour&lt;br /&gt;
|-&lt;br /&gt;
| 4 cores + 1 GPU &lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.15/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.33/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2  &lt;br /&gt;
|$0.42/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2  &lt;br /&gt;
|$0.66/hour&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$1.32/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
b.     &#039;&#039;&#039;Computing on demand (CODP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The Computing on Demand plan (CODP) is open to all users from CUNY colleges that do not participate in the MAP plan but want to use HPCC resources. CODP accounts operate under a strict fair share policy, so the actual waiting time for a job in a queue depends on resources previously used. In addition, all jobs have time limitations, so long jobs must use check-points. Users in CODP are charged for the time (CPU and GPU) per hour. The current rates are &#039;&#039;&#039;$0.018 per CPU hour and $0.11 per GPU hour.&#039;&#039;&#039; Unlike MAP, new CODP accounts do not come with free time. Invoices are generated and sent to users (PIs only) at the end of each month. The examples in the following table explain the fee structure:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for CODP plan&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.018/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.288/hour&lt;br /&gt;
|-&lt;br /&gt;
|4 cores + 1 GPU&lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.293/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.334/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 1 GPU&lt;br /&gt;
|32&lt;br /&gt;
|1  &lt;br /&gt;
|$0.666/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU &lt;br /&gt;
|32&lt;br /&gt;
|2&lt;br /&gt;
|$0.756/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
c.  &#039;&#039;&#039;Leasing public node(s) (LNP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The lease-node plan allows users to lease node(s) for the duration of a project. The minimum lease time is 30 days (one month), but leases of any length are possible. A discount of 10% is given to users whose lease is longer than 90 days. Discounts cannot be combined. Unlike MAP and CODP users, LNP users do not compete for resources and have full 24/7 access to the leased resources.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for lease node(s) for MAP users&lt;br /&gt;
|Job (MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/30 days&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$172.80&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$264.96&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2&lt;br /&gt;
|$302.40&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$475.20&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$760.0&lt;br /&gt;
|-&lt;br /&gt;
|64 cores + 8 GPU&lt;br /&gt;
|64&lt;br /&gt;
|8&lt;br /&gt;
|$950.40&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for lease a node(s) for &amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; non &amp;lt;/span&amp;gt;-MAP users&lt;br /&gt;
|Job (non-MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/month&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$249.82&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$497.64&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2 &lt;br /&gt;
|$443.23&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$886.64&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40 &lt;br /&gt;
|8&lt;br /&gt;
|$1399.68&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
d.     &#039;&#039;&#039;Condo Ownership (COP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Condo describes a model in which user(s) own a node/server managed by HPCC. Only full-time faculty can own a condo node. Condo nodes are fully integrated into the HPCC infrastructure. The owners pay only HPCC’s infrastructure-support operational fee, which includes only a proportional part of the licenses and materials needed for day-to-day operations. The fees are reviewed twice a year and are currently $0.003 per CPU hour and $0.02 per GPU hour. Condo owners can “borrow” (upon agreement) free of charge any node(s) from the condo stack, and can also lease (for a higher fee; see below) their own nodes to non-condo users. The minimum lease time is 30 days. The fees collected from non-condo users offset the payments of the owner.&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Condo owners costs per year&lt;br /&gt;
|Type of condo node&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/year&lt;br /&gt;
|-&lt;br /&gt;
|Large hybrid SXM&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|$4518.92 &lt;br /&gt;
|-&lt;br /&gt;
|Small hybrid&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|$1540.54&lt;br /&gt;
|-&lt;br /&gt;
|Medium compute&lt;br /&gt;
|96&lt;br /&gt;
|0&lt;br /&gt;
|$2464.86&lt;br /&gt;
|-&lt;br /&gt;
|Large compute&lt;br /&gt;
|128&lt;br /&gt;
|0&lt;br /&gt;
|$3286.49&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Condo owners can lease their node(s) to non-condo users. The lease period is unlimited, with a minimum length of 30 days. The table below shows the payments non-condo users make to the condo owners. These fees are accumulated in the owner’s account(s) and offset the owner’s fees. A discount of 10% is applied for leases longer than 90 days.&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Type of nodes and monthly lease fees for condo nodes&lt;br /&gt;
!Type of node&lt;br /&gt;
!Renters cost/month&lt;br /&gt;
!Long term (90+ days) rent cost/month&lt;br /&gt;
!CPU/node&lt;br /&gt;
!CPU type&lt;br /&gt;
!GPU/node &lt;br /&gt;
!GPU type&lt;br /&gt;
!GPU interface&lt;br /&gt;
|-&lt;br /&gt;
|Large Hybrid&lt;br /&gt;
|$602.52&lt;br /&gt;
|$564.86&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.2 GHz&lt;br /&gt;
|8&lt;br /&gt;
|A100/80&lt;br /&gt;
|SXM&lt;br /&gt;
|-&lt;br /&gt;
|Small Hybrid&lt;br /&gt;
|$205.41&lt;br /&gt;
|$192.57&lt;br /&gt;
|48&lt;br /&gt;
|EPYC, 2.8 GHz&lt;br /&gt;
|2&lt;br /&gt;
|A40, A30, L40&lt;br /&gt;
|PCIe v4&lt;br /&gt;
|-&lt;br /&gt;
|Medium Non GPU&lt;br /&gt;
|$328.65&lt;br /&gt;
|$308.11&lt;br /&gt;
|96&lt;br /&gt;
|EPYC, 4.11GHz&lt;br /&gt;
|0&lt;br /&gt;
|None&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|Large Non GPU&lt;br /&gt;
|$438.20&lt;br /&gt;
|$410.81&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.0 GHz&lt;br /&gt;
|0&lt;br /&gt;
|None &lt;br /&gt;
|NA&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Free time ===&lt;br /&gt;
Any new project &#039;&#039;&#039;from CUNY colleges/centers that participate in MAP-B or MAP-C&#039;&#039;&#039; is entitled to &#039;&#039;&#039;11,520 free CPU hours and 1,440 free GPU hours.&#039;&#039;&#039; Users under &#039;&#039;&#039;MAP-A&#039;&#039;&#039; are not entitled to free time. The free compute hours are intended to help establish a project and are therefore shared among all members of the project; they can be used either by the PI or by any number of the project&#039;s members. It is important to note that &amp;lt;u&amp;gt;free time is per project, not per user account, so any project can have free time only once. External collaborators to CUNY are not normally eligible for free time.&amp;lt;/u&amp;gt; Additional hours beyond the free time are charged at MAP plan rates. &#039;&#039;&#039;&amp;lt;u&amp;gt;Please contact the CUNY-HPCC director for further details.&amp;lt;/u&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Support for research grants =&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;u&amp;gt;All proposals dated January 1st, 2026 (&amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; 01/01/26 &amp;lt;/span&amp;gt;) or later&amp;lt;/u&amp;gt;&#039;&#039;&#039; that require computational resources &#039;&#039;&#039;&amp;lt;u&amp;gt;must include a budget for cost recovery fees at CUNY-HPCC.&amp;lt;/u&amp;gt;&#039;&#039;&#039; For a project, the PI can choose among the following options:&lt;br /&gt;
&lt;br /&gt;
* Lease node(s). This is a useful option for well-defined projects and those with a high computational component requiring 100% availability of the computational resource.&lt;br /&gt;
* Use &amp;quot;on-demand&amp;quot; resources. This is a flexible option, good for experimental projects or for exploring new areas of study. The drawback is that resources are shared among all users under the fair share policy, so immediate access to a resource cannot be guaranteed.&lt;br /&gt;
* Participate in the CONDO tier. This is the most beneficial option in terms of availability of resources and level of support. It best fits the focused research of a group or groups (e.g. materials science).&lt;br /&gt;
&lt;br /&gt;
In all cases the PI can use the appropriate rates listed above to establish a correct budget for the proposal. The PI should &#039;&#039;&#039;&amp;lt;u&amp;gt;contact the Director of CUNY-HPCC, Dr. Alexander Tzanov&amp;lt;/u&amp;gt;&#039;&#039;&#039; (alexander.tzanov@csi.cuny.edu) to discuss the project&#039;s computational requirements, including optimal and most economical computational workflows, suitable hardware, shared or owned resources, CUNY-HPCC support options, and any other matter concerning a correct and optimal computational budget for the proposal.&lt;br /&gt;
&lt;br /&gt;
= Partitions and jobs =&lt;br /&gt;
The only way to submit job(s) to the HPCC servers is through the SLURM batch system. Any job, regardless of its type (interactive, batch, serial, parallel, etc.), must be submitted via SLURM, which allocates the requested resources on the proper server and starts the job(s) according to a predefined strict fair share policy. Computational resources (CPU cores, memory, GPUs) are organized in &#039;&#039;&#039;partitions&#039;&#039;&#039;. Users are granted permission to use one or another partition and the corresponding QOS key. The table below shows the partitions and their limitations (in progress); a minimal example batch script is given after the partition list that follows the table.&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Partition&lt;br /&gt;
!Max cores/job&lt;br /&gt;
!Max jobs/user&lt;br /&gt;
!Total cores/group&lt;br /&gt;
!Time limits&lt;br /&gt;
!Tier&lt;br /&gt;
!&lt;br /&gt;
!GPU types&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|partnsf&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|K20m, V100/16, A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partchem&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A100/80, A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partcfd&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partsym&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partasrc&lt;br /&gt;
|48&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabD&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|V100/16,A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabN&lt;br /&gt;
|384&lt;br /&gt;
|50&lt;br /&gt;
|384&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|None&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partphys&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|L40&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;partnsf&#039;&#039;&#039; is the main partition with assigned resources across all sub-servers. Users may submit sequential, thread parallel or distributed parallel jobs with or without GPU.&lt;br /&gt;
* &#039;&#039;&#039;partchem&#039;&#039;&#039;  is CONDO partition.  &lt;br /&gt;
* &#039;&#039;&#039;partphys&#039;&#039;&#039;   is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partsym&#039;&#039;&#039;    is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partasrc&#039;&#039;&#039;   is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partmatlabD&#039;&#039;&#039; partition allows running MATLAB&#039;s Distributed Parallel Server across the main cluster.&lt;br /&gt;
* &#039;&#039;&#039;partmatlabN&#039;&#039;&#039; partition provides access to the large MATLAB node with 384 cores and 11 TB of shared memory. It is useful for running parallel MATLAB jobs with the Parallel Toolbox.&lt;br /&gt;
* &#039;&#039;&#039;partdev&#039;&#039;&#039; is dedicated to development. All HPCC users have access to this partition, with assigned resources of one computational node with 16 cores, 64 GB of memory, and 2 GPUs (K20m). This partition has a time limit of 4 hours.&lt;br /&gt;
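&lt;br /&gt;
A minimal sketch of a batch script (the job name, QOS key, resource amounts, GPU request syntax, and executable are placeholders; the exact options are given in the brief SLURM manual distributed with new accounts):&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --job-name=example        # hypothetical job name&lt;br /&gt;
 #SBATCH --partition=partnsf       # one of the partitions listed above&lt;br /&gt;
 #SBATCH --qos=&amp;lt;qos-key&amp;gt;           # QOS key granted with the account&lt;br /&gt;
 #SBATCH --ntasks=1&lt;br /&gt;
 #SBATCH --cpus-per-task=4         # CPU cores for the task&lt;br /&gt;
 #SBATCH --gres=gpu:1              # optional GPU request; site syntax may differ&lt;br /&gt;
 #SBATCH --time=04:00:00           # wall-clock limit, within the partition limits&lt;br /&gt;
 srun ./my_program                 # hypothetical executable staged to /scratch&lt;br /&gt;
&lt;br /&gt;
The script is submitted with sbatch, and job status can be checked with squeue -u &amp;lt;userid&amp;gt;.&lt;br /&gt;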
&lt;br /&gt;
= Hours of Operation =&lt;br /&gt;
In order to maximize the use of resources, HPCC applies a “rolling” maintenance scheme across all systems. When downtime is needed, HPCC will notify all users a week or more in advance (unless an emergency situation occurs). Typically, the fourth Tuesday morning of the month, from 8:00 AM to 12:00 PM, is reserved (but not always used) for scheduled maintenance. Please plan accordingly. Unplanned maintenance to remedy system-related problems may be scheduled as needed outside the above-mentioned days. Reasonable attempts will be made to inform users running on those systems when these needs arise. Note that users are strongly encouraged to use checkpoints in their jobs.&lt;br /&gt;
&lt;br /&gt;
= User Support =&lt;br /&gt;
Users are strongly encouraged to read this Wiki carefully before submitting ticket(s) for help. In particular, the sections on compiling and running parallel programs, and the section on the SLURM batch queueing system will give you the essential knowledge needed to use the CUNY HPCC systems.  We have strived to maintain the most uniform user applications environment possible across the Center&#039;s systems to ease the transfer of applications&lt;br /&gt;
and run scripts among them.  &lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY&lt;br /&gt;
community in parallel programming techniques, HPC computing architecture, and the essentials of using our&lt;br /&gt;
systems. Please follow our mailings on the subject and feel free to inquire about such courses. We regularly schedule training visits and classes at the various CUNY campuses. Please let us know if such a training visit is of interest. In the past, topics have included an overview of parallel programming, GPU programming and architecture, using the evolutionary biology software at the HPC Center, the SLURM queueing system at the CUNY HPC Center, mixed GPU-MPI and OpenMP programming, etc. Staff have also presented guest lectures in formal classes throughout the CUNY campuses.&lt;br /&gt;
&lt;br /&gt;
If you have problems accessing your account and cannot login to the ticketing service, please send an email to:&lt;br /&gt;
&lt;br /&gt;
  [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu] &lt;br /&gt;
&lt;br /&gt;
= Warnings and modes of operation =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. hpchelp@csi.cuny.edu is for questions and account-help communication &#039;&#039;&#039;only&#039;&#039;&#039; and does not accept tickets unless the ticketing system is not operational. For tickets, please use the ticketing system mentioned above. This ensures that the staff member with the most appropriate skill set and job-related responsibility will respond to your questions. During the business week you should expect a response within 48 hours, often even a same-day response. During the weekend you may not get any response.&lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;&#039;E-mails to hpchelp@csi.cuny.edu must have a valid CUNY e-mail as the reply address.&#039;&#039;&#039; Messages originating from public mailers (Google, Hotmail, etc.) are filtered out.&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;&#039;Do not send questions to individual CUNY HPC Center staff members directly.&#039;&#039;&#039;  These will be returned to the sender with a polite request to submit a ticket or email the Helpline.  This applies to replies to initial questions as well.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff members are focused on providing high quality support to its user community, but compared&lt;br /&gt;
to other HPC Centers of similar size &#039;&#039;&#039;our staff is extremely lean&#039;&#039;&#039;.  Please make full use of the tools that we have provided (especially&lt;br /&gt;
the Wiki), and feel free to offer suggestions for improved service.  We hope and expect your experience in using&lt;br /&gt;
our systems will be predictably good and productive.&lt;br /&gt;
&lt;br /&gt;
= User Manual =&lt;br /&gt;
The old version of the user manual provides PBS, not SLURM, batch scripts as examples. CUNY-HPCC currently uses the SLURM scheduler, so users must check and use only the updated brief SLURM manual distributed with new accounts, or ask CUNY-HPCC for a copy of it.&lt;/div&gt;</summary>
		<author><name>Alex</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=1001</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=1001"/>
		<updated>2026-04-19T16:35:16Z</updated>

		<summary type="html">&lt;p&gt;Alex: /* Lease on condo node(s) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
[[Image:hpcc-panorama3.png]]&lt;br /&gt;
&lt;br /&gt;
== Mission of CUNY-HPCC ==&lt;br /&gt;
The City University of New York (CUNY) High Performance Computing Center (HPCC) serves as a pivotal research and educational hub for the university. Situated on the campus of the College of Staten Island, located at 2800 Victory Boulevard, Staten Island, New York 10314, the center’s primary objective is to enhance educational opportunities and foster scientific research and discovery within the university. This is achieved through the management of state-of-the-art computing infrastructure and the provision of comprehensive research support services. Notably, CUNY-HPCC offers domain-specific expertise in various aspects of computationally intensive research. Furthermore, CUNY’s membership in the Empire AI (EAI) consortium positions CUNY-HPCC as a stepping stone for CUNY researchers seeking access to EAI advanced facilities.   &lt;br /&gt;
&lt;br /&gt;
== Empire AI and CUNY-HPCC ==&lt;br /&gt;
The Empire AI Consortium comprises the &#039;&#039;&#039;CUNY Graduate Center, Columbia University, Cornell University, Icahn School of Medicine, New York University, Rochester Institute of Technology, Rensselaer Polytechnic Institute, the State University of New York, University at Buffalo, and University of Rochester.&#039;&#039;&#039; CUNY-HPCC provides support and maintains tickets for all CUNY users with allocation on EAI. Additionally, CUNY-HPCC serves as a stepping stone for CUNY researchers as it operates (on a smaller scale) architectures (including nodes with Hopper) similar to EAI, including extended “Alpha” servers and new “Beta” computers. The latter will consist of 288 B200 GPUs and recently added RTX 6000 Pro nodes. The anticipated cost for EAI is $0.50 per unit (SU), which will provide CUNY PIs with a rate that is significantly lower than a typical AWS rate. One SU corresponds to one hour of H100 compute, and one hour of B200 compute corresponds to two SU. In comparison, the CUNY-HPCC recovery costs for public servers are $0.015 per CPU hour (1 unit) and $0.09 per GPU hour (6 units). For further details, please refer to the section on HPCC access plans.     &lt;br /&gt;
&lt;br /&gt;
== CUNY-HPCC services ==&lt;br /&gt;
     &lt;br /&gt;
CUNY-HPCC offers a professionally maintained, modern computational environment and architectures, along with advanced storage and fast interconnects. CUNY-HPCC serves the following purposes:     &lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;*&amp;lt;/nowiki&amp;gt; Supports research computing at CUNY, benefiting faculty, their collaborators at other universities, and their public and private sector partners. It also supports CUNY students and research staff.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;*&amp;lt;/nowiki&amp;gt; Provides state-of-the-art computing resources and comprehensive research support services, including expertise and full support for users with allocation on EMPIRE-AI.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;*&amp;lt;/nowiki&amp;gt; Creates opportunities for the CUNY research community to establish new partnerships with the government and private sectors.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;*&amp;lt;/nowiki&amp;gt; Utilizes HPCC capabilities to acquire additional research resources for its faculty and graduate students in existing and major new programs.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;*&amp;lt;/nowiki&amp;gt; Maintains tickets for all CUNY users with allocation on EAI. &lt;br /&gt;
&lt;br /&gt;
== Organization of systems and data storage (architecture) ==&lt;br /&gt;
All user data and project data are kept on  Parallel File System Storage (PFSS) which is mounted only on login node(s) of all servers. It holds both user directories and specific partition  called  &#039;&#039;&#039;/scratch (see below).&#039;&#039;&#039; The main features of &#039;&#039;&#039;/scratch&#039;&#039;&#039;  partition are&#039;&#039;&#039;: 1.&#039;&#039;&#039; It is mounted on all computational nodes and on all login nodes &#039;&#039;&#039;2.&#039;&#039;&#039; It is fast &#039;&#039;&#039;3.&#039;&#039;&#039; &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition is temporary space, but is &#039;&#039;&#039;not  a home directory&#039;&#039;&#039;  for accounts nor can be used for long term data preservation.  Users must use &amp;quot;staging&amp;quot; procedure described below to ensure preservation of their data, codes and parameters files. The figure below is a schematic of the environment.   &lt;br /&gt;
&lt;br /&gt;
Upon  registering with HPCC every user will get 2 directories:&lt;br /&gt;
&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – this is temporary workspace on the HPC systems. Currently scratch resides on the same file system as global/u.&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – space for “home directory”, i.e., storage space on the DSMS for program, scripts, and data&lt;br /&gt;
&lt;br /&gt;
:•	In some instances a user will also have use of disk space on the DSMS in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;projectid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; (IRods). &lt;br /&gt;
[[File:HPCC_structure.png|center|frameless|900x900px]]&lt;br /&gt;
The &#039;&#039;&#039;/global/u/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory has quota (see below for details) while  the &#039;&#039;&#039;/scratch/&amp;lt;userid&amp;gt;&#039;&#039;&#039; do not have. However the &#039;&#039;&#039;/scratch&#039;&#039;&#039; space is cleaned up  following the rules described below. There are no guarantees of any kind that files in &#039;&#039;&#039;/scratch&#039;&#039;&#039;  will be preserved during the hardware crashes or cleaning up.  Access to all HPCC resources is provided by bastion host called &#039;&amp;lt;nowiki/&amp;gt;&#039;&#039;&#039;&#039;&#039;chizen&#039;&#039;&#039;&#039;&#039;.  The Data Transfer Node called &#039;&#039;&#039;Cea&#039;&#039;&#039; allows file transfer from/to remote sites directly to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; or to/from   &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;         &lt;br /&gt;
&lt;br /&gt;
==Computing architectures at HPCC==&lt;br /&gt;
&lt;br /&gt;
The HPC Center employs a diverse range of architectures to accommodate intricate and demanding workflows. All computational resources of various types are consolidated into a single hybrid cluster known as Arrow. This cluster comprises symmetric multiprocessor (SMP) nodes equipped with and without GPUs, distributed shared memory (NUMA) node(s), high-memory nodes, and advanced SMP nodes featuring multiple GPUs. The number of GPUs per node varies between two and eight, along with the utilized GPU interface and GPU family. Consequently, the fundamental GPU nodes are equipped with two Tesla K20m GPUs connected via the PCIe interface, while the most advanced nodes support eight Ampere A100 GPUs connected via the SXM interface.    &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Overview of Computational architectures&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SMP&#039;&#039;&#039; servers have several processors (working under a single operating system) which &amp;quot;share everything&amp;quot;.  Thus  all cpu-cores allocate a common memory block via shared bus or data path. SMP servers support all combinations of memory VS cpu (up to the limits of the particular computer). The SMP servers are commonly used to run sequential or thread parallel (e.g. OpenMP) jobs and they may have or may not have GPU.  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Cluster&#039;&#039;&#039; is defined as a single system comprising set of servers interconnected with high performance network. Specific software coordinates  programs on and/or across those in order to  perform computationally intensive tasks. The most common cluster type is the one that consists of several identical SMP servers connected via fast interconnect.  Each SMP member of the cluster is called a &#039;&#039;&#039;node&#039;&#039;&#039;. All nodes run independent copies of the same operating system (OS). Some or all of the nodes may incorporate GPU.  &lt;br /&gt;
&lt;br /&gt;
Hybrid clusters combine nodes of different architectures. For instance the main CUNY-HPCC machine is a hybrid cluster called &#039;&#039;&#039;Arrow&#039;&#039;&#039;.  Sixty two (62) of its nodes are identical GPU enabled SMP servers each with 2 x GPU K20m, 3 are SMP but with extended memory (fat nodes), one node is distributed shared memory  node (NUMA, see below) and 2 are fat SMP servers especially designed to support 8 NVIDIA GPU per node. The latter are connected via SXM interface. In addition HPCC operates the cluster &#039;&#039;&#039;Herbert&#039;&#039;&#039; dedicated only to education. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Distributed shared memory&#039;&#039;&#039; computer is tightly coupled server in which the memory is physically distributed, but it is logically unified as a single block. The system resembles SMP, but the number of cpu cores and the amounts of memory possible is far beyond limitations of the SMP.  Because the memory is distributed, the access times across address space are non-uniform. Thus, this architecture is called Non Uniform Memory Access (NUMA) architecture.  Similarly to SMP, the &#039;&#039;&#039;NUMA&#039;&#039;&#039; systems are typically used for applications such as data mining and decision support system in which processing can be parceled out to a number of processors that collectively work on a common data. HPCC operates the &#039;&#039;&#039;NUMA&#039;&#039;&#039; node at Arrow named &#039;&#039;&#039;Appel&#039;&#039;&#039;.  This node does not have GPU. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039; Infrastructure systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
o	Master Head Node (&#039;&#039;&#039;MHN/Arrow)&#039;&#039;&#039; is a redundant login node from which all jobs on all servers start. This server is not directly accessible from outside CSI campus. Note that name of main server and its login nodes are the same Arrow. Thus users can access the Arrow login nodes using name Arrow or MHN.  &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Chizen&#039;&#039;&#039; is a redundant gateway server which provides access to protected HPCC domain.&lt;br /&gt;
&lt;br /&gt;
o	 &#039;&#039;&#039;Cea&#039;&#039;&#039; is a file transfer node allowing transfer of files between users’ computers and the /scratch space or /global/u/&amp;lt;userid&amp;gt;. &#039;&#039;&#039;Cea&#039;&#039;&#039; is accessible directly (not only via &#039;&#039;&#039;Chizen&#039;&#039;&#039;), but allows only a limited set of shell commands.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Table 1&#039;&#039;&#039; below provides a quick summary of the attributes of each of the sub-clusters of the main HPC Center machine, called Arrow.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Master Head Node&lt;br /&gt;
!Sub System&lt;br /&gt;
!Tier&lt;br /&gt;
!Type&lt;br /&gt;
!Type of Jobs&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores&lt;br /&gt;
!GPUs&lt;br /&gt;
!Mem/node&lt;br /&gt;
!Mem/core&lt;br /&gt;
!Chip Type&lt;br /&gt;
!GPU Type and Interface&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;17&amp;quot; |&#039;&#039;&#039;&amp;lt;big&amp;gt;Arrow&amp;lt;/big&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Penzias&lt;br /&gt;
| rowspan=&amp;quot;10&amp;quot; |Advanced&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Hybrid Cluster&lt;br /&gt;
|Sequential &amp;amp; Parallel jobs w/wo GPU&lt;br /&gt;
|66&lt;br /&gt;
|16 &lt;br /&gt;
|2&lt;br /&gt;
|64 GB&lt;br /&gt;
|4 GB&lt;br /&gt;
|SB, EP 2.20 GHz&lt;br /&gt;
|K20m GPU, PCIe v2&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Sequential &amp;amp; Parallel jobs&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |1&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|1500 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|36&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|32 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Appel&lt;br /&gt;
|NUMA&lt;br /&gt;
|Massive Parallel, sequential, OpenMP&lt;br /&gt;
|1&lt;br /&gt;
|384&lt;br /&gt;
| -&lt;br /&gt;
|11 TB&lt;br /&gt;
|28 GB&lt;br /&gt;
|IB, 3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Cryo&lt;br /&gt;
|SMP&lt;br /&gt;
|Sequential and Parallel jobs, with GPU&lt;br /&gt;
|1&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|1500 GB&lt;br /&gt;
|37 GB&lt;br /&gt;
|SL, 2.40 GHz&lt;br /&gt;
|V100 (32GB) GPU, SXM&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Blue Moon&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Hybrid Cluster&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Sequential and Parallel jobs w/wo GPU&lt;br /&gt;
|24&lt;br /&gt;
|32&lt;br /&gt;
| -&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |192 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |6 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SL, 2.10 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|32 &lt;br /&gt;
|2&lt;br /&gt;
|V100(16GB) GPU, PCIe&lt;br /&gt;
|-&lt;br /&gt;
|Karle&lt;br /&gt;
|SMP&lt;br /&gt;
|Visualization, MATLAB/Mathematica&lt;br /&gt;
|1&lt;br /&gt;
|36*&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Chizen&lt;br /&gt;
|Gateway&lt;br /&gt;
|No jobs allowed&lt;br /&gt;
| colspan=&amp;quot;7&amp;quot; | -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CFD&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot; |Parallel, Seq, OpenMP &lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|768 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 4.8 GHz&lt;br /&gt;
|A40, PCIe, v4 &lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |PHYS&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|640 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4 GHz&lt;br /&gt;
|L40, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CHEM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.0 GHz&lt;br /&gt;
|A100/40, SXM&lt;br /&gt;
|-&lt;br /&gt;
|ASRC&lt;br /&gt;
|Condo&lt;br /&gt;
|SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|}&lt;br /&gt;
Note: SB = Intel(R) Sandy Bridge, HL = Intel(R) Haswell, IB = Intel(R) Ivy Bridge, SL = Intel(R) Xeon(R) Gold, ER = AMD(R) EPYC ROME, EM = AMD(R) EPYC MILAN, EG = AMD(R) EPYC GENOA&lt;br /&gt;
&lt;br /&gt;
= Recovery of  operational costs =&lt;br /&gt;
CUNY-HPCC, a not-for-profit core research facility affiliated with CUNY, is dedicated to supporting a wide range of research endeavors that necessitate advanced computational resources. &amp;lt;u&amp;gt;Notably, CUNY-HPCC’s operations are not directly or indirectly funded by CUNY or the College of Staten Island (CSI). Consequently, CUNY-HPCC employs a cost recovery model that exclusively recoups operational expenses, without generating any profit for the HPCC.&amp;lt;/u&amp;gt; The recovered costs are meticulously calculated using comprehensive documentation of actual operational expenditures and are designed to achieve a break-even point for all CUNY users. This methodology is approved by CUNY-RF and is employed in other CUNY research facilities. The cost recovery charging schema is based on unit-hour usage, encompassing both CPU and GPU units. Definitions for these units are provided in the accompanying table. &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Definitions of unit-hour&lt;br /&gt;
!Type of resource&lt;br /&gt;
!Unit-hour &lt;br /&gt;
!For V100, A30, A40 or L40&lt;br /&gt;
!For A100&lt;br /&gt;
|-&lt;br /&gt;
|CPU unit &lt;br /&gt;
|1 cpu core/hour&lt;br /&gt;
| --&lt;br /&gt;
| --&lt;br /&gt;
|-&lt;br /&gt;
|GPU unit &lt;br /&gt;
|(4 cpu cores + 1 GPU thread )/hour&lt;br /&gt;
|4 cpu cores + 1 GPU&lt;br /&gt;
|4 cpu cores and 1/7 A100 &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Compute on public resources ===&lt;br /&gt;
The cost recovery model for public (non-condominium) servers offers the following options:&lt;br /&gt;
&lt;br /&gt;
# Minimal Access Plan (MAP)&lt;br /&gt;
# Compute on Demand (CODP)&lt;br /&gt;
# Lease a node(s)&lt;br /&gt;
&lt;br /&gt;
==== Minimal Access Plan (MAP) ====&lt;br /&gt;
Colleges may participate in any of the Minimal Access Plan tiers. The tiered pricing structure is as follows:&lt;br /&gt;
&lt;br /&gt;
- Tier A: $5,000 per year&lt;br /&gt;
&lt;br /&gt;
- Tier B: $10,000 per year&lt;br /&gt;
&lt;br /&gt;
- Tier C: $25,000 per year&lt;br /&gt;
&lt;br /&gt;
Within each tier, the cost is &#039;&#039;&#039;$0.015 per CPU hour and $0.09 per GPU hour.&#039;&#039;&#039; It is important to note that the Minimal Access Plan (MAP) tiers A, B, and C are &#039;&#039;&#039;not all-inclusive.&#039;&#039;&#039; Therefore, even if a college pays for a higher tier, this does not guarantee unlimited use by all of that college’s employees, faculty, and students throughout the year.&lt;br /&gt;
&lt;br /&gt;
# The MAP fee is only indirectly linked to the number of hours used. Consequently, the definition of “up to 12 users for B tier” should not be interpreted as “all users, up to 12 per college, receive unlimited access to HPCC resources.”&lt;br /&gt;
# The number of users per tier is determined by statistical analysis of resource usage and of the average job duration across all CUNY institutions. This means that if usage by a college exceeds the number of hours covered by the MAP fee, the additional hours will be charged at the preferred rate of $0.015 per CPU hour and $0.09 per GPU hour.&lt;br /&gt;
# Furthermore, the &amp;lt;u&amp;gt;MAP fee may fully cover the expenses of individuals from a given college for a year,&amp;lt;/u&amp;gt; but it may not. This depends on the actual usage and type of resources, as well as on the number of additional individuals from the same college who appear during the year. It is crucial to understand that allocation on HPCC is not restricted; our focus is solely on the completion of tasks.&lt;br /&gt;
# The free 11,520 CPU hours and 1,440 GPU hours are allocated per PI account and project and are available only for MAP-B and MAP-C (see below). These free hours are intended to facilitate project establishment; the PI can request that members of the group use this time to explore new research opportunities. For instance, upon creating an account the PI will receive the free hours; if the PI then hires graduate students, they may share these free hours provided they work on the same project.&lt;br /&gt;
# It is important to note that each GPU requires at least four CPU cores to operate. Therefore, a request for one GPU is counted as one GPU plus four CPU cores, i.e. 4 x $0.015 + $0.09 = $0.15 per unit hour for that unit (units are defined above). Note that not all GPUs support virtualization, so depending on the GPU type a unit may include the whole GPU.&lt;br /&gt;
&lt;br /&gt;
==== Compute on demand (CODP) ====&lt;br /&gt;
Users from colleges that do not participate in MAP (A,B,C) are charged &#039;&#039;&#039;$0.018 per CPU hour and $0.11 per GPU hour.&#039;&#039;&#039; There is no free time associated with CODP.  &lt;br /&gt;
&lt;br /&gt;
==== Lease a public node(s) ====&lt;br /&gt;
Users may lease node(s) for a project. This ensures 100% access and no time or job limitations on the leased resource. The minimum lease time is 30 days (one month). Longer leases (more than 90 days) receive a 10% discount. &#039;&#039;&#039;MAP users&#039;&#039;&#039; are charged between $172 and $950 per month (see below), depending on the type of node. &#039;&#039;&#039;Non-MAP&#039;&#039;&#039; users are charged between $249.58 and $1,399 per month, depending on the type of node. Please see below for details and examples.&lt;br /&gt;
&lt;br /&gt;
=== Condo ===&lt;br /&gt;
Condo users only pay for infrastructure support.  The annual fee depends on the type of node and ranges from $1,540 to $4,520 per year. Please see table below for details. &lt;br /&gt;
&lt;br /&gt;
==== Lease on condo node(s) ====&lt;br /&gt;
Users may lease condo nodes for a project, ensuring 100% access and eliminating any time or job limitations on the leased resource. The minimum lease duration &#039;&#039;&#039;is 30 days (one month).&#039;&#039;&#039; Longer leases (more than 90 days) receive a 10% discount. The lease of a condo node is contingent upon the owner of that node agreeing to lease it for the requested period. Users interested in leasing a condo node of a particular type must contact the HPCC director for options. Lease fees vary depending on the type of node and are currently (note that prices are reviewed once every six months) between $230 and $1,100 per month.&lt;br /&gt;
&lt;br /&gt;
=== Storage ===&lt;br /&gt;
Storage costs are $60 per TB per year, backup costs are $45 per TB per year, and archive costs are $35 per TB per year. The first 50 GB of scratch storage are free. Prices are calculated at the end of each month.&lt;br /&gt;
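For example, keeping 2 TB of data with backup for a full year would cost 2 x ($60 + $45) = $210 at the rates above.&lt;br /&gt;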
&lt;br /&gt;
# Additionally, note that &#039;&#039;&#039;file transfers from/to HPCC are free&#039;&#039;&#039;, but it is important to consider the CUNY network speed, which is significantly slower than modern standards. This will affect the time required to transfer large data sets. For large data, HPCC uses and recommends secure parallel transfer via Globus (a simple command-line transfer example is given after this list).&lt;br /&gt;
# All services associated with data and storage provided by HPCC are free for CUNY users.&lt;br /&gt;
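&lt;br /&gt;
As a minimal sketch of a simple (non-Globus) transfer through the &#039;&#039;&#039;Cea&#039;&#039;&#039; data transfer node (the hostname cea.csi.cuny.edu and the file name are assumptions; please confirm the actual address with HPCC staff):&lt;br /&gt;
&lt;br /&gt;
 # copy a file from a local computer to the user's /scratch space via the Cea DTN&lt;br /&gt;
 # cea.csi.cuny.edu is an assumed hostname; YOUR_USERID is a placeholder&lt;br /&gt;
 scp ./input_data.tar.gz YOUR_USERID@cea.csi.cuny.edu:/scratch/YOUR_USERID/&lt;br /&gt;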
&lt;br /&gt;
=== HPCC access plans details and examples  ===&lt;br /&gt;
a.     &#039;&#039;&#039;Minimum access (MAP):&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Minimum Access is designed to provide extensive support for research activities across various colleges, foster collaboration between institutions, facilitate the establishment of new research projects, and serve as a testing ground for innovative studies. MAP accounts operate under a stringent fair share policy, which determines the actual waiting time for job allocation in a queue based on the resources utilized by that account in previous cycles. Furthermore, all jobs are subject to strict time constraints. Consequently, extended jobs necessitate the implementation of checkpoints.&lt;br /&gt;
&lt;br /&gt;
The MAP offers three tiers of access:&lt;br /&gt;
&lt;br /&gt;
· A: The Basic tier incurs a yearly fee of $5,000. It is tailored to support users from colleges with limited research activities. The fee covers infrastructure expenses associated with one to two users from these colleges.&lt;br /&gt;
&lt;br /&gt;
· B: The Medium tier incurs a yearly fee of $15,000. It covers infrastructure expenses for up to twelve users from these colleges. Additionally, every account under the Medium tier receives complimentary 11,520 CPU hours and 1,440 GPU hours upon account creation.&lt;br /&gt;
&lt;br /&gt;
· C: The Advanced tier incurs a yearly fee of $25,000. It covers infrastructure expenses for all users from these colleges. Furthermore, every new account from this tier receives complimentary 11,520 CPU hours and 1,440 GPU hours upon account creation.&lt;br /&gt;
&lt;br /&gt;
MAP users are charged per CPU/GPU hour at the low rate of &#039;&#039;&#039;&amp;lt;u&amp;gt;$0.015 per CPU hour and $0.09 per GPU hour&amp;lt;/u&amp;gt;.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for MAP users example&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.015/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.24/hour&lt;br /&gt;
|-&lt;br /&gt;
| 4 cores + 1 GPU &lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.15/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.33/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2  &lt;br /&gt;
|$0.42/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2  &lt;br /&gt;
|$0.66/hour&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$1.32/hour&lt;br /&gt;
|}&lt;br /&gt;
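The figures above are consistent with charging each requested CPU core at $0.015 per hour and each GPU at $0.09 per hour. A small sketch for estimating the total charge of a MAP job (the numbers are illustrative only):&lt;br /&gt;
&lt;br /&gt;
 # illustrative MAP cost estimate; rates taken from the tables above&lt;br /&gt;
 CPU_RATE=0.015     # dollars per CPU core hour&lt;br /&gt;
 GPU_RATE=0.09      # dollars per GPU hour&lt;br /&gt;
 CORES=16           # requested CPU cores&lt;br /&gt;
 GPUS=1             # requested GPUs&lt;br /&gt;
 HOURS=24           # wall-clock hours&lt;br /&gt;
 awk -v c=$CPU_RATE -v g=$GPU_RATE -v n=$CORES -v m=$GPUS -v h=$HOURS \&lt;br /&gt;
     'BEGIN {printf "Estimated charge: $%.2f\n", (n*c + m*g)*h}'   # prints $7.92&lt;br /&gt;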
&lt;br /&gt;
&lt;br /&gt;
b.     &#039;&#039;&#039;Computing on demand (CODP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Computing on demand plan (CODP) is open to all users from CUNY colleges that do not participate in the MAP plan but want to use HPCC resources. CODP accounts operate under a strict fair share policy, so the actual waiting time for a job in a queue depends on the resources previously used. In addition, all jobs have time limitations, so long jobs must use check-points. CODP users are charged for the time used (CPU and GPU) per hour. The current rates are &#039;&#039;&#039;$0.018 per cpu hour and $0.11 per GPU hour.&#039;&#039;&#039; Unlike MAP, new CODP accounts do not come with free time. Invoices are generated and sent to users (PI only) at the end of each month. The examples in the following table explain the fee structure:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for CODP plan&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.018/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.288/hour&lt;br /&gt;
|-&lt;br /&gt;
|4 cores + 1 GPU&lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.293/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.334/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 1 GPU&lt;br /&gt;
|32&lt;br /&gt;
|1  &lt;br /&gt;
|$0.666/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU &lt;br /&gt;
|32&lt;br /&gt;
|2&lt;br /&gt;
|$0.756/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
c.  &#039;&#039;&#039;Leasing public node(s) (LNP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The leasing node plan allows users to lease node(s) for the duration of a project. The minimum lease time is 30 days (one month), but leases of any length are possible. A 10% discount is given to users whose lease is longer than 90 days; discounts cannot be combined. Unlike MAP and CODP, LNP users do not compete for resources and have full access to the leased resources 24/7.&lt;br /&gt;
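For example, a MAP user leasing a 32-core node with 2 GPUs (listed below at $475.20 per 30 days) for 120 days would pay roughly 4 x $475.20 x 0.90 = $1,710.72, assuming the 10% discount applies to the entire lease period.&lt;br /&gt;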
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for lease node(s) for MAP users&lt;br /&gt;
|Job (MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/30 days&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$172.80&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$264.96&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2&lt;br /&gt;
|$302.40&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$475.20&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$760.0&lt;br /&gt;
|-&lt;br /&gt;
|64 cores + 8 GPU&lt;br /&gt;
|64&lt;br /&gt;
|8&lt;br /&gt;
|$950.40&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for lease a node(s) for &amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; non &amp;lt;/span&amp;gt;-MAP users&lt;br /&gt;
|Job (non-MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/month&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$249.82&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$497.64&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2 &lt;br /&gt;
|$443.23&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$886.64&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40 &lt;br /&gt;
|8&lt;br /&gt;
|$1399.68&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
d.     &#039;&#039;&#039;Condo Ownership (COP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Condo describes a model in which user(s) own a node/server managed by HPCC. Only full-time faculty can own a condo node. Condo nodes are fully integrated into the HPCC infrastructure. The owners pay only HPCC’s infrastructure support operational fee, which covers a proportional part of the licenses and materials needed for day-to-day operations. The fees are reviewed twice a year and are currently $0.003 per CPU hour and $0.02 per GPU hour. Condo owners can “borrow” (upon agreement) free of charge any node(s) from the condo stack and can also lease (for a higher fee – see below) their own nodes to non-condo users. The minimum lease time is 30 days. The fees collected from non-condo users offset the owner’s payments.&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Condo owners costs per year&lt;br /&gt;
|Type of condo node&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/year&lt;br /&gt;
|-&lt;br /&gt;
|Large hybrid SXM&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|$4518.92 &lt;br /&gt;
|-&lt;br /&gt;
|Small hybrid&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|$1540.54&lt;br /&gt;
|-&lt;br /&gt;
|Medium compute&lt;br /&gt;
|96&lt;br /&gt;
|0&lt;br /&gt;
|$2464.86&lt;br /&gt;
|-&lt;br /&gt;
|Large compute&lt;br /&gt;
|128&lt;br /&gt;
|0&lt;br /&gt;
|$3286.49&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Condo owners can lease their node(s) to other, non-condo users. The renting period is unlimited, with a minimum length of 30 days. The table below shows the payments non-condo users make to the condo owners. These fees are accumulated in the owner’s account(s) and offset the owner’s fees. A discount of 10% is applied for leases longer than 90 days.&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Type of nodes and monthly lease fees for condo nodes&lt;br /&gt;
!Type of node&lt;br /&gt;
!Renters cost/month&lt;br /&gt;
!Long term (90+ days) rent cost/month&lt;br /&gt;
!CPU/node&lt;br /&gt;
!CPU type&lt;br /&gt;
!GPU/node &lt;br /&gt;
!GPU type&lt;br /&gt;
!GPU interface&lt;br /&gt;
|-&lt;br /&gt;
|Large Hybrid&lt;br /&gt;
|$602.52&lt;br /&gt;
|$564.86&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.2 GHz&lt;br /&gt;
|8&lt;br /&gt;
|A100/80&lt;br /&gt;
|SXM&lt;br /&gt;
|-&lt;br /&gt;
|Small Hybrid&lt;br /&gt;
|$205.41&lt;br /&gt;
|$192.57&lt;br /&gt;
|48&lt;br /&gt;
|EPYC, 2.8 GHz&lt;br /&gt;
|2&lt;br /&gt;
|A40, A30, L40&lt;br /&gt;
|PCIe v4&lt;br /&gt;
|-&lt;br /&gt;
|Medium Non GPU&lt;br /&gt;
|$328.65&lt;br /&gt;
|$308.11&lt;br /&gt;
|96&lt;br /&gt;
|EPYC, 4.11GHz&lt;br /&gt;
|0&lt;br /&gt;
|None&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|Large Non GPU&lt;br /&gt;
|$438.20&lt;br /&gt;
|$410.81&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.0 GHz&lt;br /&gt;
|0&lt;br /&gt;
|None &lt;br /&gt;
|NA&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Free time ===&lt;br /&gt;
Any new project &#039;&#039;&#039;from CUNY colleges/centers that participate in MAP-B or MAP-C&#039;&#039;&#039; is entitled to &#039;&#039;&#039;11,520 free CPU hours and 1,440 free GPU hours.&#039;&#039;&#039; Users under &#039;&#039;&#039;MAP-A&#039;&#039;&#039; are not entitled to free time. The free compute hours are intended to help establish a project and are therefore shared among all members of the project; they can be used either by the PI or by any number of the project&#039;s members. It is important to note that &amp;lt;u&amp;gt;free time is per project, not per user account, so any project can receive free time only once. External collaborators to CUNY are not normally eligible for free time.&amp;lt;/u&amp;gt; Additional hours beyond the free time are charged at MAP plan rates. &#039;&#039;&#039;&amp;lt;u&amp;gt;Please contact the CUNY-HPCC director for further details.&amp;lt;/u&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
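At the MAP rates listed above, this free allocation corresponds to roughly 11,520 x $0.015 = $172.80 of CPU time plus 1,440 x $0.09 = $129.60 of GPU time, i.e. about $302 of compute per project.&lt;br /&gt;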
&lt;br /&gt;
= Support for research grants =&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;u&amp;gt;All proposals dated Jan 1st, 2026 (&amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; 01/01/26 &amp;lt;/span&amp;gt;) or later&amp;lt;/u&amp;gt;&#039;&#039;&#039; that require computational resources &#039;&#039;&#039;&amp;lt;u&amp;gt;must include a budget for cost recovery fees at CUNY-HPCC.&amp;lt;/u&amp;gt;&#039;&#039;&#039; For a project, the PI can choose among the following options:&lt;br /&gt;
&lt;br /&gt;
* Lease node(s). This is a useful option for well-defined projects and for those with a high computational component requiring 100% availability of the computational resource.&lt;br /&gt;
* Use &amp;quot;on-demand&amp;quot; resources. This is a flexible option, good for experimental projects or for exploring new areas of study. The downside is that resources are shared among all users under the fair share policy, so immediate access to a resource cannot be guaranteed.&lt;br /&gt;
* Participate in the CONDO tier. This is the most beneficial option in terms of availability of resources and level of support. It best fits the focused research of a group or groups (e.g. materials science).&lt;br /&gt;
&lt;br /&gt;
In all cases the PI can use the appropriate rates listed above to establish a correct budget for the proposal. The PI should &#039;&#039;&#039;&amp;lt;u&amp;gt;contact the Director of CUNY-HPCC, Dr. Alexander Tzanov&amp;lt;/u&amp;gt;&#039;&#039;&#039; (alexander.tzanov@csi.cuny.edu) to discuss the project&#039;s computational requirements, including optimal and most economical computational workflows, suitable hardware, shared or owned resources, CUNY-HPCC support options, and any other matter concerning a correct and optimal computational budget for the proposal.&lt;br /&gt;
&lt;br /&gt;
= Partitions and jobs =&lt;br /&gt;
The only way to submit job(s) to HPCC servers is through the SLURM batch system. Any job, regardless of its type (interactive, batch, serial, parallel, etc.), must be submitted via SLURM, which allocates the requested resources on the proper server and starts the job(s) according to a predefined strict fair share policy. Computational resources (cpu cores, memory, GPUs) are organized in &#039;&#039;&#039;partitions&#039;&#039;&#039;. Users are granted permission to use one or another partition and the corresponding QOS key. The table below shows the partitions and their limitations (in progress); a minimal example batch script is given after the partition descriptions.&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Partition&lt;br /&gt;
!Max cores/job&lt;br /&gt;
!Max jobs/user&lt;br /&gt;
!Total cores/group&lt;br /&gt;
!Time limits&lt;br /&gt;
!Tier&lt;br /&gt;
!&lt;br /&gt;
!GPU types&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|partnsf&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|K20m, V100/16, A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partchem&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A100/80, A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partcfd&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partsym&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partasrc&lt;br /&gt;
|48&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabD&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|V100/16,A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabN&lt;br /&gt;
|384&lt;br /&gt;
|50&lt;br /&gt;
|384&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|None&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partphys&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|L40&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;partnsf&#039;&#039;&#039; is the main partition with assigned resources across all sub-servers. Users may submit sequential, thread parallel or distributed parallel jobs with or without GPU.&lt;br /&gt;
* &#039;&#039;&#039;partchem&#039;&#039;&#039;  is CONDO partition.  &lt;br /&gt;
* &#039;&#039;&#039;partphys&#039;&#039;&#039;   is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partsym&#039;&#039;&#039;    is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partasrc&#039;&#039;&#039;   is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partmatlabD&#039;&#039;&#039; partition allows running MATLAB&#039;s Distributed Parallel Server across the main cluster.&lt;br /&gt;
* &#039;&#039;&#039;partmatlabN&#039;&#039;&#039; partition provides access to the large MATLAB node with 384 cores and 11 TB of shared memory. It is useful for running parallel MATLAB jobs with the Parallel Computing Toolbox.&lt;br /&gt;
* &#039;&#039;&#039;partdev&#039;&#039;&#039; is dedicated to development. All HPCC users have access to this partition, with assigned resources of one computational node with 16 cores, 64 GB of memory and 2 GPUs (K20m). This partition has a time limit of 4 hours.&lt;br /&gt;
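&lt;br /&gt;
A minimal sketch of a SLURM batch script for the main public partition (the executable, module name, and resource numbers are placeholders; the required QOS key depends on the user&#039;s allocation):&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --job-name=example_job&lt;br /&gt;
 #SBATCH --partition=partnsf       # main public partition (see table above)&lt;br /&gt;
 #SBATCH --nodes=1&lt;br /&gt;
 #SBATCH --ntasks=4                # number of tasks (cpu cores)&lt;br /&gt;
 #SBATCH --gres=gpu:1              # request one GPU; omit for CPU-only jobs&lt;br /&gt;
 #SBATCH --time=24:00:00           # must stay within the partition time limit&lt;br /&gt;
 #SBATCH --output=%x_%j.out        # job name and job id in the output file name&lt;br /&gt;
 module load openmpi               # placeholder; load the modules your code needs&lt;br /&gt;
 srun ./my_program                 # placeholder executable&lt;br /&gt;
&lt;br /&gt;
The script is submitted with &#039;&#039;sbatch script_name&#039;&#039; and the job can be monitored with &#039;&#039;squeue&#039;&#039;.&lt;br /&gt;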
&lt;br /&gt;
= Hours of Operation =&lt;br /&gt;
In order to maximize the use of resources, HPCC applies a “rolling” maintenance scheme across all systems. When downtime is needed, HPCC will notify all users a week or more in advance (unless an emergency situation occurs). Typically, the morning of the fourth Tuesday of each month, from 8:00 AM to 12:00 PM, is reserved (but not always used) for scheduled maintenance. Please plan accordingly. Unplanned maintenance to remedy system-related problems may be scheduled as needed outside the above-mentioned days. Reasonable attempts will be made to inform users running on the affected systems when these needs arise. Note that users are strongly encouraged to use checkpoints in their jobs.&lt;br /&gt;
&lt;br /&gt;
= User Support =&lt;br /&gt;
Users are strongly encouraged to read this Wiki carefully before submitting ticket(s) for help. In particular, the sections on compiling and running parallel programs, and the section on the SLURM batch queueing system will give you the essential knowledge needed to use the CUNY HPCC systems.  We have strived to maintain the most uniform user applications environment possible across the Center&#039;s systems to ease the transfer of applications&lt;br /&gt;
and run scripts among them.  &lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY&lt;br /&gt;
community in parallel programming techniques, HPC computing architecture, and the essentials of using our&lt;br /&gt;
systems. Please follow our mailings on the subject and feel free to inquire about such courses. We regularly schedule training visits and classes at the various CUNY campuses. Please let us know if such a training visit is of interest. In the past, topics have included an overview of parallel programming, GPU programming and architecture, using the evolutionary biology software at the HPC Center, the SLURM queueing system at the CUNY HPC Center, mixed GPU-MPI and OpenMP programming, etc. Staff has also presented guest lectures at formal classes throughout the CUNY campuses.&lt;br /&gt;
&lt;br /&gt;
If you have problems accessing your account and cannot login to the ticketing service, please send an email to:&lt;br /&gt;
&lt;br /&gt;
  [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu] &lt;br /&gt;
&lt;br /&gt;
= Warnings and modes of operation =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. hpchelp@csi.cuny.edu is for questions and account-help communication &#039;&#039;&#039;only&#039;&#039;&#039; and does not accept tickets unless the ticketing system is not operational. For tickets please use the ticketing system mentioned above. This ensures that the person on staff with the most appropriate skill set and job-related responsibility will respond to your questions. During the business week you should expect a response within 48 hours, often even a same-day response. During the weekend you may not get any response.&lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;&#039;E-mails to hpchelp@csi.cuny.edu must have a valid CUNY e-mail as the reply address.&#039;&#039;&#039; Messages originating from public mailers (Google, Hotmail, etc.) are filtered out.&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;&#039;Do not send questions to individual CUNY HPC Center staff members directly.&#039;&#039;&#039;  These will be returned to the sender with a polite request to submit a ticket or email the Helpline.  This applies to replies to initial questions as well.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff members are focused on providing high quality support to its user community, but compared&lt;br /&gt;
to other HPC Centers of similar size &#039;&#039;&#039;our staff is extremely lean&#039;&#039;&#039;.  Please make full use of the tools that we have provided (especially&lt;br /&gt;
the Wiki), and feel free to offer suggestions for improved service.  We hope and expect your experience in using&lt;br /&gt;
our systems will be predictably good and productive.&lt;br /&gt;
&lt;br /&gt;
= User Manual =&lt;br /&gt;
The old version of the user manual provides PBS, not SLURM, batch scripts as examples. CUNY-HPCC currently uses the SLURM scheduler, so users must check and use only the updated brief SLURM manual distributed with new accounts, or ask CUNY-HPCC for a copy of it.&lt;/div&gt;</summary>
		<author><name>Alex</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=1000</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=1000"/>
		<updated>2026-04-19T16:34:46Z</updated>

		<summary type="html">&lt;p&gt;Alex: /* Condo */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
[[Image:hpcc-panorama3.png]]&lt;br /&gt;
&lt;br /&gt;
== Mission of CUNY-HPCC ==&lt;br /&gt;
The City University of New York (CUNY) High Performance Computing Center (HPCC) serves as a pivotal research and educational hub for the university. Situated on the campus of the College of Staten Island, located at 2800 Victory Boulevard, Staten Island, New York 10314, the center’s primary objective is to enhance educational opportunities and foster scientific research and discovery within the university. This is achieved through the management of state-of-the-art computing infrastructure and the provision of comprehensive research support services. Notably, CUNY-HPCC offers domain-specific expertise in various aspects of computationally intensive research. Furthermore, CUNY’s membership in the Empire AI (EAI) consortium positions CUNY-HPCC as a stepping stone for CUNY researchers seeking access to EAI advanced facilities.   &lt;br /&gt;
&lt;br /&gt;
== Empire AI and CUNY-HPCC ==&lt;br /&gt;
The Empire AI Consortium comprises the &#039;&#039;&#039;CUNY Graduate Center, Columbia University, Cornell University, Icahn School of Medicine, New York University, Rochester Institute of Technology, Rensselaer Polytechnic Institute, the State University of New York, University at Buffalo, and University of Rochester.&#039;&#039;&#039; CUNY-HPCC provides support and maintains tickets for all CUNY users with allocation on EAI. Additionally, CUNY-HPCC serves as a stepping stone for CUNY researchers as it operates (on a smaller scale) architectures (including nodes with Hopper) similar to EAI, including extended “Alpha” servers and new “Beta” computers. The latter will consist of 288 B200 GPUs and recently added RTX 6000 Pro nodes. The anticipated cost for EAI is $0.50 per unit (SU), which will provide CUNY PIs with a rate that is significantly lower than a typical AWS rate. One SU corresponds to one hour of H100 compute, and one hour of B200 compute corresponds to two SU. In comparison, the CUNY-HPCC recovery costs for public servers are $0.015 per CPU hour (1 unit) and $0.09 per GPU hour (6 units). For further details, please refer to the section on HPCC access plans.     &lt;br /&gt;
&lt;br /&gt;
== CUNY-HPCC services ==&lt;br /&gt;
     &lt;br /&gt;
CUNY-HPCC offers a professionally maintained, modern computational environment and architectures, along with advanced storage and fast interconnects. CUNY-HPCC serves the following purposes:     &lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;*&amp;lt;/nowiki&amp;gt; Supports research computing at CUNY, benefiting faculty, their collaborators at other universities, and their public and private sector partners. It also supports CUNY students and research staff.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;*&amp;lt;/nowiki&amp;gt; Provides state-of-the-art computing resources and comprehensive research support services, including expertise and full support for users with allocation on EMPIRE-AI.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;*&amp;lt;/nowiki&amp;gt; Creates opportunities for the CUNY research community to establish new partnerships with the government and private sectors.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;*&amp;lt;/nowiki&amp;gt; Utilizes HPCC capabilities to acquire additional research resources for its faculty and graduate students in existing and major new programs.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;*&amp;lt;/nowiki&amp;gt; Maintains tickets for all CUNY users with allocation on EAI. &lt;br /&gt;
&lt;br /&gt;
== Organization of systems and data storage (architecture) ==&lt;br /&gt;
All user data and project data are kept on  Parallel File System Storage (PFSS) which is mounted only on login node(s) of all servers. It holds both user directories and specific partition  called  &#039;&#039;&#039;/scratch (see below).&#039;&#039;&#039; The main features of &#039;&#039;&#039;/scratch&#039;&#039;&#039;  partition are&#039;&#039;&#039;: 1.&#039;&#039;&#039; It is mounted on all computational nodes and on all login nodes &#039;&#039;&#039;2.&#039;&#039;&#039; It is fast &#039;&#039;&#039;3.&#039;&#039;&#039; &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition is temporary space, but is &#039;&#039;&#039;not  a home directory&#039;&#039;&#039;  for accounts nor can be used for long term data preservation.  Users must use &amp;quot;staging&amp;quot; procedure described below to ensure preservation of their data, codes and parameters files. The figure below is a schematic of the environment.   &lt;br /&gt;
&lt;br /&gt;
Upon  registering with HPCC every user will get 2 directories:&lt;br /&gt;
&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – this is temporary workspace on the HPC systems. Currently scratch resides on the same file system as global/u.&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – space for “home directory”, i.e., storage space on the DSMS for program, scripts, and data&lt;br /&gt;
&lt;br /&gt;
:•	In some instances a user will also have use of disk space on the DSMS in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;projectid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; (IRods). &lt;br /&gt;
[[File:HPCC_structure.png|center|frameless|900x900px]]&lt;br /&gt;
The &#039;&#039;&#039;/global/u/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory has quota (see below for details) while  the &#039;&#039;&#039;/scratch/&amp;lt;userid&amp;gt;&#039;&#039;&#039; do not have. However the &#039;&#039;&#039;/scratch&#039;&#039;&#039; space is cleaned up  following the rules described below. There are no guarantees of any kind that files in &#039;&#039;&#039;/scratch&#039;&#039;&#039;  will be preserved during the hardware crashes or cleaning up.  Access to all HPCC resources is provided by bastion host called &#039;&amp;lt;nowiki/&amp;gt;&#039;&#039;&#039;&#039;&#039;chizen&#039;&#039;&#039;&#039;&#039;.  The Data Transfer Node called &#039;&#039;&#039;Cea&#039;&#039;&#039; allows file transfer from/to remote sites directly to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; or to/from   &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;         &lt;br /&gt;
&lt;br /&gt;
==Computing architectures at HPCC==&lt;br /&gt;
&lt;br /&gt;
The HPC Center employs a diverse range of architectures to accommodate intricate and demanding workflows. All computational resources of various types are consolidated into a single hybrid cluster known as Arrow. This cluster comprises symmetric multiprocessor (SMP) nodes equipped with and without GPUs, distributed shared memory (NUMA) node(s), high-memory nodes, and advanced SMP nodes featuring multiple GPUs. The number of GPUs per node varies between two and eight, along with the utilized GPU interface and GPU family. Consequently, the fundamental GPU nodes are equipped with two Tesla K20m GPUs connected via the PCIe interface, while the most advanced nodes support eight Ampere A100 GPUs connected via the SXM interface.    &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Overview of Computational architectures&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SMP&#039;&#039;&#039; servers have several processors (working under a single operating system) which &amp;quot;share everything&amp;quot;.  Thus  all cpu-cores allocate a common memory block via shared bus or data path. SMP servers support all combinations of memory VS cpu (up to the limits of the particular computer). The SMP servers are commonly used to run sequential or thread parallel (e.g. OpenMP) jobs and they may have or may not have GPU.  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Cluster&#039;&#039;&#039; is defined as a single system comprising set of servers interconnected with high performance network. Specific software coordinates  programs on and/or across those in order to  perform computationally intensive tasks. The most common cluster type is the one that consists of several identical SMP servers connected via fast interconnect.  Each SMP member of the cluster is called a &#039;&#039;&#039;node&#039;&#039;&#039;. All nodes run independent copies of the same operating system (OS). Some or all of the nodes may incorporate GPU.  &lt;br /&gt;
&lt;br /&gt;
Hybrid clusters combine nodes of different architectures. For instance the main CUNY-HPCC machine is a hybrid cluster called &#039;&#039;&#039;Arrow&#039;&#039;&#039;.  Sixty two (62) of its nodes are identical GPU enabled SMP servers each with 2 x GPU K20m, 3 are SMP but with extended memory (fat nodes), one node is distributed shared memory  node (NUMA, see below) and 2 are fat SMP servers especially designed to support 8 NVIDIA GPU per node. The latter are connected via SXM interface. In addition HPCC operates the cluster &#039;&#039;&#039;Herbert&#039;&#039;&#039; dedicated only to education. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Distributed shared memory&#039;&#039;&#039; computer is tightly coupled server in which the memory is physically distributed, but it is logically unified as a single block. The system resembles SMP, but the number of cpu cores and the amounts of memory possible is far beyond limitations of the SMP.  Because the memory is distributed, the access times across address space are non-uniform. Thus, this architecture is called Non Uniform Memory Access (NUMA) architecture.  Similarly to SMP, the &#039;&#039;&#039;NUMA&#039;&#039;&#039; systems are typically used for applications such as data mining and decision support system in which processing can be parceled out to a number of processors that collectively work on a common data. HPCC operates the &#039;&#039;&#039;NUMA&#039;&#039;&#039; node at Arrow named &#039;&#039;&#039;Appel&#039;&#039;&#039;.  This node does not have GPU. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039; Infrastructure systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
o	Master Head Node (&#039;&#039;&#039;MHN/Arrow)&#039;&#039;&#039; is a redundant login node from which all jobs on all servers start. This server is not directly accessible from outside CSI campus. Note that name of main server and its login nodes are the same Arrow. Thus users can access the Arrow login nodes using name Arrow or MHN.  &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Chizen&#039;&#039;&#039; is a redundant gateway server which provides access to protected HPCC domain.&lt;br /&gt;
&lt;br /&gt;
o	 &#039;&#039;&#039;Cea&#039;&#039;&#039; is a file transfer node allowing transfer of files between users’ computers to/from  /scratch space or to/from /global/u/&amp;lt;usarid&amp;gt;. &#039;&#039;&#039;Cea&#039;&#039;&#039; is accessible directly (not only via &#039;&#039;&#039;Chizen&#039;&#039;&#039;), but allows only limited set of shell commands.  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Table 1&#039;&#039;&#039; below provides a quick summary of the attributes of each of the sub clusters of the main  HPC Center called Arow.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Master Head Node&lt;br /&gt;
!Sub System&lt;br /&gt;
!Tier&lt;br /&gt;
!Type&lt;br /&gt;
!Type of Jobs&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores&lt;br /&gt;
!GPUs&lt;br /&gt;
!Mem/node&lt;br /&gt;
!Mem/core&lt;br /&gt;
!Chip Type&lt;br /&gt;
!GPU Type and Interface&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;17&amp;quot; |&#039;&#039;&#039;&amp;lt;big&amp;gt;Arrow&amp;lt;/big&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Penzias&lt;br /&gt;
| rowspan=&amp;quot;10&amp;quot; |Advanced&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Hybrid Cluster&lt;br /&gt;
|Sequential &amp;amp; Parallel jobs w/wo GPU&lt;br /&gt;
|66&lt;br /&gt;
|16 &lt;br /&gt;
|2&lt;br /&gt;
|64 GB&lt;br /&gt;
|4 GB&lt;br /&gt;
|SB, EP 2.20 GHz&lt;br /&gt;
|K20m GPU, PCIe v2&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Sequential &amp;amp; Parallel jobs&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |1&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|1500 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|36&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|32 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Appel&lt;br /&gt;
|NUMA&lt;br /&gt;
|Massive Parallel, sequential, OpenMP&lt;br /&gt;
|1&lt;br /&gt;
|384&lt;br /&gt;
| -&lt;br /&gt;
|11 TB&lt;br /&gt;
|28 GB&lt;br /&gt;
|IB, 3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Cryo&lt;br /&gt;
|SMP&lt;br /&gt;
|Sequential and Parallel jobs, with GPU&lt;br /&gt;
|1&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|1500 GB&lt;br /&gt;
|37 GB&lt;br /&gt;
|SL, 2.40 GHz&lt;br /&gt;
|V100 (32GB) GPU, SXM&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Blue Moon&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Hybrid Cluster&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Sequential and Parallel jobs w/wo GPU&lt;br /&gt;
|24&lt;br /&gt;
|32&lt;br /&gt;
| -&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |192 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |6 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SL, 2.10 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|32 &lt;br /&gt;
|2&lt;br /&gt;
|V100(16GB) GPU, PCIe&lt;br /&gt;
|-&lt;br /&gt;
|Karle&lt;br /&gt;
|SMP&lt;br /&gt;
|Visualization, MATLAB/Mathematica&lt;br /&gt;
|1&lt;br /&gt;
|36*&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Chizen&lt;br /&gt;
|Gateway&lt;br /&gt;
|No jobs allowed&lt;br /&gt;
| colspan=&amp;quot;7&amp;quot; | -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CFD&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot; |Parallel, Seq, OpenMP &lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|768 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 4.8 GHz&lt;br /&gt;
|A40, PCIe, v4 &lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |PHYS&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|640 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4 GHz&lt;br /&gt;
|L40, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CHEM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.0 GHz&lt;br /&gt;
|A100/40, SXM&lt;br /&gt;
|-&lt;br /&gt;
|ASRC&lt;br /&gt;
|Condo&lt;br /&gt;
|SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|}&lt;br /&gt;
Note: SB = Intel(R) Sandy Bridge, HL = Intel (R) Haswell, IB = Intel (R) Ivy Bridge, SL = Intel (R) Xeon(R) Gold, ER  = AMD(R) EPYC ROMA, EM = AMD(R) EPYC MILAN, EG = AMD (R) EPYC GENOA   &lt;br /&gt;
&lt;br /&gt;
= Recovery of  operational costs =&lt;br /&gt;
CUNY-HPCC, a not-for-profit core research facility affiliated with CUNY, is dedicated to supporting a wide range of research endeavors that necessitate advanced computational resources. &amp;lt;u&amp;gt;Notably, CUNY-HPCC’s operations are not directly or indirectly funded by CUNY or the College of Staten Island (CSI). Consequently, CUNY-HPCC employs a cost recovery model that exclusively recoups operational expenses, without generating any profit for the HPCC.&amp;lt;/u&amp;gt; The recovered costs are meticulously calculated using comprehensive documentation of actual operational expenditures and are designed to achieve a break-even point for all CUNY users. This methodology is approved by CUNY-RF and is employed in other CUNY research facilities. The cost recovery charging schema is based on unit-hour usage, encompassing both CPU and GPU units. Definitions for these units are provided in the accompanying table. &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Definitions of unit-hour&lt;br /&gt;
!Type of resource&lt;br /&gt;
!Unit-hour &lt;br /&gt;
!For V100, A30, A40 or L40&lt;br /&gt;
!For A100&lt;br /&gt;
|-&lt;br /&gt;
|CPU unit &lt;br /&gt;
|1 cpu core/hour&lt;br /&gt;
| --&lt;br /&gt;
| --&lt;br /&gt;
|-&lt;br /&gt;
|GPU unit &lt;br /&gt;
|(4 cpu cores + 1 GPU thread )/hour&lt;br /&gt;
|4 cpu cores + 1 GPU&lt;br /&gt;
|4 cpu cores and 1/7 A100 &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Compute on public resources ===&lt;br /&gt;
The cost recovery model for public (non-condominium) servers offers the following options:&lt;br /&gt;
&lt;br /&gt;
# Minimal Access Plan (MAP)&lt;br /&gt;
# Compute on Demand (CODP)&lt;br /&gt;
# Lease a node(s)&lt;br /&gt;
&lt;br /&gt;
==== Minimal Access Plan (MAP) ====&lt;br /&gt;
Colleges may participate in any of the Minimal Access Plan tiers. The tiered pricing structure is as follows:&lt;br /&gt;
&lt;br /&gt;
- Tier A: $5,000 per year&lt;br /&gt;
&lt;br /&gt;
- Tier B: $10,000 per year&lt;br /&gt;
&lt;br /&gt;
- Tier C: $25,000 per year&lt;br /&gt;
&lt;br /&gt;
Within each tier, the cost is &#039;&#039;&#039;$0.015 per CPU hour and $0.09 per GPU hour.&#039;&#039;&#039; It is important to note that the Minimum Access Plan (MAP) tiers A, B, and C are &#039;&#039;&#039;not all-inclusive.&#039;&#039;&#039; Therefore, even if a college pays for a higher tier, it does not guarantee unlimited use of all college employees, faculty, and students throughout the year.&lt;br /&gt;
&lt;br /&gt;
# The MAP plan is indirectly linked to the number of hours used. Consequently, the definition of “up to 12 users for B tier” should not be interpreted as “all users up to 12 per college receive unlimited access to HPCC resources.”&lt;br /&gt;
# The number of users per tier is determined by statistical analysis of resource usage and statistics for the average duration of a job across all CUNY institutions. This means that if the number of users from a college exceeds the number of hours encoded in the MAP fee, the additional hours will be charged at the preferred rate of $0.015 per CPU hour and $0.09 per GPU thread hour.&lt;br /&gt;
# Furthermore, the &amp;lt;u&amp;gt;MAP fee may fully cover the expenses for individuals from a given college for a year,&amp;lt;/u&amp;gt; but it may also not. This depends on the actual usage and type of resources, as well as the number of additional individuals from the same college who appear during the year. It is crucial to understand that allocation on HPCC is not restricted; our focus is solely on the completion of tasks. &lt;br /&gt;
# The free 11,540 CPU hours and 1,440 GPU hours are allocated per PI account and project and are available only for MAP-B and MAP-C (see below). These free hours are intended to facilitate project establishment. Consequently, the PI can request that a member or members of a group utilize time and explore new research opportunities. For instance, upon creating his account the PI X will receive free hours. If he/she/whatever hires a graduate student(s), they may share these free hours if they work on the same project.&lt;br /&gt;
# It is important to note that each GPU requires at least four CPU cores to operate. Therefore, if a user requests one GPU thread, it equates to one GPU thread and four CPU threads, which is equivalent to $0.15 per unit hour for that unit (units are explained above). Note that not all GPU support virtualization, so unit may include the whole GPU depend on used GPU type.  &lt;br /&gt;
&lt;br /&gt;
==== Compute on demand (CODP) ====&lt;br /&gt;
Users from colleges that do not participate in MAP (A,B,C) are charged &#039;&#039;&#039;$0.018 per CPU hour and $0.11 per GPU hour.&#039;&#039;&#039; There is no free time associated with CODP.  &lt;br /&gt;
&lt;br /&gt;
==== Lease a public node(s) ====&lt;br /&gt;
Users may lease a node(s) for project. That ensures them 100% access and no  time or job limitations over leased resource. The minimum lease time is 30 days (one month). Longe leases (more than 90 days) have 10% discount. &#039;&#039;&#039;MAP users&#039;&#039;&#039; are charged between $172 and $950 per month (see below), depending on the type of node. &#039;&#039;&#039;Non-MAP&#039;&#039;&#039; users are charged between $249.58 and $1,399 per month, depending on the type of node. Please see below for details and examples. &lt;br /&gt;
&lt;br /&gt;
=== Condo ===&lt;br /&gt;
Condo users only pay for infrastructure support.  The annual fee depends on the type of node and ranges from $1,540 to $4,520 per year. Please see table below for details. &lt;br /&gt;
&lt;br /&gt;
=== Lease on condo node(s) ===&lt;br /&gt;
Users may lease condo nodes for a project, ensuring them 100% access and eliminating any time or job limitations on the leased resource. The minimum lease duration &#039;&#039;&#039;is 30 days (one month).&#039;&#039;&#039; Longer leases (more than 90 days) receive a 10% discount. The lease of a condo node is contingent upon the owner of the required node agreeing to lease for a specific time period and duration. Users interested in leasing a condo node of a particular type must contact the HPCC director for options. Lease fees vary depending on the type of node and are currently (note that prices are reviewed once every six months) between $230 and $1100 per month.   &lt;br /&gt;
&lt;br /&gt;
=== Storage ===&lt;br /&gt;
Storage costs are $60 per TB per year, backup costs are $45 per TB per year, and archive costs are $35 per TB per year. The first 50 GB of scratch storage are free. Prices are calculated at the end of each month.&lt;br /&gt;
&lt;br /&gt;
# Additionally, note that &#039;&#039;&#039;file transfers from/to HPCC are free&#039;&#039;&#039;, but it is important to consider the CUNY network speed, which is significantly slower than modern standards.  This will impact the time required to download large data sets. For large data HPCC utilizes and recommend to use secure parallel download via Globus.&lt;br /&gt;
# All services associated with data and storage provided by HPCC are free for CUNY users.&lt;br /&gt;
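&lt;br /&gt;
As an illustration only (assuming the Globus CLI is installed on the local machine; the endpoint IDs, label and paths below are placeholders, not real CUNY-HPCC endpoint names, so please ask HPCC which endpoint to use), a large recursive transfer into the home area could look like:&lt;br /&gt;
&lt;br /&gt;
 # sketch only -- endpoint IDs are placeholders, not real CUNY-HPCC endpoints&lt;br /&gt;
 globus login                                     # authenticate the Globus CLI once&lt;br /&gt;
 SRC_EP=REPLACE_WITH_LOCAL_ENDPOINT_UUID          # your laptop/lab Globus endpoint&lt;br /&gt;
 DST_EP=REPLACE_WITH_HPCC_ENDPOINT_UUID           # the CUNY-HPCC endpoint (ask HPCC for the ID)&lt;br /&gt;
 globus transfer --recursive --label dataset-upload $SRC_EP:/data/big_dataset/ $DST_EP:/global/u/$USER/big_dataset/&lt;br /&gt;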
&lt;br /&gt;
=== HPCC access plans details and examples  ===&lt;br /&gt;
a.     &#039;&#039;&#039;Minimum access (MAP):&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Minimum Access is designed to provide extensive support for research activities across various colleges, foster collaboration between institutions, facilitate the establishment of new research projects, and serve as a testing ground for innovative studies. MAP accounts operate under a stringent fair share policy, which determines the actual waiting time for job allocation in a queue based on the resources utilized by that account in previous cycles. Furthermore, all jobs are subject to strict time constraints. Consequently, extended jobs necessitate the implementation of checkpoints.&lt;br /&gt;
&lt;br /&gt;
The MAP offers three tiers of access:&lt;br /&gt;
&lt;br /&gt;
· A: The Basic tier incurs a yearly fee of $5,000. It is tailored to support users from colleges with limited research activities. The fee covers infrastructure expenses associated with one to two users from these colleges.&lt;br /&gt;
&lt;br /&gt;
· B: The Medium tier incurs a yearly fee of $15,000. It covers infrastructure expenses for up to twelve users from these colleges. Additionally, every account under the Medium tier receives complimentary 11,520 CPU hours and 1,440 GPU hours upon account creation.&lt;br /&gt;
&lt;br /&gt;
· C: The Advanced tier incurs a yearly fee of $25,000. It covers infrastructure expenses for all users from these colleges. Furthermore, every new account from this tier receives complimentary 11,520 CPU hours and 1,440 GPU hours upon account creation. &lt;br /&gt;
&lt;br /&gt;
MAP users are charged per CPU/GPU hour at the low rate of &#039;&#039;&#039;&amp;lt;u&amp;gt;$0.015 per CPU hour and $0.09 per GPU hour&amp;lt;/u&amp;gt;.&#039;&#039;&#039; A worked example follows the table below.   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for MAP users example&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.015/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.24/hour&lt;br /&gt;
|-&lt;br /&gt;
| 4 cores + 1 GPU &lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.15/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.33/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2  &lt;br /&gt;
|$0.42/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2  &lt;br /&gt;
|$0.66/hour&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$1.32/hour&lt;br /&gt;
|}&lt;br /&gt;
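&lt;br /&gt;
As an illustration of how the values in the table above are obtained: each GPU unit is billed as 1 GPU + 4 CPU cores, i.e. $0.09 + 4 x $0.015 = $0.15/hour, and any remaining CPU cores are billed at $0.015/hour each. For example, the &#039;&#039;16 cores + 1 GPU&#039;&#039; row is 1 GPU unit ($0.15) plus 12 remaining cores x $0.015 ($0.18) = $0.33/hour, and the &#039;&#039;40 cores + 8 GPU&#039;&#039; row is 8 x $0.15 + 8 x $0.015 = $1.32/hour.&lt;br /&gt;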
&lt;br /&gt;
&lt;br /&gt;
b.     &#039;&#039;&#039;Computing on demand (CODP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The Computing on Demand Plan (CODP) is open to all users from CUNY colleges that do not participate in the MAP plan but want to use HPCC resources. CODP accounts operate under a strict fair share policy, so the actual waiting time for a job in a queue depends on the resources previously used. In addition, all jobs have time limitations, so long jobs must use checkpoints. CODP users are charged for the time (CPU and GPU) per hour. The current rates are &#039;&#039;&#039;$0.018 per CPU hour and $0.11 per GPU hour.&#039;&#039;&#039; Unlike MAP, new CODP accounts do not come with free time. Invoices are generated and sent to users (PI only) at the end of each month. The examples in the following table explain the fee structure:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for CODP plan&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.018/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.288/hour&lt;br /&gt;
|-&lt;br /&gt;
|4 cores + 1 GPU&lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.293/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.334/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 1 GPU&lt;br /&gt;
|32&lt;br /&gt;
|1  &lt;br /&gt;
|$0.666/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU &lt;br /&gt;
|32&lt;br /&gt;
|2&lt;br /&gt;
|$0.756/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
c.  &#039;&#039;&#039;Leasing public node(s) (LNP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The lease node plan (LNP) allows users to lease node(s) for the duration of a project. The minimum lease time is 30 days (one month), but leases of any length are possible. A 10% discount is given to users whose lease is longer than 90 days. Discounts cannot be combined. Unlike MAP and CODP users, LNP users do not compete for resources and have full access to the leased resources 24/7.   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for leasing node(s) for MAP users&lt;br /&gt;
|Job (MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/30 days&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$172.80&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$264.96&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2  &lt;br /&gt;
|$302.40&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$475.20&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$760.0&lt;br /&gt;
|-&lt;br /&gt;
|64 cores + 8 GPU&lt;br /&gt;
|64&lt;br /&gt;
|8&lt;br /&gt;
|$950.40&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for leasing node(s) for &amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; non &amp;lt;/span&amp;gt;-MAP users&lt;br /&gt;
|Job (non-MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/month&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$249.82&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$497.64&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1 &lt;br /&gt;
|$443.23&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$886.64&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40 &lt;br /&gt;
|8&lt;br /&gt;
|$1399.68&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
d.     &#039;&#039;&#039;Condo Ownership (COP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Condo describes a model in which a user (or users) owns a node/server managed by HPCC. Only full-time faculty can own a condo node. Condo nodes are fully integrated into the HPCC infrastructure. The owners pay only HPCC’s infrastructure support operational fee, which covers only a proportional part of the licenses and materials needed for day-to-day operations. The fees are reviewed twice a year and are currently $0.003 per CPU hour and $0.02 per GPU hour. Condo owners can “borrow” (upon agreement) free of charge any node(s) from the condo stack, and can also lease (for a higher fee – see below) their own nodes to non-condo users. The minimum lease time is 30 days. The fees collected from non-condo users offset the owner’s payments.   &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Condo owners costs per year&lt;br /&gt;
|Type of condo node&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/year&lt;br /&gt;
|-&lt;br /&gt;
|Large hybrid SXM&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|$4518.92 &lt;br /&gt;
|-&lt;br /&gt;
|Small hybrid&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|$1540.54&lt;br /&gt;
|-&lt;br /&gt;
|Medium compute&lt;br /&gt;
|96&lt;br /&gt;
|0&lt;br /&gt;
|$2464.86&lt;br /&gt;
|-&lt;br /&gt;
|Large compute&lt;br /&gt;
|128&lt;br /&gt;
|0&lt;br /&gt;
|$3286.49&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Condo owners can lease their node(s) to other, non-condo users. The rental period is unlimited, with a minimum length of 30 days. The table below shows the payments non-condo users make to the condo owners. These fees accumulate in the owner’s account and offset the owner’s fees. A discount of 10% is applied to leases longer than 90 days.    &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Type of nodes and monthly lease fees for condo nodes&lt;br /&gt;
!Type of node&lt;br /&gt;
!Renters cost/month&lt;br /&gt;
!Long term (90+ days) rent cost/month&lt;br /&gt;
!CPU/node&lt;br /&gt;
!CPU type&lt;br /&gt;
!GPU/node &lt;br /&gt;
!GPU type&lt;br /&gt;
!GPU interface&lt;br /&gt;
|-&lt;br /&gt;
|Large Hybrid&lt;br /&gt;
|$602.52&lt;br /&gt;
|$564.86&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.2 GHz&lt;br /&gt;
|8&lt;br /&gt;
|A100/80&lt;br /&gt;
|SXM&lt;br /&gt;
|-&lt;br /&gt;
|Small Hybrid&lt;br /&gt;
|$205.41&lt;br /&gt;
|$192.57&lt;br /&gt;
|48&lt;br /&gt;
|EPYC, 2.8 GHz&lt;br /&gt;
|2&lt;br /&gt;
|A40, A30, L40&lt;br /&gt;
|PCIe v4&lt;br /&gt;
|-&lt;br /&gt;
|Medium Non GPU&lt;br /&gt;
|$328.65&lt;br /&gt;
|$308.11&lt;br /&gt;
|96&lt;br /&gt;
|EPYC, 4.11GHz&lt;br /&gt;
|0&lt;br /&gt;
|None&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|Large Non GPU&lt;br /&gt;
|$438.20&lt;br /&gt;
|$410.81&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.0 GHz&lt;br /&gt;
|0&lt;br /&gt;
|None &lt;br /&gt;
|NA&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Free time ===&lt;br /&gt;
Any new project &#039;&#039;&#039;from CUNY colleges/centers that participate in MAP-B or MAP-C&#039;&#039;&#039; is entitled to &#039;&#039;&#039;11,520 free CPU hours and 1,440 free GPU hours.&#039;&#039;&#039; Users under &#039;&#039;&#039;MAP-A&#039;&#039;&#039; are not entitled to free time. The free compute hours are intended to help establish a project and are therefore shared by all members of the project; they can be used either by the PI or by any number of the project&#039;s members. It is important to note that &amp;lt;u&amp;gt;free time is per project, not per user account, so any project can receive free time only once. Collaborators external to CUNY are normally not eligible for free time.&amp;lt;/u&amp;gt; Hours beyond the free time are charged at MAP rates. &#039;&#039;&#039;&amp;lt;u&amp;gt;Please contact the CUNY-HPCC director for further details.&amp;lt;/u&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Support for research grants =&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;u&amp;gt;All proposals dated January 1st, 2026 (&amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; 01/01/26 &amp;lt;/span&amp;gt;) or later&amp;lt;/u&amp;gt;&#039;&#039;&#039; that require computational resources &#039;&#039;&#039;&amp;lt;u&amp;gt;must include a budget for cost recovery fees at CUNY-HPCC.&amp;lt;/u&amp;gt;&#039;&#039;&#039;  For a project the PI can choose to: &lt;br /&gt;
&lt;br /&gt;
* lease node(s). This is a useful option for well-defined projects and for those with a large computational component that requires 100% availability of the computational resource. &lt;br /&gt;
* use &amp;quot;on-demand&amp;quot; resources. This is a flexible option, well suited to experimental projects or to exploring new areas of study. The downside is that resources are shared among all users under the fair share policy, so immediate access to a resource cannot be guaranteed. &lt;br /&gt;
* participate in the CONDO tier. This is the most beneficial option in terms of availability of resources and level of support. It best fits the focused research of a group or groups (e.g. materials science). &lt;br /&gt;
&lt;br /&gt;
In all cases the PI can use the appropriate rates listed above to establish a correct budget for the proposal. The PI should &#039;&#039;&#039;&amp;lt;u&amp;gt;contact the Director of CUNY-HPCC, Dr. Alexander Tzanov&amp;lt;/u&amp;gt;&#039;&#039;&#039; (alexander.tzanov@csi.cuny.edu) to discuss the project&#039;s computational requirements, including optimal and most economical computational workflows, suitable hardware, shared or owned resources, CUNY-HPCC support options, and any other matter concerning a correct and optimal computational budget for the proposal.    &lt;br /&gt;
&lt;br /&gt;
= Partitions and jobs =&lt;br /&gt;
The only way to submit job(s) to HPCC servers is through the SLURM batch system. Any job, regardless of its type (interactive, batch, serial, parallel, etc.), must be submitted via SLURM. SLURM allocates the requested resources on the proper server and starts the job(s) according to a predefined strict fair share policy. Computational resources (CPU cores, memory, GPUs) are organized into &#039;&#039;&#039;partitions&#039;&#039;&#039;. Users are granted permission to use one or another partition and the corresponding QOS key. The table below shows the partitions and their limitations (in progress).&lt;br /&gt;
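&lt;br /&gt;
The current partition and node layout can also be inspected directly from a login node with standard SLURM commands; the lines below are a sketch using generic SLURM options, not HPCC-specific tools:&lt;br /&gt;
&lt;br /&gt;
 sinfo -o &amp;quot;%P %D %c %m %G&amp;quot;         # partition, node count, cores/node, memory/node, GPUs (gres)&lt;br /&gt;
 scontrol show partition partnsf    # detailed limits of a single partition&lt;br /&gt;
 squeue -u $USER                    # your queued and running jobs&lt;br /&gt;
&lt;br /&gt;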
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Partition&lt;br /&gt;
!Max cores/job&lt;br /&gt;
!Max jobs/user&lt;br /&gt;
!Total cores/group&lt;br /&gt;
!Time limits&lt;br /&gt;
!Tier&lt;br /&gt;
!&lt;br /&gt;
!GPU types&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|partnsf&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|K20m, V100/16, A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partchem&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A100/80, A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partcfd&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partsym&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partasrc&lt;br /&gt;
|48&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabD&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|V100/16,A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabN&lt;br /&gt;
|384&lt;br /&gt;
|50&lt;br /&gt;
|384&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|None&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partphys&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|L40&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;partnsf&#039;&#039;&#039; is the main partition with assigned resources across all sub-servers. Users may submit sequential, thread parallel or distributed parallel jobs with or without GPU.&lt;br /&gt;
* &#039;&#039;&#039;partchem&#039;&#039;&#039; is a condo partition.  &lt;br /&gt;
* &#039;&#039;&#039;partphys&#039;&#039;&#039; is a condo partition.&lt;br /&gt;
* &#039;&#039;&#039;partsym&#039;&#039;&#039; is a condo partition.&lt;br /&gt;
* &#039;&#039;&#039;partasrc&#039;&#039;&#039; is a condo partition.&lt;br /&gt;
* &#039;&#039;&#039;partmatlabD&#039;&#039;&#039; allows running MATLAB&#039;s Distributed Parallel Server across the main cluster. &lt;br /&gt;
* &#039;&#039;&#039;partmatlabN&#039;&#039;&#039; provides access to the large MATLAB node with 384 cores and 11 TB of shared memory. It is useful for running parallel MATLAB jobs with the Parallel Toolbox.&lt;br /&gt;
* &#039;&#039;&#039;partdev&#039;&#039;&#039; is dedicated to development. All HPCC users have access to this partition, which is assigned the resources of one computational node with 16 cores, 64 GB of memory and 2 GPUs (K20m). This partition has a time limit of 4 hours. A minimal example batch script for this partition is shown below.&lt;br /&gt;
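&lt;br /&gt;
The sketch below is a minimal batch script for the development partition. The resource requests follow the &#039;&#039;&#039;partdev&#039;&#039;&#039; limits listed above; the executable and file names are placeholders only.&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --job-name=dev_test          # any short job name&lt;br /&gt;
 #SBATCH --partition=partdev          # development partition (4-hour limit)&lt;br /&gt;
 #SBATCH --nodes=1&lt;br /&gt;
 #SBATCH --ntasks=1&lt;br /&gt;
 #SBATCH --cpus-per-task=4            # up to 16 cores are available on the development node&lt;br /&gt;
 #SBATCH --gres=gpu:1                 # up to 2 K20m GPUs are available&lt;br /&gt;
 #SBATCH --time=01:00:00              # must stay under the 4-hour partition limit&lt;br /&gt;
 #SBATCH --output=dev_test_%j.out     # %j expands to the SLURM job ID&lt;br /&gt;
 cd $SLURM_SUBMIT_DIR                 # run from the directory the job was submitted from&lt;br /&gt;
 ./my_program input.dat               # placeholder executable and input file&lt;br /&gt;
&lt;br /&gt;
Such a script would be submitted with &#039;&#039;sbatch script.slurm&#039;&#039; and monitored with &#039;&#039;squeue -u $USER&#039;&#039;.&lt;br /&gt;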
&lt;br /&gt;
= Hours of Operation =&lt;br /&gt;
In order to maximize the use of resources, HPCC applies a “rolling” maintenance scheme across all systems. When downtime is needed, HPCC will notify all users a week or more in advance (unless an emergency occurs). The fourth Tuesday morning of each month, from 8:00 AM to 12:00 PM, is normally reserved (but not always used) for scheduled maintenance. Please plan accordingly. Unplanned maintenance to remedy system related problems may be scheduled as needed outside the above-mentioned window. Reasonable attempts will be made to inform users running on the affected systems when these needs arise. Note that users are strongly encouraged to use checkpoints in their jobs.&lt;br /&gt;
&lt;br /&gt;
= User Support =&lt;br /&gt;
Users are strongly encouraged to read this Wiki carefully before submitting ticket(s) for help. In particular, the sections on compiling and running parallel programs, and the section on the SLURM batch queueing system will give you the essential knowledge needed to use the CUNY HPCC systems.  We have strived to maintain the most uniform user applications environment possible across the Center&#039;s systems to ease the transfer of applications&lt;br /&gt;
and run scripts among them.  &lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY&lt;br /&gt;
community in parallel programming techniques, HPC computing architecture, and the essentials of using our&lt;br /&gt;
systems. Please follow our mailings on the subject and feel free to inquire about such courses.  We regularly schedule training visits and classes at the various CUNY campuses.  Please let us know if such a training visit is of interest.  In the past, topics have included an overview of parallel programming, GPU programming and architecture, using the evolutionary biology software at the HPC Center, the SLURM queueing system at the CUNY HPC Center, mixed GPU-MPI and OpenMP programming, etc.  Staff have also presented guest lectures in formal classes throughout the CUNY campuses.    &lt;br /&gt;
&lt;br /&gt;
If you have problems accessing your account and cannot login to the ticketing service, please send an email to:&lt;br /&gt;
&lt;br /&gt;
  [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu] &lt;br /&gt;
&lt;br /&gt;
= Warnings and modes of operation =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. hpchelp@csi.cuny.edu is for questions and account-help communication &#039;&#039;&#039;only&#039;&#039;&#039; and does not accept tickets unless the ticketing system is not operational. For tickets please use the ticketing system mentioned above. This ensures that the staff member with the most appropriate skill set and job-related responsibility will respond to your questions. During the business week you should expect a response within 48 hours, often even a same-day response. During the weekend you may not get any response. &lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;&#039;E-mails to hpchelp@csi.cuny.edu must have a valid CUNY e-mail as the reply address.&#039;&#039;&#039; Messages originating from public mailers (Gmail, Hotmail, etc.) are filtered out.&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;&#039;Do not send questions to individual CUNY HPC Center staff members directly.&#039;&#039;&#039;  These will be returned to the sender with a polite request to submit a ticket or email the Helpline.  This applies to replies to initial questions as well.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff members are focused on providing high quality support to its user community, but compared&lt;br /&gt;
to other HPC Centers of similar size &#039;&#039;&#039;our staff is extremely lean&#039;&#039;&#039;.  Please make full use of the tools that we have provided (especially&lt;br /&gt;
the Wiki), and feel free to offer suggestions for improved service.  We hope and expect your experience in using&lt;br /&gt;
our systems will be predictably good and productive.&lt;br /&gt;
&lt;br /&gt;
= User Manual =&lt;br /&gt;
The old version of the user manual provides PBS, not SLURM, batch scripts as examples. CUNY-HPCC currently uses the SLURM scheduler, so users must check and use only the updated brief SLURM manual distributed with new accounts, or ask CUNY-HPCC for a copy.&lt;/div&gt;</summary>
		<author><name>Alex</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=999</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=999"/>
		<updated>2026-04-15T18:52:13Z</updated>

		<summary type="html">&lt;p&gt;Alex: /* Recovery of  operational costs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
[[Image:hpcc-panorama3.png]]&lt;br /&gt;
&lt;br /&gt;
== Mission of CUNY-HPCC ==&lt;br /&gt;
The City University of New York (CUNY) High Performance Computing Center (HPCC) serves as a pivotal research and educational hub for the university. Situated on the campus of the College of Staten Island, located at 2800 Victory Boulevard, Staten Island, New York 10314, the center’s primary objective is to enhance educational opportunities and foster scientific research and discovery within the university. This is achieved through the management of state-of-the-art computing infrastructure and the provision of comprehensive research support services. Notably, CUNY-HPCC offers domain-specific expertise in various aspects of computationally intensive research. Furthermore, CUNY’s membership in the Empire AI (EAI) consortium positions CUNY-HPCC as a stepping stone for CUNY researchers seeking access to EAI advanced facilities.   &lt;br /&gt;
&lt;br /&gt;
== Empire AI and CUNY-HPCC ==&lt;br /&gt;
The Empire AI Consortium comprises the &#039;&#039;&#039;CUNY Graduate Center, Columbia University, Cornell University, Icahn School of Medicine, New York University, Rochester Institute of Technology, Rensselaer Polytechnic Institute, the State University of New York, University at Buffalo, and University of Rochester.&#039;&#039;&#039; CUNY-HPCC provides support and maintains tickets for all CUNY users with allocation on EAI. Additionally, CUNY-HPCC serves as a stepping stone for CUNY researchers as it operates (on a smaller scale) architectures (including nodes with Hopper) similar to EAI, including extended “Alpha” servers and new “Beta” computers. The latter will consist of 288 B200 GPUs and recently added RTX 6000 Pro nodes. The anticipated cost for EAI is $0.50 per unit (SU), which will provide CUNY PIs with a rate that is significantly lower than a typical AWS rate. One SU corresponds to one hour of H100 compute, and one hour of B200 compute corresponds to two SU. In comparison, the CUNY-HPCC recovery costs for public servers are $0.015 per CPU hour (1 unit) and $0.09 per GPU hour (6 units). For further details, please refer to the section on HPCC access plans.     &lt;br /&gt;
&lt;br /&gt;
== CUNY-HPCC services ==&lt;br /&gt;
     &lt;br /&gt;
CUNY-HPCC offers a professionally maintained, modern computational environment and architectures, along with advanced storage and fast interconnects. CUNY-HPCC serves the following purposes:     &lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;*&amp;lt;/nowiki&amp;gt; Supports research computing at CUNY, benefiting faculty, their collaborators at other universities, and their public and private sector partners. It also supports CUNY students and research staff.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;*&amp;lt;/nowiki&amp;gt; Provides state-of-the-art computing resources and comprehensive research support services, including expertise and full support for users with allocation on EMPIRE-AI.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;*&amp;lt;/nowiki&amp;gt; Creates opportunities for the CUNY research community to establish new partnerships with the government and private sectors.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;*&amp;lt;/nowiki&amp;gt; Utilizes HPCC capabilities to acquire additional research resources for its faculty and graduate students in existing and major new programs.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;*&amp;lt;/nowiki&amp;gt; Maintains tickets for all CUNY users with allocation on EAI. &lt;br /&gt;
&lt;br /&gt;
== Organization of systems and data storage (architecture) ==&lt;br /&gt;
All user data and project data are kept on the Parallel File System Storage (PFSS), which is mounted only on the login node(s) of all servers. It holds both the user directories and a specific partition called &#039;&#039;&#039;/scratch&#039;&#039;&#039; (see below). The main features of the &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition are: &#039;&#039;&#039;1.&#039;&#039;&#039; it is mounted on all computational nodes and on all login nodes; &#039;&#039;&#039;2.&#039;&#039;&#039; it is fast; &#039;&#039;&#039;3.&#039;&#039;&#039; it is temporary space and is &#039;&#039;&#039;not a home directory&#039;&#039;&#039; for accounts, nor can it be used for long-term data preservation. Users must use the &amp;quot;staging&amp;quot; procedure described below to ensure preservation of their data, codes and parameter files. The figure below is a schematic of the environment.   &lt;br /&gt;
&lt;br /&gt;
Upon  registering with HPCC every user will get 2 directories:&lt;br /&gt;
&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – this is temporary workspace on the HPC systems. Currently scratch resides on the same file system as global/u.&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – space for “home directory”, i.e., storage space on the DSMS for program, scripts, and data&lt;br /&gt;
&lt;br /&gt;
:•	In some instances a user will also have use of disk space on the DSMS in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;projectid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; (IRods). &lt;br /&gt;
[[File:HPCC_structure.png|center|frameless|900x900px]]&lt;br /&gt;
The &#039;&#039;&#039;/global/u/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory has a quota (see below for details) while &#039;&#039;&#039;/scratch/&amp;lt;userid&amp;gt;&#039;&#039;&#039; does not. However, the &#039;&#039;&#039;/scratch&#039;&#039;&#039; space is cleaned up following the rules described below. There are no guarantees of any kind that files in &#039;&#039;&#039;/scratch&#039;&#039;&#039; will be preserved during hardware crashes or clean-ups. Access to all HPCC resources is provided by a bastion host called &#039;&#039;&#039;Chizen&#039;&#039;&#039;. The Data Transfer Node, called &#039;&#039;&#039;Cea&#039;&#039;&#039;, allows file transfer from/to remote sites directly to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; or to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;.&lt;br /&gt;
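&lt;br /&gt;
The sketch below illustrates the general staging pattern mentioned above: copy inputs from the home area to &#039;&#039;&#039;/scratch&#039;&#039;&#039;, run the job there, then copy the results back. The partition, path and program names are placeholders only.&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --job-name=staged_run&lt;br /&gt;
 #SBATCH --partition=partnsf                      # example public partition; use the one you are granted&lt;br /&gt;
 #SBATCH --ntasks=16&lt;br /&gt;
 #SBATCH --time=24:00:00&lt;br /&gt;
 WORKDIR=/scratch/$USER/$SLURM_JOB_ID             # temporary work area on the fast /scratch space&lt;br /&gt;
 mkdir -p $WORKDIR&lt;br /&gt;
 cp /global/u/$USER/myproject/input.dat $WORKDIR/ # stage input data in&lt;br /&gt;
 cd $WORKDIR&lt;br /&gt;
 srun ./my_mpi_solver input.dat &amp;gt; output.log       # placeholder MPI program; srun launches all 16 tasks&lt;br /&gt;
 mkdir -p /global/u/$USER/myproject/results&lt;br /&gt;
 cp output.log /global/u/$USER/myproject/results/ # stage results back to the home area&lt;br /&gt;
 rm -rf $WORKDIR                                  # clean up scratch when done&lt;br /&gt;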
&lt;br /&gt;
==Computing architectures at HPCC==&lt;br /&gt;
&lt;br /&gt;
The HPC Center employs a diverse range of architectures to accommodate intricate and demanding workflows. All computational resources of various types are consolidated into a single hybrid cluster known as Arrow. This cluster comprises symmetric multiprocessor (SMP) nodes with and without GPUs, distributed shared memory (NUMA) node(s), high-memory nodes, and advanced SMP nodes featuring multiple GPUs. The number of GPUs per node varies between two and eight, as do the GPU interface and GPU family. The basic GPU nodes are equipped with two Tesla K20m GPUs connected via the PCIe interface, while the most advanced nodes support eight Ampere A100 GPUs connected via the SXM interface.    &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Overview of Computational architectures&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SMP&#039;&#039;&#039; servers have several processors (working under a single operating system) which &amp;quot;share everything&amp;quot;: all CPU cores allocate from a common memory block via a shared bus or data path. SMP servers support all combinations of memory vs. CPU (up to the limits of the particular computer). SMP servers are commonly used to run sequential or thread-parallel (e.g. OpenMP) jobs, and they may or may not have GPUs.  &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;cluster&#039;&#039;&#039; is defined as a single system comprising a set of servers interconnected with a high performance network. Specific software coordinates programs on and/or across these servers in order to perform computationally intensive tasks. The most common cluster type consists of several identical SMP servers connected via a fast interconnect. Each SMP member of the cluster is called a &#039;&#039;&#039;node&#039;&#039;&#039;. All nodes run independent copies of the same operating system (OS). Some or all of the nodes may incorporate GPUs.  &lt;br /&gt;
&lt;br /&gt;
Hybrid clusters combine nodes of different architectures. For instance, the main CUNY-HPCC machine is a hybrid cluster called &#039;&#039;&#039;Arrow&#039;&#039;&#039;. Sixty-two (62) of its nodes are identical GPU-enabled SMP servers, each with 2 x K20m GPUs; 3 are SMP nodes with extended memory (fat nodes); one node is a distributed shared memory node (NUMA, see below); and 2 are fat SMP servers especially designed to support 8 NVIDIA GPUs per node, connected via the SXM interface. In addition, HPCC operates the cluster &#039;&#039;&#039;Herbert&#039;&#039;&#039;, dedicated only to education. &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;distributed shared memory&#039;&#039;&#039; computer is a tightly coupled server in which the memory is physically distributed but logically unified as a single block. The system resembles an SMP, but the possible number of CPU cores and amount of memory are far beyond the limitations of SMP. Because the memory is distributed, access times across the address space are non-uniform; hence this architecture is called Non-Uniform Memory Access (NUMA). Similarly to SMP, &#039;&#039;&#039;NUMA&#039;&#039;&#039; systems are typically used for applications such as data mining and decision support systems, in which processing can be parceled out to a number of processors that collectively work on common data. HPCC operates the &#039;&#039;&#039;NUMA&#039;&#039;&#039; node of Arrow, named &#039;&#039;&#039;Appel&#039;&#039;&#039;. This node does not have GPUs. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039; Infrastructure systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
o	The Master Head Node (&#039;&#039;&#039;MHN/Arrow)&#039;&#039;&#039; is a redundant login node from which all jobs on all servers start. This server is not directly accessible from outside the CSI campus. Note that the main server and its login nodes share the same name, Arrow; thus users can access the Arrow login nodes using the name Arrow or MHN.  &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Chizen&#039;&#039;&#039; is a redundant gateway server which provides access to protected HPCC domain.&lt;br /&gt;
&lt;br /&gt;
o	 &#039;&#039;&#039;Cea&#039;&#039;&#039; is a file transfer node allowing transfer of files between users’ computers and the /scratch space or /global/u/&amp;lt;userid&amp;gt;. &#039;&#039;&#039;Cea&#039;&#039;&#039; is accessible directly (not only via &#039;&#039;&#039;Chizen&#039;&#039;&#039;), but allows only a limited set of shell commands.  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Table 1&#039;&#039;&#039; below provides a quick summary of the attributes of each of the sub-clusters of the main HPC Center cluster, Arrow.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Master Head Node&lt;br /&gt;
!Sub System&lt;br /&gt;
!Tier&lt;br /&gt;
!Type&lt;br /&gt;
!Type of Jobs&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores&lt;br /&gt;
!GPUs&lt;br /&gt;
!Mem/node&lt;br /&gt;
!Mem/core&lt;br /&gt;
!Chip Type&lt;br /&gt;
!GPU Type and Interface&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;17&amp;quot; |&#039;&#039;&#039;&amp;lt;big&amp;gt;Arrow&amp;lt;/big&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Penzias&lt;br /&gt;
| rowspan=&amp;quot;10&amp;quot; |Advanced&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Hybrid Cluster&lt;br /&gt;
|Sequential &amp;amp; Parallel jobs w/wo GPU&lt;br /&gt;
|66&lt;br /&gt;
|16 &lt;br /&gt;
|2&lt;br /&gt;
|64 GB&lt;br /&gt;
|4 GB&lt;br /&gt;
|SB, EP 2.20 GHz&lt;br /&gt;
|K20m GPU, PCIe v2&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Sequential &amp;amp; Parallel jobs&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |1&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|1500 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|36&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|32 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Appel&lt;br /&gt;
|NUMA&lt;br /&gt;
|Massive Parallel, sequential, OpenMP&lt;br /&gt;
|1&lt;br /&gt;
|384&lt;br /&gt;
| -&lt;br /&gt;
|11 TB&lt;br /&gt;
|28 GB&lt;br /&gt;
|IB, 3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Cryo&lt;br /&gt;
|SMP&lt;br /&gt;
|Sequential and Parallel jobs, with GPU&lt;br /&gt;
|1&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|1500 GB&lt;br /&gt;
|37 GB&lt;br /&gt;
|SL, 2.40 GHz&lt;br /&gt;
|V100 (32GB) GPU, SXM&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Blue Moon&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Hybrid Cluster&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Sequential and Parallel jobs w/wo GPU&lt;br /&gt;
|24&lt;br /&gt;
|32&lt;br /&gt;
| -&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |192 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |6 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SL, 2.10 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|32 &lt;br /&gt;
|2&lt;br /&gt;
|V100(16GB) GPU, PCIe&lt;br /&gt;
|-&lt;br /&gt;
|Karle&lt;br /&gt;
|SMP&lt;br /&gt;
|Visualization, MATLAB/Mathematica&lt;br /&gt;
|1&lt;br /&gt;
|36*&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Chizen&lt;br /&gt;
|Gateway&lt;br /&gt;
|No jobs allowed&lt;br /&gt;
| colspan=&amp;quot;7&amp;quot; | -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CFD&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot; |Parallel, Seq, OpenMP &lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|768 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 4.8 GHz&lt;br /&gt;
|A40, PCIe, v4 &lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |PHYS&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|640 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4 GHz&lt;br /&gt;
|L40, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CHEM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.0 GHz&lt;br /&gt;
|A100/40, SXM&lt;br /&gt;
|-&lt;br /&gt;
|ASRC&lt;br /&gt;
|Condo&lt;br /&gt;
|SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|}&lt;br /&gt;
Note: SB = Intel(R) Sandy Bridge, HL = Intel (R) Haswell, IB = Intel (R) Ivy Bridge, SL = Intel (R) Xeon(R) Gold, ER  = AMD(R) EPYC ROMA, EM = AMD(R) EPYC MILAN, EG = AMD (R) EPYC GENOA   &lt;br /&gt;
&lt;br /&gt;
= Recovery of  operational costs =&lt;br /&gt;
CUNY-HPCC, a not-for-profit core research facility affiliated with CUNY, is dedicated to supporting a wide range of research endeavors that necessitate advanced computational resources. &amp;lt;u&amp;gt;Notably, CUNY-HPCC’s operations are not directly or indirectly funded by CUNY or the College of Staten Island (CSI). Consequently, CUNY-HPCC employs a cost recovery model that exclusively recoups operational expenses, without generating any profit for the HPCC.&amp;lt;/u&amp;gt; The recovered costs are meticulously calculated using comprehensive documentation of actual operational expenditures and are designed to achieve a break-even point for all CUNY users. This methodology is approved by CUNY-RF and is employed in other CUNY research facilities. The cost recovery charging schema is based on unit-hour usage, encompassing both CPU and GPU units. Definitions for these units are provided in the accompanying table. &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Definitions of unit-hour&lt;br /&gt;
!Type of resource&lt;br /&gt;
!Unit-hour &lt;br /&gt;
!For V100, A30, A40 or L40&lt;br /&gt;
!For A100&lt;br /&gt;
|-&lt;br /&gt;
|CPU unit &lt;br /&gt;
|1 cpu core/hour&lt;br /&gt;
| --&lt;br /&gt;
| --&lt;br /&gt;
|-&lt;br /&gt;
|GPU unit &lt;br /&gt;
|(4 cpu cores + 1 GPU thread )/hour&lt;br /&gt;
|4 cpu cores + 1 GPU&lt;br /&gt;
|4 cpu cores and 1/7 A100 &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Compute on public resources ===&lt;br /&gt;
The cost recovery model for public (non-condominium) servers offers the following options:&lt;br /&gt;
&lt;br /&gt;
# Minimal Access Plan (MAP)&lt;br /&gt;
# Compute on Demand (CODP)&lt;br /&gt;
# Lease a node(s)&lt;br /&gt;
&lt;br /&gt;
==== Minimal Access Plan (MAP) ====&lt;br /&gt;
Colleges may participate in any of the Minimal Access Plan tiers. The tiered pricing structure is as follows:&lt;br /&gt;
&lt;br /&gt;
- Tier A: $5,000 per year&lt;br /&gt;
&lt;br /&gt;
- Tier B: $15,000 per year&lt;br /&gt;
&lt;br /&gt;
- Tier C: $25,000 per year&lt;br /&gt;
&lt;br /&gt;
Within each tier, the cost is &#039;&#039;&#039;$0.015 per CPU hour and $0.09 per GPU hour.&#039;&#039;&#039; It is important to note that the Minimum Access Plan (MAP) tiers A, B, and C are &#039;&#039;&#039;not all-inclusive.&#039;&#039;&#039; Therefore, even if a college pays for a higher tier, it does not guarantee unlimited use by all college employees, faculty, and students throughout the year.&lt;br /&gt;
&lt;br /&gt;
# The MAP plan is indirectly linked to the number of hours used. Consequently, the definition of “up to 12 users for B tier” should not be interpreted as “all users up to 12 per college receive unlimited access to HPCC resources.”&lt;br /&gt;
# The number of users per tier is determined by statistical analysis of resource usage and of the average duration of a job across all CUNY institutions. This means that if usage from a college exceeds the hours encoded in the MAP fee, the additional hours will be charged at the preferred rate of $0.015 per CPU hour and $0.09 per GPU hour.&lt;br /&gt;
# Furthermore, the &amp;lt;u&amp;gt;MAP fee may fully cover the expenses for individuals from a given college for a year,&amp;lt;/u&amp;gt; but it may not. This depends on the actual usage and type of resources, as well as on how many additional individuals from the same college join during the year. It is crucial to understand that allocation on HPCC is not restricted; our focus is solely on enabling the completion of tasks. &lt;br /&gt;
# The free 11,520 CPU hours and 1,440 GPU hours are allocated per PI account and project and are available only under MAP-B and MAP-C (see below). These free hours are intended to facilitate project establishment; the PI can therefore ask a member or members of the group to use the time and explore new research opportunities. For instance, a PI receives the free hours upon creating the account; if the PI later hires one or more graduate students, they may share these free hours as long as they work on the same project.&lt;br /&gt;
# It is important to note that each GPU requires at least four CPU cores to operate. Therefore, a request for one GPU is billed as one GPU plus four CPU cores, i.e. $0.09 + 4 x $0.015 = $0.15 per unit hour for that unit (units are explained above). Note that not all GPUs support virtualization, so depending on the GPU type a unit may include the whole GPU.  &lt;br /&gt;
&lt;br /&gt;
==== Compute on demand (CODP) ====&lt;br /&gt;
Users from colleges that do not participate in MAP (A,B,C) are charged &#039;&#039;&#039;$0.018 per CPU hour and $0.11 per GPU hour.&#039;&#039;&#039; There is no free time associated with CODP.  &lt;br /&gt;
&lt;br /&gt;
==== Lease a public node(s) ====&lt;br /&gt;
Users may lease node(s) for a project. This ensures 100% access and no time or job limitations on the leased resource. The minimum lease time is 30 days (one month). Longer leases (more than 90 days) receive a 10% discount. &#039;&#039;&#039;MAP users&#039;&#039;&#039; are charged between $172.80 and $950.40 per month (see below), depending on the type of node. &#039;&#039;&#039;Non-MAP&#039;&#039;&#039; users are charged between $249.82 and $1,399.68 per month, depending on the type of node. Please see below for details and examples. &lt;br /&gt;
&lt;br /&gt;
=== Condo ===&lt;br /&gt;
Condo users only pay for infrastructure support.  The annual fee depends on the type of node and ranges from $1,540 to $4,520 per year. Please see table below for details. &lt;br /&gt;
&lt;br /&gt;
=== Storage ===&lt;br /&gt;
Storage costs are $60 per TB per year, backup costs are $45 per TB per year, and archive costs are $35 per TB per year. The first 50 GB of scratch storage are free. Prices are calculated at the end of each month.&lt;br /&gt;
&lt;br /&gt;
# Additionally, note that &#039;&#039;&#039;file transfers from/to HPCC are free&#039;&#039;&#039;, but it is important to consider the CUNY network speed, which is significantly slower than modern standards. This will affect the time required to download large data sets. For large data sets HPCC uses, and recommends using, secure parallel transfer via Globus.&lt;br /&gt;
# All services associated with data and storage provided by HPCC are free for CUNY users.&lt;br /&gt;
&lt;br /&gt;
=== HPCC access plans details and examples  ===&lt;br /&gt;
a.     &#039;&#039;&#039;Minimum access (MAP):&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Minimum access is designed to provide wide support for research activities in any college, to promote collaboration between colleges, to help establish new research projects, and/or to be a testbed for new studies. MAP accounts operate under a strict fair share policy, so the actual waiting time for a job in a queue depends on the resources used by that account in previous cycles. In addition, all jobs have strict time limitations; therefore long jobs must use checkpoints.&lt;br /&gt;
&lt;br /&gt;
The MAP has 3 tiers:&lt;br /&gt;
&lt;br /&gt;
·     A: The Basic tier fee is $5,000 per year. It is designed to support users from colleges with a low level of research activity. The fee covers infrastructure expenses associated with 1-2 users from these colleges. &lt;br /&gt;
&lt;br /&gt;
·     B: The Medium tier fee is $15,000 per year. The fee covers infrastructure expenses of up to 12 users from these colleges. In addition, every account under the Medium tier receives 11,520 free CPU hours and 1,440 free GPU hours upon opening. &lt;br /&gt;
&lt;br /&gt;
·      C: The Advanced tier fee is $25,000 per year. The fee covers infrastructure expenses for all users from these colleges. In addition, every new account from this tier receives 11,520 free CPU hours and 1,440 free GPU hours upon opening.  &lt;br /&gt;
&lt;br /&gt;
MAP users are charged per CPU/GPU hour at the low rate of &#039;&#039;&#039;&amp;lt;u&amp;gt;$0.015 per CPU hour and $0.09 per GPU hour&amp;lt;/u&amp;gt;.&#039;&#039;&#039;   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for MAP users example&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.015/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.24/hour&lt;br /&gt;
|-&lt;br /&gt;
| 4 cores + 1 GPU &lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.15/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.33/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2  &lt;br /&gt;
|$0.42/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2  &lt;br /&gt;
|$0.66/hour&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$1.32/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
b.     &#039;&#039;&#039;Computing on demand (CODP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The Computing on Demand Plan (CODP) is open to all users from CUNY colleges that do not participate in the MAP plan but want to use HPCC resources. CODP accounts operate under a strict fair share policy, so the actual waiting time for a job in a queue depends on the resources previously used. In addition, all jobs have time limitations, so long jobs must use checkpoints. CODP users are charged for the time (CPU and GPU) per hour. The current rates are &#039;&#039;&#039;$0.018 per CPU hour and $0.11 per GPU hour.&#039;&#039;&#039; Unlike MAP, new CODP accounts do not come with free time. Invoices are generated and sent to users (PI only) at the end of each month. The examples in the following table explain the fee structure:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for CODP plan&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.018/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.288/hour&lt;br /&gt;
|-&lt;br /&gt;
|4 cores + 1 GPU&lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.293/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.334/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 1 GPU&lt;br /&gt;
|32&lt;br /&gt;
|1  &lt;br /&gt;
|$0.666/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU &lt;br /&gt;
|32&lt;br /&gt;
|2&lt;br /&gt;
|$0.756/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
c.  &#039;&#039;&#039;Leasing public node(s) (LNP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The lease node plan (LNP) allows users to lease node(s) for the duration of a project. The minimum lease time is 30 days (one month), but leases of any length are possible. A 10% discount is given to users whose lease is longer than 90 days. Discounts cannot be combined. Unlike MAP and CODP users, LNP users do not compete for resources and have full access to the leased resources 24/7.   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for leasing node(s) for MAP users&lt;br /&gt;
|Job (MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/30 days&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$172.80&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$264.96&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2  &lt;br /&gt;
|$302.40&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$475.20&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$760.0&lt;br /&gt;
|-&lt;br /&gt;
|64 cores + 8 GPU&lt;br /&gt;
|64&lt;br /&gt;
|8&lt;br /&gt;
|$950.40&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for leasing node(s) for &amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; non &amp;lt;/span&amp;gt;-MAP users&lt;br /&gt;
|Job (non-MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/month&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$249.82&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$497.64&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1 &lt;br /&gt;
|$443.23&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$886.64&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40 &lt;br /&gt;
|8&lt;br /&gt;
|$1399.68&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
d.     &#039;&#039;&#039;Condo Ownership (COP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Condo describes a model in which a user (or users) owns a node/server managed by HPCC. Only full-time faculty can own a condo node. Condo nodes are fully integrated into the HPCC infrastructure. The owners pay only HPCC’s infrastructure support operational fee, which covers only a proportional part of the licenses and materials needed for day-to-day operations. The fees are reviewed twice a year and are currently $0.003 per CPU hour and $0.02 per GPU hour. Condo owners can “borrow” (upon agreement) free of charge any node(s) from the condo stack, and can also lease (for a higher fee – see below) their own nodes to non-condo users. The minimum lease time is 30 days. The fees collected from non-condo users offset the owner’s payments.   &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Condo owners costs per year&lt;br /&gt;
|Type of condo node&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/year&lt;br /&gt;
|-&lt;br /&gt;
|Large hybrid SXM&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|$4518.92 &lt;br /&gt;
|-&lt;br /&gt;
|Small hybrid&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|$1540.54&lt;br /&gt;
|-&lt;br /&gt;
|Medium compute&lt;br /&gt;
|96&lt;br /&gt;
|0&lt;br /&gt;
|$2464.86&lt;br /&gt;
|-&lt;br /&gt;
|Large compute&lt;br /&gt;
|128&lt;br /&gt;
|0&lt;br /&gt;
|$3286.49&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Condo owners can lease their node(s) to other, non-condo users. The rental period is unlimited, with a minimum length of 30 days. The table below shows the payments non-condo users make to the condo owners. These fees accumulate in the owner’s account and offset the owner’s fees. A discount of 10% is applied to leases longer than 90 days.    &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Type of nodes and monthly lease fees for condo nodes&lt;br /&gt;
!Type of node&lt;br /&gt;
!Renters cost/month&lt;br /&gt;
!Long term (90+ days) rent cost/month&lt;br /&gt;
!CPU/node&lt;br /&gt;
!CPU type&lt;br /&gt;
!GPU/node &lt;br /&gt;
!GPU type&lt;br /&gt;
!GPU interface&lt;br /&gt;
|-&lt;br /&gt;
|Large Hybrid&lt;br /&gt;
|$602.52&lt;br /&gt;
|$564.86&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.2 GHz&lt;br /&gt;
|8&lt;br /&gt;
|A100/80&lt;br /&gt;
|SXM&lt;br /&gt;
|-&lt;br /&gt;
|Small Hybrid&lt;br /&gt;
|$205.41&lt;br /&gt;
|$192.57&lt;br /&gt;
|48&lt;br /&gt;
|EPYC, 2.8 GHz&lt;br /&gt;
|2&lt;br /&gt;
|A40, A30, L40&lt;br /&gt;
|PCIe v4&lt;br /&gt;
|-&lt;br /&gt;
|Medium Non GPU&lt;br /&gt;
|$328.65&lt;br /&gt;
|$308.11&lt;br /&gt;
|96&lt;br /&gt;
|EPYC, 4.11GHz&lt;br /&gt;
|0&lt;br /&gt;
|None&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|Large Non GPU&lt;br /&gt;
|$438.20&lt;br /&gt;
|$410.81&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.0 GHz&lt;br /&gt;
|0&lt;br /&gt;
|None &lt;br /&gt;
|NA&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Free time ===&lt;br /&gt;
Any new project &#039;&#039;&#039;from CUNY colleges/centers that participate in MAP-B or MAP-C&#039;&#039;&#039; is entitled to &#039;&#039;&#039;11,520 free CPU hours and 1,440 free GPU hours.&#039;&#039;&#039; Users under &#039;&#039;&#039;MAP-A&#039;&#039;&#039; are not entitled to free time. The free compute hours are intended to help establish a project and are therefore shared by all members of the project; they can be used either by the PI or by any number of the project&#039;s members. It is important to note that &amp;lt;u&amp;gt;free time is per project, not per user account, so any project can receive free time only once. Collaborators external to CUNY are normally not eligible for free time.&amp;lt;/u&amp;gt; Hours beyond the free time are charged at MAP rates. &#039;&#039;&#039;&amp;lt;u&amp;gt;Please contact the CUNY-HPCC director for further details.&amp;lt;/u&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Support for research grants =&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;u&amp;gt;All proposals dated January 1st, 2026 (&amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; 01/01/26 &amp;lt;/span&amp;gt;) or later&amp;lt;/u&amp;gt;&#039;&#039;&#039; that require computational resources &#039;&#039;&#039;&amp;lt;u&amp;gt;must include a budget for cost recovery fees at CUNY-HPCC.&amp;lt;/u&amp;gt;&#039;&#039;&#039;  For a project the PI can choose to: &lt;br /&gt;
&lt;br /&gt;
* lease node(s). This is a useful option for well-defined projects and for those with a large computational component that requires 100% availability of the computational resource. &lt;br /&gt;
* use &amp;quot;on-demand&amp;quot; resources. This is a flexible option, well suited to experimental projects or to exploring new areas of study. The downside is that resources are shared among all users under the fair share policy, so immediate access to a resource cannot be guaranteed. &lt;br /&gt;
* participate in the CONDO tier. This is the most beneficial option in terms of availability of resources and level of support. It best fits the focused research of a group or groups (e.g. materials science). &lt;br /&gt;
&lt;br /&gt;
In all cases the PI can use the appropriate rates listed above to establish a correct budget for the proposal. The PI should &#039;&#039;&#039;&amp;lt;u&amp;gt;contact the Director of CUNY-HPCC, Dr. Alexander Tzanov&amp;lt;/u&amp;gt;&#039;&#039;&#039; (alexander.tzanov@csi.cuny.edu) to discuss the project&#039;s computational requirements, including optimal and most economical computational workflows, suitable hardware, shared or owned resources, CUNY-HPCC support options, and any other matter concerning a correct and optimal computational budget for the proposal.    &lt;br /&gt;
&lt;br /&gt;
= Partitions and jobs =&lt;br /&gt;
The only way to submit job(s) to HPCC servers is through the SLURM batch system. Any job, regardless of its type (interactive, batch, serial, parallel, etc.), must be submitted via SLURM. SLURM allocates the requested resources on the proper server and starts the job(s) according to a predefined strict fair share policy. Computational resources (CPU cores, memory, GPUs) are organized into &#039;&#039;&#039;partitions&#039;&#039;&#039;. Users are granted permission to use one or another partition and the corresponding QOS key. The table below shows the partitions and their limitations (in progress).&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Partition&lt;br /&gt;
!Max cores/job&lt;br /&gt;
!Max jobs/user&lt;br /&gt;
!Total cores/group&lt;br /&gt;
!Time limits&lt;br /&gt;
!Tier&lt;br /&gt;
!&lt;br /&gt;
!GPU types&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|partnsf&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|K20m, V100/16, A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partchem&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A100/80, A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partcfd&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partsym&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partasrc&lt;br /&gt;
|48&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabD&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|V100/16,A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabN&lt;br /&gt;
|384&lt;br /&gt;
|50&lt;br /&gt;
|384&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|None&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partphys&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|L40&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;partnsf&#039;&#039;&#039; is the main partition with assigned resources across all sub-servers. Users may submit sequential, thread parallel or distributed parallel jobs with or without GPU.&lt;br /&gt;
* &#039;&#039;&#039;partchem&#039;&#039;&#039; is a CONDO partition. &lt;br /&gt;
* &#039;&#039;&#039;partphys&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partsym&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partasrc&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partmatlabD&#039;&#039;&#039; allows running MATLAB Parallel Server (distributed) jobs across the main cluster. &lt;br /&gt;
* &#039;&#039;&#039;partmatlabN&#039;&#039;&#039; provides access to the large MATLAB node with 384 cores and 11 TB of shared memory. It is useful for running parallel MATLAB jobs with the Parallel Computing Toolbox.&lt;br /&gt;
* &#039;&#039;&#039;partdev&#039;&#039;&#039; is dedicated to development. All HPCC users have access to this partition, with assigned resources of one computational node with 16 cores, 64 GB of memory and 2 GPUs (K20m). This partition has a time limit of 4 hours.&lt;br /&gt;
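&lt;br /&gt;
To illustrate how a job targets a particular partition, the following minimal batch script is a sketch only: the job name, QOS key, resource numbers and executable are placeholders that must be replaced with the values assigned to your own account, and the requests must stay within the limits of the chosen partition.&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --job-name=example          # placeholder job name&lt;br /&gt;
 #SBATCH --partition=partnsf         # one of the partitions listed above&lt;br /&gt;
 #SBATCH --qos=myqos                 # placeholder: use the QOS key assigned to your project&lt;br /&gt;
 #SBATCH --nodes=1&lt;br /&gt;
 #SBATCH --ntasks=16                 # number of CPU cores (tasks)&lt;br /&gt;
 #SBATCH --gres=gpu:1                # request one GPU; omit this line for CPU-only jobs&lt;br /&gt;
 #SBATCH --time=24:00:00             # must not exceed the partition time limit&lt;br /&gt;
 &lt;br /&gt;
 cd $SLURM_SUBMIT_DIR                # start from the directory the job was submitted from&lt;br /&gt;
 srun ./my_program                   # placeholder executable&lt;br /&gt;
&lt;br /&gt;
The script would be submitted with sbatch and monitored with squeue. The exact GPU request syntax and the available QOS keys depend on the account, so treat the above only as a starting template.&lt;br /&gt;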
&lt;br /&gt;
= Hours of Operation =&lt;br /&gt;
In order to maximize the use of resources, HPCC applies a “rolling” maintenance scheme across all systems. When downtime is needed, HPCC will notify all users a week or more in advance (unless an emergency occurs). The morning of the fourth Tuesday of each month, from 8:00 AM to 12:00 PM, is normally reserved (but not always used) for scheduled maintenance; please plan accordingly. Unplanned maintenance to remedy system-related problems may be scheduled as needed outside of the above-mentioned window. Reasonable attempts will be made to inform users running on the affected systems when such needs arise. Note that users are strongly encouraged to use checkpoints in their jobs.&lt;br /&gt;
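&lt;br /&gt;
One scheduler-level way to make a long job checkpoint-friendly is to ask SLURM to deliver a warning signal shortly before the time limit. The fragment below is only a sketch of that idea: the signal choice and the 300-second lead time are arbitrary, and the application itself is assumed to catch the signal and write its own checkpoint.&lt;br /&gt;
&lt;br /&gt;
 #SBATCH --time=24:00:00&lt;br /&gt;
 #SBATCH --signal=USR1@300    # deliver SIGUSR1 to the job steps 300 seconds before the time limit&lt;br /&gt;
 &lt;br /&gt;
 # the application started below is assumed to handle SIGUSR1 by writing a checkpoint file&lt;br /&gt;
 srun ./my_program&lt;br /&gt;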
&lt;br /&gt;
= User Support =&lt;br /&gt;
Users are strongly encouraged to read this Wiki carefully before submitting ticket(s) for help. In particular, the sections on compiling and running parallel programs, and the section on the SLURM batch queueing system will give you the essential knowledge needed to use the CUNY HPCC systems.  We have strived to maintain the most uniform user applications environment possible across the Center&#039;s systems to ease the transfer of applications&lt;br /&gt;
and run scripts among them.  &lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY&lt;br /&gt;
community in parallel programming techniques, HPC computing architecture, and the essentials of using our&lt;br /&gt;
systems. Please follow our mailings on the subject and feel free to inquire about such courses. We regularly schedule training visits and classes at the various CUNY campuses; please let us know if such a training visit is of interest. In the past, topics have included an overview of parallel programming, GPU programming and architecture, using the evolutionary biology software at the HPC Center, the SLURM queueing system at the CUNY HPC Center, mixed GPU-MPI and OpenMP programming, etc. Staff have also presented guest lectures in formal classes throughout the CUNY campuses.&lt;br /&gt;
&lt;br /&gt;
If you have problems accessing your account and cannot login to the ticketing service, please send an email to:&lt;br /&gt;
&lt;br /&gt;
  [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu] &lt;br /&gt;
&lt;br /&gt;
= Warnings and modes of operation =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. hpchelp@csi.cuny.edu is for questions and account-help communication &#039;&#039;&#039;only&#039;&#039;&#039; and does not accept tickets unless the ticketing system is not operational. For tickets, please use the ticketing system mentioned above. This ensures that the staff member with the most appropriate skill set and job-related responsibility will respond to your questions. During the business week you should expect a response within 48 hours, often even the same day. During the weekend you may not get any response. &lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;&#039;E-mails to hpchelp@csi.cuny.edu must have a valid CUNY e-mail as the reply address.&#039;&#039;&#039; Messages originating from public mailers (Google, Hotmail, etc.) are filtered out.&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;&#039;Do not send questions to individual CUNY HPC Center staff members directly.&#039;&#039;&#039;  These will be returned to the sender with a polite request to submit a ticket or email the Helpline.  This applies to replies to initial questions as well.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff is focused on providing high-quality support to its user community, but compared to other HPC centers of similar size &#039;&#039;&#039;our staff is extremely lean&#039;&#039;&#039;. Please make full use of the tools that we have provided (especially the Wiki), and feel free to offer suggestions for improved service. We hope and expect your experience in using our systems will be predictably good and productive.&lt;br /&gt;
&lt;br /&gt;
= User Manual =&lt;br /&gt;
The old version of the user manual provides PBS, not SLURM, batch scripts as examples. CUNY-HPCC currently uses the SLURM scheduler, so users must check and use only the updated brief SLURM manual distributed with new accounts, or ask CUNY-HPCC for a copy of it.&lt;/div&gt;</summary>
		<author><name>Alex</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=998</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=998"/>
		<updated>2026-04-15T18:50:20Z</updated>

		<summary type="html">&lt;p&gt;Alex: /* Computing architectures at HPCC */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
[[Image:hpcc-panorama3.png]]&lt;br /&gt;
&lt;br /&gt;
== Mission of CUNY-HPCC ==&lt;br /&gt;
The City University of New York (CUNY) High Performance Computing Center (HPCC) serves as a pivotal research and educational hub for the university. Situated on the campus of the College of Staten Island, located at 2800 Victory Boulevard, Staten Island, New York 10314, the center’s primary objective is to enhance educational opportunities and foster scientific research and discovery within the university. This is achieved through the management of state-of-the-art computing infrastructure and the provision of comprehensive research support services. Notably, CUNY-HPCC offers domain-specific expertise in various aspects of computationally intensive research. Furthermore, CUNY’s membership in the Empire AI (EAI) consortium positions CUNY-HPCC as a stepping stone for CUNY researchers seeking access to EAI advanced facilities.   &lt;br /&gt;
&lt;br /&gt;
== Empire AI and CUNY-HPCC ==&lt;br /&gt;
The Empire AI Consortium comprises the &#039;&#039;&#039;CUNY Graduate Center, Columbia University, Cornell University, Icahn School of Medicine, New York University, Rochester Institute of Technology, Rensselaer Polytechnic Institute, the State University of New York, University at Buffalo, and University of Rochester.&#039;&#039;&#039; CUNY-HPCC provides support and maintains tickets for all CUNY users with allocation on EAI. Additionally, CUNY-HPCC serves as a stepping stone for CUNY researchers as it operates (on a smaller scale) architectures (including nodes with Hopper) similar to EAI, including extended “Alpha” servers and new “Beta” computers. The latter will consist of 288 B200 GPUs and recently added RTX 6000 Pro nodes. The anticipated cost for EAI is $0.50 per unit (SU), which will provide CUNY PIs with a rate that is significantly lower than a typical AWS rate. One SU corresponds to one hour of H100 compute, and one hour of B200 compute corresponds to two SU. In comparison, the CUNY-HPCC recovery costs for public servers are $0.015 per CPU hour (1 unit) and $0.09 per GPU hour (6 units). For further details, please refer to the section on HPCC access plans.     &lt;br /&gt;
&lt;br /&gt;
== CUNY-HPCC services ==&lt;br /&gt;
     &lt;br /&gt;
CUNY-HPCC offers a professionally maintained, modern computational environment and architectures, along with advanced storage and fast interconnects. CUNY-HPCC serves the following purposes:     &lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;*&amp;lt;/nowiki&amp;gt; Supports research computing at CUNY, benefiting faculty, their collaborators at other universities, and their public and private sector partners. It also supports CUNY students and research staff.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;*&amp;lt;/nowiki&amp;gt; Provides state-of-the-art computing resources and comprehensive research support services, including expertise and full support for users with allocation on EMPIRE-AI.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;*&amp;lt;/nowiki&amp;gt; Creates opportunities for the CUNY research community to establish new partnerships with the government and private sectors.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;*&amp;lt;/nowiki&amp;gt; Utilizes HPCC capabilities to acquire additional research resources for its faculty and graduate students in existing and major new programs.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;*&amp;lt;/nowiki&amp;gt; Maintains tickets for all CUNY users with allocation on EAI. &lt;br /&gt;
&lt;br /&gt;
== Organization of systems and data storage (architecture) ==&lt;br /&gt;
All user data and project data are kept on the Parallel File System Storage (PFSS), which is mounted only on the login node(s) of all servers. It holds both user directories and a specific partition called &#039;&#039;&#039;/scratch&#039;&#039;&#039; (see below). The main features of the &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition are&#039;&#039;&#039;: 1.&#039;&#039;&#039; it is mounted on all computational nodes and on all login nodes, &#039;&#039;&#039;2.&#039;&#039;&#039; it is fast, &#039;&#039;&#039;3.&#039;&#039;&#039; the &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition is temporary space; it is &#039;&#039;&#039;not a home directory&#039;&#039;&#039; for accounts and cannot be used for long-term data preservation. Users must use the &amp;quot;staging&amp;quot; procedure described below to ensure preservation of their data, codes and parameter files. The figure below is a schematic of the environment.&lt;br /&gt;
&lt;br /&gt;
Upon  registering with HPCC every user will get 2 directories:&lt;br /&gt;
&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – this is temporary workspace on the HPC systems. Currently scratch resides on the same file system as global/u.&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – space for “home directory”, i.e., storage space on the DSMS for program, scripts, and data&lt;br /&gt;
&lt;br /&gt;
:•	In some instances a user will also have use of disk space on the DSMS in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;projectid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; (IRods). &lt;br /&gt;
[[File:HPCC_structure.png|center|frameless|900x900px]]&lt;br /&gt;
The &#039;&#039;&#039;/global/u/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory has a quota (see below for details) while &#039;&#039;&#039;/scratch/&amp;lt;userid&amp;gt;&#039;&#039;&#039; does not. However, the &#039;&#039;&#039;/scratch&#039;&#039;&#039; space is cleaned up following the rules described below, and there are no guarantees of any kind that files in &#039;&#039;&#039;/scratch&#039;&#039;&#039; will be preserved during hardware crashes or clean-ups. Access to all HPCC resources is provided by a bastion host called &#039;&#039;&#039;chizen&#039;&#039;&#039;. The Data Transfer Node, called &#039;&#039;&#039;Cea&#039;&#039;&#039;, allows file transfer from/to remote sites directly to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; or to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;.&lt;br /&gt;
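&lt;br /&gt;
Because &#039;&#039;&#039;/scratch&#039;&#039;&#039; is fast but temporary while &#039;&#039;&#039;/global/u&#039;&#039;&#039; is the permanent home area, a typical workflow stages files explicitly. The fragment below is only an illustrative sketch; the project directory and file names are placeholders.&lt;br /&gt;
&lt;br /&gt;
 # stage input data from the home area to scratch (placeholder paths)&lt;br /&gt;
 mkdir -p /scratch/$USER/myrun&lt;br /&gt;
 cp /global/u/$USER/myproject/input.dat /scratch/$USER/myrun/&lt;br /&gt;
 cd /scratch/$USER/myrun&lt;br /&gt;
 &lt;br /&gt;
 # ... run the computation here ...&lt;br /&gt;
 &lt;br /&gt;
 # stage results back for long-term keeping; /scratch may be cleaned up at any time&lt;br /&gt;
 cp -r results /global/u/$USER/myproject/&lt;br /&gt;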
&lt;br /&gt;
==Computing architectures at HPCC==&lt;br /&gt;
&lt;br /&gt;
The HPC Center employs a diverse range of architectures to accommodate intricate and demanding workflows. All computational resources of various types are consolidated into a single hybrid cluster known as Arrow. This cluster comprises symmetric multiprocessor (SMP) nodes with and without GPUs, distributed shared memory (NUMA) node(s), high-memory nodes, and advanced SMP nodes featuring multiple GPUs. The number of GPUs per node varies between two and eight, as do the GPU interface and GPU family. For example, the basic GPU nodes are equipped with two Tesla K20m GPUs connected via the PCIe interface, while the most advanced nodes support eight Ampere A100 GPUs connected via the SXM interface.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Overview of Computational architectures&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SMP&#039;&#039;&#039; servers have several processors (working under a single operating system) which &amp;quot;share everything&amp;quot;: all CPU cores allocate from a common memory block via a shared bus or data path. SMP servers support all combinations of memory vs. CPU (up to the limits of the particular computer). SMP servers are commonly used to run sequential or thread-parallel (e.g. OpenMP) jobs, and they may or may not have GPUs.&lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;cluster&#039;&#039;&#039; is defined as a single system comprising a set of servers interconnected with a high performance network. Specific software coordinates programs on and/or across those servers in order to perform computationally intensive tasks. The most common cluster type consists of several identical SMP servers connected via a fast interconnect. Each SMP member of the cluster is called a &#039;&#039;&#039;node&#039;&#039;&#039;. All nodes run independent copies of the same operating system (OS). Some or all of the nodes may incorporate GPUs.&lt;br /&gt;
&lt;br /&gt;
Hybrid clusters combine nodes of different architectures. For instance, the main CUNY-HPCC machine is a hybrid cluster called &#039;&#039;&#039;Arrow&#039;&#039;&#039;. Sixty-two (62) of its nodes are identical GPU-enabled SMP servers, each with 2 x K20m GPUs; 3 are SMP nodes with extended memory (fat nodes); one node is a distributed shared memory node (NUMA, see below); and 2 are fat SMP servers especially designed to support 8 NVIDIA GPUs per node, connected via the SXM interface. In addition, HPCC operates the cluster &#039;&#039;&#039;Herbert&#039;&#039;&#039;, dedicated only to education. &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;distributed shared memory&#039;&#039;&#039; computer is a tightly coupled server in which the memory is physically distributed but logically unified as a single block. The system resembles an SMP, but the possible number of CPU cores and amount of memory are far beyond the limitations of an SMP. Because the memory is distributed, access times across the address space are non-uniform; hence this architecture is called Non-Uniform Memory Access (NUMA). Similarly to SMP, &#039;&#039;&#039;NUMA&#039;&#039;&#039; systems are typically used for applications such as data mining and decision support systems, in which processing can be parceled out to a number of processors that collectively work on common data. HPCC operates the &#039;&#039;&#039;NUMA&#039;&#039;&#039; node of Arrow, named &#039;&#039;&#039;Appel&#039;&#039;&#039;. This node does not have a GPU. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039; Infrastructure systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
o	Master Head Node (&#039;&#039;&#039;MHN/Arrow&#039;&#039;&#039;) is a redundant login node from which all jobs on all servers start. This server is not directly accessible from outside the CSI campus. Note that the main server and its login nodes share the same name, Arrow; thus users can access the Arrow login nodes using the name Arrow or MHN.  &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Chizen&#039;&#039;&#039; is a redundant gateway server which provides access to the protected HPCC domain.&lt;br /&gt;
&lt;br /&gt;
o	 &#039;&#039;&#039;Cea&#039;&#039;&#039; is a file transfer node allowing transfer of files between users&#039; computers and the /scratch space or /global/u/&amp;lt;userid&amp;gt;. &#039;&#039;&#039;Cea&#039;&#039;&#039; is accessible directly (not only via &#039;&#039;&#039;Chizen&#039;&#039;&#039;), but allows only a limited set of shell commands; an illustrative transfer is sketched below.  &lt;br /&gt;
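&lt;br /&gt;
For illustration only, a transfer from a local workstation through &#039;&#039;&#039;Cea&#039;&#039;&#039; could look like the sketch below. The host name shown is a placeholder and must be replaced with the actual Cea address provided with your account.&lt;br /&gt;
&lt;br /&gt;
 # copy a local archive to your scratch area via the data transfer node (placeholder host name)&lt;br /&gt;
 scp input.tar.gz userid@cea.hostname.cuny.edu:/scratch/userid/&lt;br /&gt;
 &lt;br /&gt;
 # pull results from the home area back to the local machine&lt;br /&gt;
 scp -r userid@cea.hostname.cuny.edu:/global/u/userid/results ./&lt;br /&gt;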
&lt;br /&gt;
&#039;&#039;&#039;Table 1&#039;&#039;&#039; below provides a quick summary of the attributes of each of the sub-clusters of the HPC Center&#039;s main cluster, called Arrow.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Master Head Node&lt;br /&gt;
!Sub System&lt;br /&gt;
!Tier&lt;br /&gt;
!Type&lt;br /&gt;
!Type of Jobs&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores&lt;br /&gt;
!GPUs&lt;br /&gt;
!Mem/node&lt;br /&gt;
!Mem/core&lt;br /&gt;
!Chip Type&lt;br /&gt;
!GPU Type and Interface&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;17&amp;quot; |&#039;&#039;&#039;&amp;lt;big&amp;gt;Arrow&amp;lt;/big&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Penzias&lt;br /&gt;
| rowspan=&amp;quot;10&amp;quot; |Advanced&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Hybrid Cluster&lt;br /&gt;
|Sequential &amp;amp; Parallel jobs w/wo GPU&lt;br /&gt;
|66&lt;br /&gt;
|16 &lt;br /&gt;
|2&lt;br /&gt;
|64 GB&lt;br /&gt;
|4 GB&lt;br /&gt;
|SB, EP 2.20 GHz&lt;br /&gt;
|K20m GPU, PCIe v2&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Sequential &amp;amp; Parallel jobs&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |1&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|1500 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|36&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|32 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Appel&lt;br /&gt;
|NUMA&lt;br /&gt;
|Massive Parallel, sequential, OpenMP&lt;br /&gt;
|1&lt;br /&gt;
|384&lt;br /&gt;
| -&lt;br /&gt;
|11 TB&lt;br /&gt;
|28 GB&lt;br /&gt;
|IB, 3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Cryo&lt;br /&gt;
|SMP&lt;br /&gt;
|Sequential and Parallel jobs, with GPU&lt;br /&gt;
|1&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|1500 GB&lt;br /&gt;
|37 GB&lt;br /&gt;
|SL, 2.40 GHz&lt;br /&gt;
|V100 (32GB) GPU, SXM&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Blue Moon&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Hybrid Cluster&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Sequential and Parallel jobs w/wo GPU&lt;br /&gt;
|24&lt;br /&gt;
|32&lt;br /&gt;
| -&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |192 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |6 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SL, 2.10 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|32 &lt;br /&gt;
|2&lt;br /&gt;
|V100(16GB) GPU, PCIe&lt;br /&gt;
|-&lt;br /&gt;
|Karle&lt;br /&gt;
|SMP&lt;br /&gt;
|Visualization, MATLAB/Mathematica&lt;br /&gt;
|1&lt;br /&gt;
|36*&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Chizen&lt;br /&gt;
|Gateway&lt;br /&gt;
|No jobs allowed&lt;br /&gt;
| colspan=&amp;quot;7&amp;quot; | -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CFD&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot; |Parallel, Seq, OpenMP &lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|768 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 4.8 GHz&lt;br /&gt;
|A40, PCIe, v4 &lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |PHYS&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|640 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4 GHz&lt;br /&gt;
|L40, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CHEM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.0 GHz&lt;br /&gt;
|A100/40, SXM&lt;br /&gt;
|-&lt;br /&gt;
|ASRC&lt;br /&gt;
|Condo&lt;br /&gt;
|SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|}&lt;br /&gt;
Note: SB = Intel(R) Sandy Bridge, HL = Intel(R) Haswell, IB = Intel(R) Ivy Bridge, SL = Intel(R) Xeon(R) Gold, ER = AMD(R) EPYC ROME, EM = AMD(R) EPYC MILAN, EG = AMD(R) EPYC GENOA   &lt;br /&gt;
&lt;br /&gt;
= Recovery of  operational costs =&lt;br /&gt;
CUNY-HPCC is a not-for-profit core research facility at CUNY. Our mission is to support all types of research that require advanced computational resources. CUNY-HPCC operations are not for profit and are NOT directly or indirectly sponsored by CUNY or the College of Staten Island (CSI). Consequently, CUNY-HPCC applies a cost recovery model recapturing only &#039;&#039;&#039;&amp;lt;u&amp;gt;operational costs, with no profit for HPCC&amp;lt;/u&amp;gt;&#039;&#039;&#039;. The recovered costs are calculated using actual documented operational expenses and are break-even for all CUNY users. The methodology is approved by CUNY-RF and is the same methodology used in other CUNY research facilities. The costs are reviewed, and consequently updated, twice a year. The cost recovery charging scheme is based on the &#039;&#039;&#039;&amp;lt;u&amp;gt;unit-hour&amp;lt;/u&amp;gt;&#039;&#039;&#039;. The unit can be either a CPU unit or a GPU unit. The definitions are given in the table below:&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Definitions of unit-hour&lt;br /&gt;
!Type of resource&lt;br /&gt;
!Unit-hour &lt;br /&gt;
!For V100, A30, A40 or L40&lt;br /&gt;
!For A100&lt;br /&gt;
|-&lt;br /&gt;
|CPU unit &lt;br /&gt;
|1 cpu core/hour&lt;br /&gt;
| --&lt;br /&gt;
| --&lt;br /&gt;
|-&lt;br /&gt;
|GPU unit &lt;br /&gt;
|(4 cpu cores + 1 GPU thread )/hour&lt;br /&gt;
|4 cpu cores + 1 GPU&lt;br /&gt;
|4 cpu cores and 1/7 A100 &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Compute on public resources ===&lt;br /&gt;
The cost recovery model for public (non-condominium) servers offers the following options:&lt;br /&gt;
&lt;br /&gt;
# Minimal Access Plan (MAP)&lt;br /&gt;
# Compute on Demand (CODP)&lt;br /&gt;
# Lease a node(s)&lt;br /&gt;
&lt;br /&gt;
==== Minimal Access Plan (MAP) ====&lt;br /&gt;
Colleges may participate in any of the Minimal Access Plan tiers. The tiered pricing structure is as follows:&lt;br /&gt;
&lt;br /&gt;
- Tier A: $5,000 per year&lt;br /&gt;
&lt;br /&gt;
- Tier B: $10,000 per year&lt;br /&gt;
&lt;br /&gt;
- Tier C: $25,000 per year&lt;br /&gt;
&lt;br /&gt;
Within each tier, the cost is &#039;&#039;&#039;$0.015 per CPU hour and $0.09 per GPU hour.&#039;&#039;&#039; It is important to note that the Minimum Access Plan (MAP) tiers A, B, and C are &#039;&#039;&#039;not all-inclusive.&#039;&#039;&#039; Therefore, even if a college pays for a higher tier, this does not guarantee unlimited use by all of the college&#039;s employees, faculty, and students throughout the year.&lt;br /&gt;
&lt;br /&gt;
# The MAP plan is indirectly linked to the number of hours used. Consequently, the definition of “up to 12 users for B tier” should not be interpreted as “all users up to 12 per college receive unlimited access to HPCC resources.”&lt;br /&gt;
# The number of users per tier is determined by statistical analysis of resource usage and of the average duration of a job across all CUNY institutions. This means that if usage by a college&#039;s users exceeds the number of hours covered by the MAP fee, the additional hours will be charged at the preferred rate of $0.015 per CPU hour and $0.09 per GPU thread hour.&lt;br /&gt;
# Furthermore, the &amp;lt;u&amp;gt;MAP fee may fully cover the expenses for individuals from a given college for a year,&amp;lt;/u&amp;gt; but it may also not. This depends on the actual usage and type of resources, as well as the number of additional individuals from the same college who appear during the year. It is crucial to understand that allocation on HPCC is not restricted; our focus is solely on the completion of tasks. &lt;br /&gt;
# The free 11,520 CPU hours and 1,440 GPU hours are allocated per PI account and project and are available only under MAP-B and MAP-C (see below). These free hours are intended to facilitate project establishment; the PI can therefore ask a member or members of the group to use the time and explore new research opportunities. For instance, upon creating an account the PI receives the free hours; if the PI then hires graduate students, they may share these free hours as long as they work on the same project.&lt;br /&gt;
# It is important to note that each GPU requires at least four CPU cores to operate. Therefore, a request for one GPU thread equates to one GPU thread plus four CPU threads, which is equivalent to $0.15 per unit-hour for that unit (units are explained above). Note that not all GPUs support virtualization, so a unit may include the whole GPU, depending on the GPU type used.  &lt;br /&gt;
&lt;br /&gt;
==== Compute on demand (CODP) ====&lt;br /&gt;
Users from colleges that do not participate in MAP (A,B,C) are charged &#039;&#039;&#039;$0.018 per CPU hour and $0.11 per GPU hour.&#039;&#039;&#039; There is no free time associated with CODP.  &lt;br /&gt;
&lt;br /&gt;
==== Lease a public node(s) ====&lt;br /&gt;
Users may lease node(s) for a project. This ensures 100% access and no time or job limitations on the leased resource. The minimum lease time is 30 days (one month). Longer leases (more than 90 days) receive a 10% discount. &#039;&#039;&#039;MAP users&#039;&#039;&#039; are charged between $172 and $950 per month (see below), depending on the type of node. &#039;&#039;&#039;Non-MAP&#039;&#039;&#039; users are charged between $249.58 and $1,399 per month, depending on the type of node. Please see below for details and examples. &lt;br /&gt;
&lt;br /&gt;
=== Condo ===&lt;br /&gt;
Condo users only pay for infrastructure support.  The annual fee depends on the type of node and ranges from $1,540 to $4,520 per year. Please see table below for details. &lt;br /&gt;
&lt;br /&gt;
=== Storage ===&lt;br /&gt;
Storage costs are $60 per TB per year, backup costs are $45 per TB per year, and archive costs are $35 per TB per year. The first 50 GB of scratch storage are free. Prices are calculated at the end of each month.&lt;br /&gt;
&lt;br /&gt;
# Additionally, note that &#039;&#039;&#039;file transfers from/to HPCC are free&#039;&#039;&#039;, but it is important to consider the CUNY network speed, which is significantly slower than modern standards; this will impact the time required to download large data sets. For large data sets HPCC uses, and recommends using, secure parallel transfer via Globus.&lt;br /&gt;
# All services associated with data and storage provided by HPCC are free for CUNY users.&lt;br /&gt;
&lt;br /&gt;
=== HPCC access plans details and examples  ===&lt;br /&gt;
a.     &#039;&#039;&#039;Minimum access (MAP):&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Minimum access is designed to provide wide support for research activities in any college, to promote collaboration between colleges, to help establish new research projects, and/or to serve as a testbed for new studies. MAP accounts operate under a strict fair share policy, so the actual waiting time of a job in a queue depends on the resources used by that account in previous cycles. In addition, all jobs have strict time limitations, therefore long jobs must use checkpoints.&lt;br /&gt;
&lt;br /&gt;
The MAP has 3 tiers:&lt;br /&gt;
&lt;br /&gt;
·     A: Basic tier fee is $5000 per year. It is designed to provide support for users from colleges with low level of research activities. The fee covers infrastructure expenses associated with 1-2 users from these colleges. &lt;br /&gt;
&lt;br /&gt;
·     B:  Medium tier fee is $15,000 per year. The fee covers infrastructure expenses of up to 12 users from these colleges. In addition, every account under medium tier gets free 11520 CPU hours and free 1440 GPU hours upon opening. &lt;br /&gt;
&lt;br /&gt;
·      C: Advanced tier is $25,000 per year. The fee covers infrastructure expenses for all users from these colleges. In addition every new account from this tier gets free 11520 CPU hours and free 1440 GPU hours upon opening.  &lt;br /&gt;
&lt;br /&gt;
The MAP users get charged per CPU/GPU hour at low rate of &#039;&#039;&#039;&amp;lt;u&amp;gt;$0.015 per cpu hour and $0.09 per GPU hour&amp;lt;/u&amp;gt;.&#039;&#039;&#039;   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for MAP users example&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.015/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.24/hour&lt;br /&gt;
|-&lt;br /&gt;
| 4 cores + 1 GPU &lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.15/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.33/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2  &lt;br /&gt;
|$0.42/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2  &lt;br /&gt;
|$0.66/hour&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$1.32/hour&lt;br /&gt;
|}&lt;br /&gt;
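&lt;br /&gt;
The per-hour figures above follow a simple additive pattern: each requested CPU core is billed at the CPU rate and each requested GPU at the GPU rate. The one-line sketch below assumes this additive formula, which reproduces the values in the table above.&lt;br /&gt;
&lt;br /&gt;
 # MAP rates: 0.015 dollars per CPU-core hour plus 0.09 dollars per GPU hour&lt;br /&gt;
 awk -v cores=16 -v gpus=1 &#039;BEGIN { print cores*0.015 + gpus*0.09 }&#039;&lt;br /&gt;
 # prints 0.33, matching the 16 cores + 1 GPU row above ($0.33/hour)&lt;br /&gt;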
&lt;br /&gt;
&lt;br /&gt;
b.     &#039;&#039;&#039;Computing on demand (CODP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The Computing on Demand plan (CODP) is open to all users from CUNY colleges that do not participate in the MAP plan but want to use the HPCC resources. CODP accounts operate under a strict fair share policy, so the actual waiting time of a job in a queue depends on the resources previously used. In addition, all jobs have time limitations, so long jobs must use checkpoints. Users in CODP are charged for the time (CPU and GPU) per hour; the current rates are &#039;&#039;&#039;$0.018 per CPU hour and $0.11 per GPU hour.&#039;&#039;&#039; Unlike MAP, new CODP accounts do not come with free time. Invoices are generated and sent to users (PI only) at the end of each month. The examples in the following table explain the fee structure:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for CODP plan&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.018/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.288/hour&lt;br /&gt;
|-&lt;br /&gt;
|4 cores + 1 GPU&lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.293/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.334/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 1 GPU&lt;br /&gt;
|32&lt;br /&gt;
|1  &lt;br /&gt;
|$0.666/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU &lt;br /&gt;
|32&lt;br /&gt;
|2&lt;br /&gt;
|$0.756/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
c.  &#039;&#039;&#039;Leasing public node(s) (LNP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The lease node plan allows users to lease node(s) for the duration of a project. The minimum lease time is 30 days (one month), but leases of any length are possible. Discounts of 10% are given to users whose lease is longer than 90 days; discounts cannot be combined. Unlike MAP and CODP users, LNP users do not compete for resources and have full access to the leased resources 24/7.   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for lease node(s) for MAP users&lt;br /&gt;
|Job (MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/30 days&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$172.80&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$264.96&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2&lt;br /&gt;
|$302.40&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$475.20&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$760.0&lt;br /&gt;
|-&lt;br /&gt;
|64 cores + 8 GPU&lt;br /&gt;
|64&lt;br /&gt;
|8&lt;br /&gt;
|$950.40&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for lease a node(s) for &amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; non &amp;lt;/span&amp;gt;-MAP users&lt;br /&gt;
|Job (non-MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/month&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$249.82&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$497.64&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2 &lt;br /&gt;
|$443.23&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$886.64&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40 &lt;br /&gt;
|8&lt;br /&gt;
|$1399.68&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
d.     &#039;&#039;&#039;Condo Ownership (COP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Condo describes a model in which user(s) own a node/server managed by HPCC. Only full-time faculty can own a condo node. Condo nodes are fully integrated into the HPCC infrastructure. The owners pay only HPCC&#039;s infrastructure support operational fee, which includes only a proportional part of the licenses and materials needed for day-to-day operations. The fees are reviewed twice a year and are currently $0.003 per CPU hour and $0.02 per GPU hour. Condo owners can &amp;quot;borrow&amp;quot; (upon agreement), free of charge, any node(s) from the condo stack and can also lease (for a higher fee – see below) their own nodes to non-condo users. The minimum lease time is 30 days. The fees collected from non-condo users offset the payments of the owner.   &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Condo owners costs per year&lt;br /&gt;
|Type of condo node&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/year&lt;br /&gt;
|-&lt;br /&gt;
|Large hybrid SXM&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|$4518.92 &lt;br /&gt;
|-&lt;br /&gt;
|Small hybrid&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|$1540.54&lt;br /&gt;
|-&lt;br /&gt;
|Medium compute&lt;br /&gt;
|96&lt;br /&gt;
|0&lt;br /&gt;
|$2464.86&lt;br /&gt;
|-&lt;br /&gt;
|Large compute&lt;br /&gt;
|128&lt;br /&gt;
|0&lt;br /&gt;
|$3286.49&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Condo owners can lease their node(s) to other non-condo users. The renting period is unlimited, with a minimum length of 30 days. The table below shows the payments that non-condo users make to the condo owners. These fees are accumulated in the owner&#039;s account(s) and offset the owner&#039;s dues. A discount of 10% is applied for leases longer than 90 days.    &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Type of nodes and monthly lease fees for condo nodes&lt;br /&gt;
!Type of node&lt;br /&gt;
!Renters cost/month&lt;br /&gt;
!Long term (90+ days) rent cost/month&lt;br /&gt;
!CPU/node&lt;br /&gt;
!CPU type&lt;br /&gt;
!GPU/node &lt;br /&gt;
!GPU type&lt;br /&gt;
!GPU interface&lt;br /&gt;
|-&lt;br /&gt;
|Large Hybrid&lt;br /&gt;
|$602.52&lt;br /&gt;
|$564.86&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.2 GHz&lt;br /&gt;
|8&lt;br /&gt;
|A100/80&lt;br /&gt;
|SXM&lt;br /&gt;
|-&lt;br /&gt;
|Small Hybrid&lt;br /&gt;
|$205.41&lt;br /&gt;
|$192.57&lt;br /&gt;
|48&lt;br /&gt;
|EPYC, 2.8 GHz&lt;br /&gt;
|2&lt;br /&gt;
|A40, A30, L40&lt;br /&gt;
|PCIe v4&lt;br /&gt;
|-&lt;br /&gt;
|Medium Non GPU&lt;br /&gt;
|$328.65&lt;br /&gt;
|$308.11&lt;br /&gt;
|96&lt;br /&gt;
|EPYC, 4.11GHz&lt;br /&gt;
|0&lt;br /&gt;
|None&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|Large Non GPU&lt;br /&gt;
|$438.20&lt;br /&gt;
|$410.81&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.0 GHz&lt;br /&gt;
|0&lt;br /&gt;
|None &lt;br /&gt;
|NA&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Free time ===&lt;br /&gt;
Any new project &#039;&#039;&#039;from CUNY colleges/centers that participate in MAP-B or MAP-C&#039;&#039;&#039; is entitled to &#039;&#039;&#039;11520 free CPU hours and 1440 free GPU hours.&#039;&#039;&#039; Users under &#039;&#039;&#039;MAP-A&#039;&#039;&#039; are not entitled to free time. The free compute hours are intended to help establish a project and are therefore shared by all members of the project: they can be used by the PI or by any number of the project&#039;s members. It is important to note that &amp;lt;u&amp;gt;free time is granted per project, not per user account, so any project can receive free time only once. External collaborators to CUNY are not normally eligible for free time.&amp;lt;/u&amp;gt; Hours used beyond the free time are charged at MAP plan rates. &#039;&#039;&#039;&amp;lt;u&amp;gt;Please contact the CUNY-HPCC director for further details.&amp;lt;/u&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Support for research grants =&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;u&amp;gt;All proposals dated January 1st, 2026 (&amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; 01/01/26 &amp;lt;/span&amp;gt;) or later&amp;lt;/u&amp;gt;&#039;&#039;&#039; that require computational resources &#039;&#039;&#039;&amp;lt;u&amp;gt;must include a budget for cost recovery fees at CUNY-HPCC.&amp;lt;/u&amp;gt;&#039;&#039;&#039; For a project the PI can choose among the following options: &lt;br /&gt;
&lt;br /&gt;
* Lease node(s). This option is useful for well-defined projects and for those with a high computational component that requires 100% availability of the computational resource. &lt;br /&gt;
* Use &amp;quot;on-demand&amp;quot; resources. This flexible option is good for experimental projects or for exploring new areas of study. The drawback is that resources are shared among all users under the fair share policy, so immediate access to a resource cannot be guaranteed. &lt;br /&gt;
* Participate in the CONDO tier. This is the most beneficial option in terms of resource availability and level of support. It best fits the focused research of a group or groups (e.g. materials science). &lt;br /&gt;
&lt;br /&gt;
In all cases the PI can use the appropriate rates listed above to establish a correct budget for the proposal. The PI should &#039;&#039;&#039;&amp;lt;u&amp;gt;contact the Director of CUNY-HPCC, Dr. Alexander Tzanov&amp;lt;/u&amp;gt;&#039;&#039;&#039; (alexander.tzanov@csi.cuny.edu) to discuss the project&#039;s computational requirements, including optimal and most economical computational workflows, suitable hardware, shared or dedicated resources, CUNY-HPCC support options, and any other matter concerning a correct and optimal computational budget for the proposal. &lt;br /&gt;
&lt;br /&gt;
= Partitions and jobs =&lt;br /&gt;
The only way to submit job(s) to HPCC servers is through the SLURM batch system. Every job, regardless of its type (interactive, batch, serial, parallel, etc.), must be submitted via SLURM, which allocates the requested resources on the proper server and starts the job(s) according to a predefined strict fair share policy. Computational resources (CPU cores, memory, GPUs) are organized in &#039;&#039;&#039;partitions&#039;&#039;&#039;. Users are granted permission to use one or another partition and the corresponding QOS key. The table below shows the partitions and their limitations (in progress).&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Partition&lt;br /&gt;
!Max cores/job&lt;br /&gt;
!Max jobs/user&lt;br /&gt;
!Total cores/group&lt;br /&gt;
!Time limits&lt;br /&gt;
!Tier&lt;br /&gt;
!&lt;br /&gt;
!GPU types&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|partnsf&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|K20m, V100/16, A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partchem&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A100/80, A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partcfd&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partsym&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partasrc&lt;br /&gt;
|48&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabD&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|V100/16,A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabN&lt;br /&gt;
|384&lt;br /&gt;
|50&lt;br /&gt;
|384&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|None&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partphys&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|L40&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;partnsf&#039;&#039;&#039; is the main partition with assigned resources across all sub-servers. Users may submit sequential, thread parallel or distributed parallel jobs with or without GPU.&lt;br /&gt;
* &#039;&#039;&#039;partchem&#039;&#039;&#039; is a CONDO partition. &lt;br /&gt;
* &#039;&#039;&#039;partphys&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partsym&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partasrc&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partmatlabD&#039;&#039;&#039; allows running MATLAB Parallel Server (distributed) jobs across the main cluster. &lt;br /&gt;
* &#039;&#039;&#039;partmatlabN&#039;&#039;&#039; provides access to the large MATLAB node with 384 cores and 11 TB of shared memory. It is useful for running parallel MATLAB jobs with the Parallel Computing Toolbox.&lt;br /&gt;
* &#039;&#039;&#039;partdev&#039;&#039;&#039; is dedicated to development. All HPCC users have access to this partition, with assigned resources of one computational node with 16 cores, 64 GB of memory and 2 GPUs (K20m). This partition has a time limit of 4 hours.&lt;br /&gt;
&lt;br /&gt;
= Hours of Operation =&lt;br /&gt;
In order to maximize the use of resources, HPCC applies a “rolling” maintenance scheme across all systems. When downtime is needed, HPCC will notify all users a week or more in advance (unless an emergency occurs). The morning of the fourth Tuesday of each month, from 8:00 AM to 12:00 PM, is normally reserved (but not always used) for scheduled maintenance; please plan accordingly. Unplanned maintenance to remedy system-related problems may be scheduled as needed outside of the above-mentioned window. Reasonable attempts will be made to inform users running on the affected systems when such needs arise. Note that users are strongly encouraged to use checkpoints in their jobs.&lt;br /&gt;
&lt;br /&gt;
= User Support =&lt;br /&gt;
Users are strongly encouraged to read this Wiki carefully before submitting ticket(s) for help. In particular, the sections on compiling and running parallel programs, and the section on the SLURM batch queueing system will give you the essential knowledge needed to use the CUNY HPCC systems.  We have strived to maintain the most uniform user applications environment possible across the Center&#039;s systems to ease the transfer of applications&lt;br /&gt;
and run scripts among them.  &lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY&lt;br /&gt;
community in parallel programming techniques, HPC computing architecture, and the essentials of using our&lt;br /&gt;
systems. Please follow our mailings on the subject and feel free to inquire about such courses. We regularly schedule training visits and classes at the various CUNY campuses; please let us know if such a training visit is of interest. In the past, topics have included an overview of parallel programming, GPU programming and architecture, using the evolutionary biology software at the HPC Center, the SLURM queueing system at the CUNY HPC Center, mixed GPU-MPI and OpenMP programming, etc. Staff have also presented guest lectures in formal classes throughout the CUNY campuses.&lt;br /&gt;
&lt;br /&gt;
If you have problems accessing your account and cannot login to the ticketing service, please send an email to:&lt;br /&gt;
&lt;br /&gt;
  [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu] &lt;br /&gt;
&lt;br /&gt;
= Warnings and modes of operation =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. hpchelp@csi.cuny.edu is for questions and account-help communication &#039;&#039;&#039;only&#039;&#039;&#039; and does not accept tickets unless the ticketing system is not operational. For tickets, please use the ticketing system mentioned above. This ensures that the staff member with the most appropriate skill set and job-related responsibility will respond to your questions. During the business week you should expect a response within 48 hours, often even the same day. During the weekend you may not get any response. &lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;&#039;E-mails to hpchelp@csi.cuny.edu must have a valid CUNY e-mail as the reply address.&#039;&#039;&#039; Messages originating from public mailers (Google, Hotmail, etc.) are filtered out.&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;&#039;Do not send questions to individual CUNY HPC Center staff members directly.&#039;&#039;&#039;  These will be returned to the sender with a polite request to submit a ticket or email the Helpline.  This applies to replies to initial questions as well.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff is focused on providing high-quality support to its user community, but compared to other HPC centers of similar size &#039;&#039;&#039;our staff is extremely lean&#039;&#039;&#039;. Please make full use of the tools that we have provided (especially the Wiki), and feel free to offer suggestions for improved service. We hope and expect your experience in using our systems will be predictably good and productive.&lt;br /&gt;
&lt;br /&gt;
= User Manual =&lt;br /&gt;
The old version of the user manual provides PBS, not SLURM, batch scripts as examples. CUNY-HPCC currently uses the SLURM scheduler, so users must check and use only the updated brief SLURM manual distributed with new accounts, or ask CUNY-HPCC for a copy of it.&lt;/div&gt;</summary>
		<author><name>Alex</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=997</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=997"/>
		<updated>2026-04-15T18:48:40Z</updated>

		<summary type="html">&lt;p&gt;Alex: /* Mission of CUNY-HPCC */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
[[Image:hpcc-panorama3.png]]&lt;br /&gt;
&lt;br /&gt;
== Mission of CUNY-HPCC ==&lt;br /&gt;
The City University of New York (CUNY) High Performance Computing Center (HPCC) serves as a pivotal research and educational hub for the university. Situated on the campus of the College of Staten Island, located at 2800 Victory Boulevard, Staten Island, New York 10314, the center’s primary objective is to enhance educational opportunities and foster scientific research and discovery within the university. This is achieved through the management of state-of-the-art computing infrastructure and the provision of comprehensive research support services. Notably, CUNY-HPCC offers domain-specific expertise in various aspects of computationally intensive research. Furthermore, CUNY’s membership in the Empire AI (EAI) consortium positions CUNY-HPCC as a stepping stone for CUNY researchers seeking access to EAI advanced facilities.   &lt;br /&gt;
&lt;br /&gt;
== Empire AI and CUNY-HPCC ==&lt;br /&gt;
The Empire AI Consortium comprises the &#039;&#039;&#039;CUNY Graduate Center, Columbia University, Cornell University, Icahn School of Medicine, New York University, Rochester Institute of Technology, Rensselaer Polytechnic Institute, the State University of New York, University at Buffalo, and University of Rochester.&#039;&#039;&#039; CUNY-HPCC provides support and maintains tickets for all CUNY users with allocation on EAI. Additionally, CUNY-HPCC serves as a stepping stone for CUNY researchers as it operates (on a smaller scale) architectures (including nodes with Hopper) similar to EAI, including extended “Alpha” servers and new “Beta” computers. The latter will consist of 288 B200 GPUs and recently added RTX 6000 Pro nodes. The anticipated cost for EAI is $0.50 per unit (SU), which will provide CUNY PIs with a rate that is significantly lower than a typical AWS rate. One SU corresponds to one hour of H100 compute, and one hour of B200 compute corresponds to two SU. In comparison, the CUNY-HPCC recovery costs for public servers are $0.015 per CPU hour (1 unit) and $0.09 per GPU hour (6 units). For further details, please refer to the section on HPCC access plans.     &lt;br /&gt;
&lt;br /&gt;
== CUNY-HPCC services ==&lt;br /&gt;
     &lt;br /&gt;
CUNY-HPCC offers a professionally maintained, modern computational environment and architectures, along with advanced storage and fast interconnects. CUNY-HPCC serves the following purposes:     &lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;*&amp;lt;/nowiki&amp;gt; Supports research computing at CUNY, benefiting faculty, their collaborators at other universities, and their public and private sector partners. It also supports CUNY students and research staff.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;*&amp;lt;/nowiki&amp;gt; Provides state-of-the-art computing resources and comprehensive research support services, including expertise and full support for users with allocation on EMPIRE-AI.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;*&amp;lt;/nowiki&amp;gt; Creates opportunities for the CUNY research community to establish new partnerships with the government and private sectors.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;*&amp;lt;/nowiki&amp;gt; Utilizes HPCC capabilities to acquire additional research resources for its faculty and graduate students in existing and major new programs.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;*&amp;lt;/nowiki&amp;gt; Maintains tickets for all CUNY users with allocation on EAI. &lt;br /&gt;
&lt;br /&gt;
== Organization of systems and data storage (architecture) ==&lt;br /&gt;
All user data and project data are kept on the Parallel File System Storage (PFSS), which is mounted only on the login node(s) of all servers. It holds both user directories and a specific partition called &#039;&#039;&#039;/scratch&#039;&#039;&#039; (see below). The main features of the &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition are&#039;&#039;&#039;: 1.&#039;&#039;&#039; it is mounted on all computational nodes and on all login nodes, &#039;&#039;&#039;2.&#039;&#039;&#039; it is fast, &#039;&#039;&#039;3.&#039;&#039;&#039; the &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition is temporary space; it is &#039;&#039;&#039;not a home directory&#039;&#039;&#039; for accounts and cannot be used for long-term data preservation. Users must use the &amp;quot;staging&amp;quot; procedure described below to ensure preservation of their data, codes and parameter files. The figure below is a schematic of the environment.&lt;br /&gt;
&lt;br /&gt;
Upon  registering with HPCC every user will get 2 directories:&lt;br /&gt;
&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – this is temporary workspace on the HPC systems. Currently scratch resides on the same file system as global/u.&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – space for “home directory”, i.e., storage space on the DSMS for program, scripts, and data&lt;br /&gt;
&lt;br /&gt;
:•	In some instances a user will also have use of disk space on the DSMS in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;projectid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; (IRods). &lt;br /&gt;
[[File:HPCC_structure.png|center|frameless|900x900px]]&lt;br /&gt;
The &#039;&#039;&#039;/global/u/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory has a quota (see below for details), while &#039;&#039;&#039;/scratch/&amp;lt;userid&amp;gt;&#039;&#039;&#039; does not. However, the &#039;&#039;&#039;/scratch&#039;&#039;&#039; space is cleaned up following the rules described below. There are no guarantees of any kind that files in &#039;&#039;&#039;/scratch&#039;&#039;&#039; will be preserved during hardware crashes or cleanup. Access to all HPCC resources is provided through a bastion host called &#039;&#039;&#039;Chizen&#039;&#039;&#039;. The Data Transfer Node, called &#039;&#039;&#039;Cea&#039;&#039;&#039;, allows file transfer from/to remote sites directly to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; or &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
==Computing architectures at HPCC==&lt;br /&gt;
&lt;br /&gt;
The HPC Center operates a variety of architectures in order to support complex and demanding workflows. All computational resources of different types are united into a single hybrid cluster called Arrow. The latter deploys symmetric multiprocessor (SMP) nodes with and without GPUs, a distributed shared memory (NUMA) node, fat (large memory) nodes, and advanced SMP nodes with multiple GPUs. The number of GPUs per node varies between 2 and 8, as do the GPU interface and the GPU family. Thus the basic GPU nodes hold two Tesla K20m cards (connected through a PCIe interface), while the most advanced ones support eight Ampere A100 GPUs connected via the SXM interface.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Overview of Computational architectures&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SMP&#039;&#039;&#039; servers have several processors (working under a single operating system) which &amp;quot;share everything&amp;quot;: all cpu-cores access a common memory block via a shared bus or data path. SMP servers support all combinations of memory vs. cpu (up to the limits of the particular computer). SMP servers are commonly used to run sequential or thread-parallel (e.g. OpenMP) jobs, and they may or may not have GPUs.&lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;cluster&#039;&#039;&#039; is a single system comprising a set of servers interconnected with a high performance network. Specific software coordinates programs on and across those servers in order to perform computationally intensive tasks. The most common cluster type consists of several identical SMP servers connected via a fast interconnect. Each SMP member of the cluster is called a &#039;&#039;&#039;node&#039;&#039;&#039;. All nodes run independent copies of the same operating system (OS). Some or all of the nodes may incorporate GPUs.&lt;br /&gt;
&lt;br /&gt;
Hybrid clusters combine nodes of different architectures. For instance, the main CUNY-HPCC machine is a hybrid cluster called &#039;&#039;&#039;Arrow&#039;&#039;&#039;. Sixty-two (62) of its nodes are identical GPU-enabled SMP servers, each with 2 x K20m GPUs; 3 are SMP servers with extended memory (fat nodes); one node is a distributed shared memory node (NUMA, see below); and 2 are fat SMP servers especially designed to support 8 NVIDIA GPUs per node, connected via the SXM interface. In addition, HPCC operates the cluster &#039;&#039;&#039;Herbert&#039;&#039;&#039;, dedicated only to education.&lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;distributed shared memory&#039;&#039;&#039; computer is a tightly coupled server in which the memory is physically distributed but logically unified as a single block. The system resembles an SMP, but the possible number of cpu cores and amount of memory are far beyond the limitations of an SMP. Because the memory is distributed, access times across the address space are non-uniform; hence this architecture is called Non-Uniform Memory Access (NUMA). Similarly to SMP, &#039;&#039;&#039;NUMA&#039;&#039;&#039; systems are typically used for applications such as data mining and decision support systems, in which processing can be parceled out to a number of processors that collectively work on common data. HPCC operates the &#039;&#039;&#039;NUMA&#039;&#039;&#039; node of Arrow, named &#039;&#039;&#039;Appel&#039;&#039;&#039;. This node does not have GPUs.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039; Infrastructure systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
o	Master Head Node (&#039;&#039;&#039;MHN/Arrow&#039;&#039;&#039;) is a redundant login node from which all jobs on all servers start. This server is not directly accessible from outside the CSI campus. Note that the main server and its login nodes share the same name, Arrow; thus users can access the Arrow login nodes using the name Arrow or MHN.&lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Chizen&#039;&#039;&#039; is a redundant gateway server which provides access to the protected HPCC domain.&lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Cea&#039;&#039;&#039; is a file transfer node allowing transfer of files between users&#039; computers and the /scratch space or /global/u/&amp;lt;userid&amp;gt; (see the example below). &#039;&#039;&#039;Cea&#039;&#039;&#039; is accessible directly (not only via &#039;&#039;&#039;Chizen&#039;&#039;&#039;), but allows only a limited set of shell commands.&lt;br /&gt;
&lt;br /&gt;
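As an illustration, a typical transfer between a local workstation and HPCC storage through &#039;&#039;&#039;Cea&#039;&#039;&#039; might look like the sketch below. The hostname, user name, and file names are placeholders, not confirmed values; use the address and credentials provided with your account.&lt;br /&gt;
&lt;br /&gt;
  # copy an input file from the local machine to /scratch on the HPCC (hostname is a placeholder)&lt;br /&gt;
  scp input.dat userid@cea.csi.cuny.edu:/scratch/userid/&lt;br /&gt;
  # copy results back from the /global/u home directory to the local machine&lt;br /&gt;
  scp userid@cea.csi.cuny.edu:/global/u/userid/results.tar.gz .&lt;br /&gt;
&lt;br /&gt;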
&#039;&#039;&#039;Table 1&#039;&#039;&#039; below provides a quick summary of the attributes of each of the sub-clusters of the main HPC Center machine, called Arrow.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Master Head Node&lt;br /&gt;
!Sub System&lt;br /&gt;
!Tier&lt;br /&gt;
!Type&lt;br /&gt;
!Type of Jobs&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores&lt;br /&gt;
!GPUs&lt;br /&gt;
!Mem/node&lt;br /&gt;
!Mem/core&lt;br /&gt;
!Chip Type&lt;br /&gt;
!GPU Type and Interface&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;17&amp;quot; |&#039;&#039;&#039;&amp;lt;big&amp;gt;Arrow&amp;lt;/big&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Penzias&lt;br /&gt;
| rowspan=&amp;quot;10&amp;quot; |Advanced&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Hybrid Cluster&lt;br /&gt;
|Sequential &amp;amp; Parallel jobs w/wo GPU&lt;br /&gt;
|66&lt;br /&gt;
|16 &lt;br /&gt;
|2&lt;br /&gt;
|64 GB&lt;br /&gt;
|4 GB&lt;br /&gt;
|SB, EP 2.20 GHz&lt;br /&gt;
|K20m GPU, PCIe v2&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Sequential &amp;amp; Parallel jobs&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |1&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|1500 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|36&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|32 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Appel&lt;br /&gt;
|NUMA&lt;br /&gt;
|Massive Parallel, sequential, OpenMP&lt;br /&gt;
|1&lt;br /&gt;
|384&lt;br /&gt;
| -&lt;br /&gt;
|11 TB&lt;br /&gt;
|28 GB&lt;br /&gt;
|IB, 3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Cryo&lt;br /&gt;
|SMP&lt;br /&gt;
|Sequential and Parallel jobs, with GPU&lt;br /&gt;
|1&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|1500 GB&lt;br /&gt;
|37 GB&lt;br /&gt;
|SL, 2.40 GHz&lt;br /&gt;
|V100 (32GB) GPU, SXM&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Blue Moon&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Hybrid Cluster&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Sequential and Parallel jobs w/wo GPU&lt;br /&gt;
|24&lt;br /&gt;
|32&lt;br /&gt;
| -&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |192 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |6 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SL, 2.10 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|32 &lt;br /&gt;
|2&lt;br /&gt;
|V100(16GB) GPU, PCIe&lt;br /&gt;
|-&lt;br /&gt;
|Karle&lt;br /&gt;
|SMP&lt;br /&gt;
|Visualization, MATLAB/Mathematica&lt;br /&gt;
|1&lt;br /&gt;
|36*&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Chizen&lt;br /&gt;
|Gateway&lt;br /&gt;
|No jobs allowed&lt;br /&gt;
| colspan=&amp;quot;7&amp;quot; | -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CFD&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot; |Parallel, Seq, OpenMP &lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|768 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 4.8 GHz&lt;br /&gt;
|A40, PCIe, v4 &lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |PHYS&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|640 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4 GHz&lt;br /&gt;
|L40, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CHEM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.0 GHz&lt;br /&gt;
|A100/40, SXM&lt;br /&gt;
|-&lt;br /&gt;
|ASRC&lt;br /&gt;
|Condo&lt;br /&gt;
|SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|}&lt;br /&gt;
Note: SB = Intel(R) Sandy Bridge, HL = Intel(R) Haswell, IB = Intel(R) Ivy Bridge, SL = Intel(R) Xeon(R) Gold, ER = AMD(R) EPYC Rome, EM = AMD(R) EPYC Milan, EG = AMD(R) EPYC Genoa&lt;br /&gt;
&lt;br /&gt;
= Recovery of operational costs =&lt;br /&gt;
CUNY-HPCC is a not-for-profit core research facility at CUNY. Our mission is to support all types of research that require advanced computational resources. CUNY-HPCC operations are not for profit and are NOT directly or indirectly sponsored by CUNY or the College of Staten Island (CSI). Consequently, CUNY-HPCC applies a cost recovery model that recaptures only &#039;&#039;&#039;&amp;lt;u&amp;gt;operational costs with no profit for HPCC&amp;lt;/u&amp;gt;&#039;&#039;&#039;. The recovered costs are calculated from actual documented operational expenses and are break-even for all CUNY users. The methodology is approved by CUNY-RF and is the same methodology used in other CUNY research facilities. The costs are reviewed and updated twice a year. The cost recovery charging schema is based on the &#039;&#039;&#039;&amp;lt;u&amp;gt;unit-hour&amp;lt;/u&amp;gt;&#039;&#039;&#039;. A unit can be either a CPU unit or a GPU unit. The definitions are given in the table below:&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Definitions of unit-hour&lt;br /&gt;
!Type of resource&lt;br /&gt;
!Unit-hour &lt;br /&gt;
!For V100, A30, A40 or L40&lt;br /&gt;
!For A100&lt;br /&gt;
|-&lt;br /&gt;
|CPU unit &lt;br /&gt;
|1 cpu core/hour&lt;br /&gt;
| --&lt;br /&gt;
| --&lt;br /&gt;
|-&lt;br /&gt;
|GPU unit &lt;br /&gt;
|(4 cpu cores + 1 GPU thread )/hour&lt;br /&gt;
|4 cpu cores + 1 GPU&lt;br /&gt;
|4 cpu cores and 1/7 A100 &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Compute on public resources ===&lt;br /&gt;
The cost recovery model for public (non-condominium) servers offers the following options:&lt;br /&gt;
&lt;br /&gt;
# Minimal Access Plan (MAP)&lt;br /&gt;
# Compute on Demand (CODP)&lt;br /&gt;
# Lease a node(s)&lt;br /&gt;
&lt;br /&gt;
==== Minimal Access Plan (MAP) ====&lt;br /&gt;
Colleges may participate in any of the Minimal Access Plan tiers. The tiered pricing structure is as follows:&lt;br /&gt;
&lt;br /&gt;
- Tier A: $5,000 per year&lt;br /&gt;
&lt;br /&gt;
- Tier B: $10,000 per year&lt;br /&gt;
&lt;br /&gt;
- Tier C: $25,000 per year&lt;br /&gt;
&lt;br /&gt;
Within each tier, the cost is &#039;&#039;&#039;$0.015 per CPU hour and $0.09 per GPU hour.&#039;&#039;&#039; It is important to note that the Minimum Access Plan (MAP) tiers A, B, and C are &#039;&#039;&#039;not all-inclusive.&#039;&#039;&#039; Therefore, even if a college pays for a higher tier, this does not guarantee unlimited use by all of that college&#039;s employees, faculty, and students throughout the year.&lt;br /&gt;
&lt;br /&gt;
# The MAP plan is indirectly linked to the number of hours used. Consequently, the definition of “up to 12 users for B tier” should not be interpreted as “all users up to 12 per college receive unlimited access to HPCC resources.”&lt;br /&gt;
# The number of users per tier is determined by statistical analysis of resource usage and statistics for the average duration of a job across all CUNY institutions. This means that if the number of users from a college exceeds the number of hours encoded in the MAP fee, the additional hours will be charged at the preferred rate of $0.015 per CPU hour and $0.09 per GPU thread hour.&lt;br /&gt;
# Furthermore, the &amp;lt;u&amp;gt;MAP fee may fully cover the expenses for individuals from a given college for a year,&amp;lt;/u&amp;gt; but it may also not. This depends on the actual usage and type of resources, as well as the number of additional individuals from the same college who appear during the year. It is crucial to understand that allocation on HPCC is not restricted; our focus is solely on the completion of tasks. &lt;br /&gt;
# The free 11,520 CPU hours and 1,440 GPU hours are allocated per PI account and project, and are available only for MAP-B and MAP-C (see below). These free hours are intended to facilitate project establishment; the PI can therefore ask a member or members of the group to use this time to explore new research opportunities. For instance, upon creating an account, PI X will receive free hours; if the PI then hires graduate students, they may share these free hours as long as they work on the same project.&lt;br /&gt;
# It is important to note that each GPU requires at least four CPU cores to operate. Therefore, if a user requests one GPU thread, this equates to one GPU thread plus four CPU cores, which is equivalent to $0.15 per unit-hour for that unit (units are explained above). Note that not all GPUs support virtualization, so a unit may include the whole GPU, depending on the GPU type used.&lt;br /&gt;
&lt;br /&gt;
==== Compute on demand (CODP) ====&lt;br /&gt;
Users from colleges that do not participate in MAP (A,B,C) are charged &#039;&#039;&#039;$0.018 per CPU hour and $0.11 per GPU hour.&#039;&#039;&#039; There is no free time associated with CODP.  &lt;br /&gt;
&lt;br /&gt;
==== Lease a public node(s) ====&lt;br /&gt;
Users may lease node(s) for a project. This ensures 100% access and no time or job limitations on the leased resource. The minimum lease time is 30 days (one month). Longer leases (more than 90 days) receive a 10% discount. &#039;&#039;&#039;MAP users&#039;&#039;&#039; are charged between $172 and $950 per month (see below), depending on the type of node. &#039;&#039;&#039;Non-MAP&#039;&#039;&#039; users are charged between $249.58 and $1,399 per month, depending on the type of node. Please see below for details and examples.&lt;br /&gt;
&lt;br /&gt;
=== Condo ===&lt;br /&gt;
Condo users only pay for infrastructure support.  The annual fee depends on the type of node and ranges from $1,540 to $4,520 per year. Please see table below for details. &lt;br /&gt;
&lt;br /&gt;
=== Storage ===&lt;br /&gt;
Storage costs are $60 per TB per year, backup costs are $45 per TB per year, and archive costs are $35 per TB per year. The first 50 GB of scratch storage are free. Prices are calculated at the end of each month.&lt;br /&gt;
&lt;br /&gt;
# Additionally, note that &#039;&#039;&#039;file transfers from/to HPCC are free&#039;&#039;&#039;, but it is important to consider the CUNY network speed, which is significantly slower than modern standards; this will affect the time required to download large data sets. For large data sets, HPCC uses and recommends secure parallel transfer via Globus.&lt;br /&gt;
# All services associated with data and storage provided by HPCC are free for CUNY users.&lt;br /&gt;
&lt;br /&gt;
=== HPCC access plans details and examples  ===&lt;br /&gt;
a.     &#039;&#039;&#039;Minimum access (MAP):&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Minimum access is designed to provide wide support for research activities at any college, to promote collaboration between colleges, to help establish new research projects, and/or to serve as a testbed for new studies. MAP accounts operate under a strict fair-share policy, so the actual waiting time for a job in a queue depends on the resources used by that account in previous cycles. In addition, all jobs have strict time limitations; therefore long jobs must use checkpoints.&lt;br /&gt;
&lt;br /&gt;
The MAP has 3 tiers:&lt;br /&gt;
&lt;br /&gt;
·     A: The basic tier fee is $5,000 per year. It is designed to provide support for users from colleges with a low level of research activity. The fee covers infrastructure expenses associated with 1-2 users from these colleges.&lt;br /&gt;
&lt;br /&gt;
·     B: The medium tier fee is $15,000 per year. The fee covers infrastructure expenses for up to 12 users from these colleges. In addition, every account under the medium tier gets 11,520 free CPU hours and 1,440 free GPU hours upon opening.&lt;br /&gt;
&lt;br /&gt;
·     C: The advanced tier fee is $25,000 per year. The fee covers infrastructure expenses for all users from these colleges. In addition, every new account in this tier gets 11,520 free CPU hours and 1,440 free GPU hours upon opening.&lt;br /&gt;
&lt;br /&gt;
MAP users are charged per CPU/GPU hour at the low rate of &#039;&#039;&#039;&amp;lt;u&amp;gt;$0.015 per CPU hour and $0.09 per GPU hour&amp;lt;/u&amp;gt;.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for MAP users example&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.015/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.24/hour&lt;br /&gt;
|-&lt;br /&gt;
| 4 cores + 1 GPU &lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.15/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.33/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2  &lt;br /&gt;
|$0.42/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2  &lt;br /&gt;
|$0.66/hour&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$1.32/hour&lt;br /&gt;
|}&lt;br /&gt;
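&lt;br /&gt;
The per-hour figures in the table above are consistent with charging each CPU core at the MAP CPU rate and each GPU at the MAP GPU rate, i.e. cost per hour = (cores x $0.015) + (GPUs x $0.09). The sketch below illustrates that arithmetic only; it is not an official billing tool:&lt;br /&gt;
&lt;br /&gt;
  # estimate the MAP cost per hour of a job (rates taken from the MAP section above)&lt;br /&gt;
  CORES=16; GPUS=1&lt;br /&gt;
  awk -v c=$CORES -v g=$GPUS &#039;BEGIN { printf &amp;quot;$%.2f per hour\n&amp;quot;, c*0.015 + g*0.09 }&#039;&lt;br /&gt;
  # prints: $0.33 per hour, matching the &amp;quot;16 cores + 1 GPU&amp;quot; row above&lt;br /&gt;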
&lt;br /&gt;
&lt;br /&gt;
b.     &#039;&#039;&#039;Computing on demand (CODP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The Computing on Demand plan (CODP) is open to all users from CUNY colleges that do not participate in the MAP plan but want to use HPCC resources. CODP accounts operate under a strict fair-share policy, so the actual waiting time for a job in a queue depends on the resources previously used. In addition, all jobs have time limitations, so long jobs must use checkpoints. Users in CODP are charged for the time (CPU and GPU) per hour. The current rates are &#039;&#039;&#039;$0.018 per CPU hour and $0.11 per GPU hour.&#039;&#039;&#039; Unlike MAP, new CODP accounts do not come with free time. Invoices are generated and sent to users (PI only) at the end of each month. The examples in the following table explain the fee structure:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for CODP plan&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.018/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.288/hour&lt;br /&gt;
|-&lt;br /&gt;
|4 cores + 1 GPU&lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.293/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.334/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 1 GPU&lt;br /&gt;
|32&lt;br /&gt;
|1  &lt;br /&gt;
|$0.666/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU &lt;br /&gt;
|32&lt;br /&gt;
|2&lt;br /&gt;
|$0.756/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
c.  &#039;&#039;&#039;Leasing public node(s) (LNP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The lease-a-node plan allows users to lease node(s) for the duration of a project. The minimum lease time is 30 days (one month), but leases of any length are possible. A discount of 10% is given to users whose lease is longer than 90 days. Discounts cannot be combined. Unlike MAP and CODP users, LNP users do not compete for resources and have full access to the rented resources 24/7.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for lease node(s) for MAP users&lt;br /&gt;
|Job (MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/30 days&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$172.80&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$264.96&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2&lt;br /&gt;
|$302.40&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$475.20&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$760.00&lt;br /&gt;
|-&lt;br /&gt;
|64 cores + 8 GPU&lt;br /&gt;
|64&lt;br /&gt;
|8&lt;br /&gt;
|$950.40&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for lease a node(s) for &amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; non &amp;lt;/span&amp;gt;-MAP users&lt;br /&gt;
|Job (non-MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/month&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$249.82&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$497.64&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1&lt;br /&gt;
|$443.23&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$886.64&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40 &lt;br /&gt;
|8&lt;br /&gt;
|$1399.68&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
d.     &#039;&#039;&#039;Condo Ownership (COP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Condo describes a model in which user(s) own a node/server managed by HPCC. Only full-time faculty can own a condo node. Condo nodes are fully integrated into the HPCC infrastructure. The owners pay only HPCC&#039;s infrastructure support operational fee, which includes only a proportional part of the licenses and materials needed for day-to-day operations. The fees are reviewed twice a year and are currently $0.003 per CPU hour and $0.02 per GPU hour. Condo owners can “borrow” (upon agreement), free of charge, any node(s) from the condo stack and can also lease (for a higher fee; see below) their own nodes to non-condo users. The minimum lease time is 30 days. The fees collected from non-condo users offset the payments of the owner.&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Condo owners costs per year&lt;br /&gt;
|Type of condo node&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/year&lt;br /&gt;
|-&lt;br /&gt;
|Large hybrid SXM&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|$4518.92 &lt;br /&gt;
|-&lt;br /&gt;
|Small hybrid&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|$1540.54&lt;br /&gt;
|-&lt;br /&gt;
|Medium compute&lt;br /&gt;
|96&lt;br /&gt;
|0&lt;br /&gt;
|$2464.86&lt;br /&gt;
|-&lt;br /&gt;
|Large compute&lt;br /&gt;
|128&lt;br /&gt;
|0&lt;br /&gt;
|$3286.49&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Condo owners can lease their node(s) to other non-condo users. The rental period is unlimited, with a minimum length of 30 days. The table below shows the payments that non-condo users make to the condo owners. These fees accumulate in the owner&#039;s account(s) and offset the owner&#039;s fees. A discount of 10% is applied for leases longer than 90 days.&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Type of nodes and monthly lease fees for condo nodes&lt;br /&gt;
!Type of node&lt;br /&gt;
!Renters cost/month&lt;br /&gt;
!Long term (90+ days) rent cost/month&lt;br /&gt;
!CPU/node&lt;br /&gt;
!CPU type&lt;br /&gt;
!GPU/node &lt;br /&gt;
!GPU type&lt;br /&gt;
!GPU interface&lt;br /&gt;
|-&lt;br /&gt;
|Large Hybrid&lt;br /&gt;
|$602.52&lt;br /&gt;
|$564.86&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.2 GHz&lt;br /&gt;
|8&lt;br /&gt;
|A100/80&lt;br /&gt;
|SXM&lt;br /&gt;
|-&lt;br /&gt;
|Small Hybrid&lt;br /&gt;
|$205.41&lt;br /&gt;
|$192.57&lt;br /&gt;
|48&lt;br /&gt;
|EPYC, 2.8 GHz&lt;br /&gt;
|2&lt;br /&gt;
|A40, A30, L40&lt;br /&gt;
|PCIe v4&lt;br /&gt;
|-&lt;br /&gt;
|Medium Non GPU&lt;br /&gt;
|$328.65&lt;br /&gt;
|$308.11&lt;br /&gt;
|96&lt;br /&gt;
|EPYC, 4.11GHz&lt;br /&gt;
|0&lt;br /&gt;
|None&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|Large Non GPU&lt;br /&gt;
|$438.20&lt;br /&gt;
|$410.81&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.0 GHz&lt;br /&gt;
|0&lt;br /&gt;
|None &lt;br /&gt;
|NA&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Free time ===&lt;br /&gt;
Any new project &#039;&#039;&#039;from CUNY colleges/centers that participate in MAP-B or MAP-C&#039;&#039;&#039; is entitled to &#039;&#039;&#039;11,520 free CPU hours and 1,440 free GPU hours.&#039;&#039;&#039; Users under &#039;&#039;&#039;MAP-A&#039;&#039;&#039; are not entitled to free time. The free compute hours are intended to help establish a project and are therefore shared by all members of the project: they can be used either by the PI or by any number of the project&#039;s members. It is important to note that &amp;lt;u&amp;gt;free time is per project, not per user account, so any project can have free time only once. External collaborators to CUNY are not normally eligible for free time.&amp;lt;/u&amp;gt; Additional hours beyond the free time are charged at MAP plan rates. &#039;&#039;&#039;&amp;lt;u&amp;gt;Please contact the CUNY-HPCC director for further details.&amp;lt;/u&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Support for research grants =&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;u&amp;gt;All proposals dated on Jan 1st 2026 (&amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; 01/01/26 &amp;lt;/span&amp;gt;) and later&amp;lt;/u&amp;gt;&#039;&#039;&#039; that require computational resources &#039;&#039;&#039;&amp;lt;u&amp;gt;must include budget for cost recovery fees at CUNY-HPCC.&amp;lt;/u&amp;gt;&#039;&#039;&#039;  For a project the PI can choose between: &lt;br /&gt;
&lt;br /&gt;
* Lease node(s). This is a useful option for well-defined projects and for those with a high computational component requiring 100% availability of the computational resource.&lt;br /&gt;
* Use &amp;quot;on-demand&amp;quot; resources. This is a flexible option, good for experimental projects or for exploring new areas of study. The drawback is that resources are shared among all users under the fair-share policy, so immediate access to a resource cannot be guaranteed.&lt;br /&gt;
* Participate in the CONDO tier. This is the most beneficial option in terms of availability of resources and level of support. It best fits the focused research of a group or groups (e.g. materials science).&lt;br /&gt;
&lt;br /&gt;
In all cases the PI can use the appropriate rates listed above to establish a correct budget for the proposal. The PI should &#039;&#039;&#039;&amp;lt;u&amp;gt;contact the Director of CUNY-HPCC, Dr. Alexander Tzanov&amp;lt;/u&amp;gt;&#039;&#039;&#039; (alexander.tzanov@csi.cuny.edu) to discuss the project&#039;s computational requirements, including optimal and most economical computational workflows, suitable hardware, shared or owned resources, CUNY-HPCC support options, and any other matter concerning a correct and optimal computational budget for the proposal.&lt;br /&gt;
&lt;br /&gt;
= Partitions and jobs =&lt;br /&gt;
The only way to submit job(s) to HPCC servers is through the SLURM batch system. Any job, regardless of its type (interactive, batch, serial, parallel, etc.), must be submitted via SLURM. SLURM allocates the requested resources on the proper server and starts the job(s) according to a predefined, strict fair-share policy. Computational resources (cpu-cores, memory, GPU) are organized in &#039;&#039;&#039;partitions&#039;&#039;&#039;. Users are granted permission to use one or another partition together with the corresponding QOS key. The table below shows the partitions and their limitations (in progress); a sample batch script is given after the partition list below.&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Partition&lt;br /&gt;
!Max cores/job&lt;br /&gt;
!Max jobs/user&lt;br /&gt;
!Total cores/group&lt;br /&gt;
!Time limits&lt;br /&gt;
!Tier&lt;br /&gt;
!&lt;br /&gt;
!GPU types&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|partnsf&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|K20m, V100/16, A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partchem&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A100/80, A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partcfd&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partsym&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partasrc&lt;br /&gt;
|48&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabD&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|V100/16,A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabN&lt;br /&gt;
|384&lt;br /&gt;
|50&lt;br /&gt;
|384&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|None&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partphys&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|L40&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;partnsf&#039;&#039;&#039; is the main partition with assigned resources across all sub-servers. Users may submit sequential, thread parallel or distributed parallel jobs with or without GPU.&lt;br /&gt;
* &#039;&#039;&#039;partchem&#039;&#039;&#039;  is CONDO partition.  &lt;br /&gt;
* &#039;&#039;&#039;partphys&#039;&#039;&#039;   is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partsym&#039;&#039;&#039;    is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partasrc&#039;&#039;&#039;   is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partmatlabD&#039;&#039;&#039; partition allows running MATLAB&#039;s Distributed Parallel Server across the main cluster.&lt;br /&gt;
* &#039;&#039;&#039;partmatlabN&#039;&#039;&#039; partition provides access to the large MATLAB node with 384 cores and 11 TB of shared memory. It is useful for running parallel MATLAB jobs with the Parallel Toolbox.&lt;br /&gt;
* &#039;&#039;&#039;partdev&#039;&#039;&#039; is dedicated to development. All HPCC users have access to this partition, with assigned resources of one computational node with 16 cores, 64 GB of memory, and 2 GPUs (K20m). This partition has a time limit of 4 hours.&lt;br /&gt;
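&lt;br /&gt;
As an illustration, a minimal SLURM batch script for the &#039;&#039;&#039;partnsf&#039;&#039;&#039; partition might look like the sketch below. The job name, QOS key, resource amounts, directory names, and program name are placeholders, not values prescribed by CUNY-HPCC; please consult the brief SLURM manual distributed with new accounts for the exact options in use.&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash&lt;br /&gt;
  #SBATCH --job-name=myjob            # illustrative job name&lt;br /&gt;
  #SBATCH --partition=partnsf         # main public partition (see the table above)&lt;br /&gt;
  #SBATCH --qos=myqos                 # placeholder; use the QOS key granted with your account&lt;br /&gt;
  #SBATCH --nodes=1&lt;br /&gt;
  #SBATCH --ntasks=4                  # four CPU cores&lt;br /&gt;
  #SBATCH --gres=gpu:1                # one GPU; omit this line for CPU-only jobs&lt;br /&gt;
  #SBATCH --time=24:00:00             # must stay within the partition time limit&lt;br /&gt;
  # stage input data from the home directory to the fast /scratch space&lt;br /&gt;
  cp -r /global/u/$USER/myproject /scratch/$USER/&lt;br /&gt;
  cd /scratch/$USER/myproject&lt;br /&gt;
  srun ./my_program                   # illustrative executable name&lt;br /&gt;
  # stage results back to /global/u for long-term preservation&lt;br /&gt;
  cp -r results /global/u/$USER/myproject/&lt;br /&gt;
&lt;br /&gt;
Such a script, saved for example as myjob.slurm, would then be submitted and monitored with:&lt;br /&gt;
&lt;br /&gt;
  sbatch myjob.slurm                  # submit the batch script to SLURM&lt;br /&gt;
  squeue -u $USER                     # check the status of your jobs&lt;br /&gt;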
&lt;br /&gt;
= Hours of Operation =&lt;br /&gt;
In order to maximize the use of resources, HPCC applies a “rolling” maintenance scheme across all systems. When downtime is needed, HPCC will notify all users a week or more in advance (unless an emergency situation occurs). Typically, the fourth Tuesday morning of each month, from 8:00 AM to 12:00 PM, is reserved (but not always used) for scheduled maintenance. Please plan accordingly. Unplanned maintenance to remedy system-related problems may be scheduled as needed outside the above-mentioned days. Reasonable attempts will be made to inform users running on those systems when these needs arise. Note that users are strongly encouraged to use checkpoints in their jobs.&lt;br /&gt;
&lt;br /&gt;
= User Support =&lt;br /&gt;
Users are strongly encouraged to read this Wiki carefully before submitting ticket(s) for help. In particular, the sections on compiling and running parallel programs, and the section on the SLURM batch queueing system will give you the essential knowledge needed to use the CUNY HPCC systems.  We have strived to maintain the most uniform user applications environment possible across the Center&#039;s systems to ease the transfer of applications&lt;br /&gt;
and run scripts among them.  &lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY&lt;br /&gt;
community in parallel programming techniques, HPC computing architecture, and the essentials of using our&lt;br /&gt;
systems. Please follow our mailings on the subject and feel free to inquire about such courses. We regularly schedule training visits and classes at the various CUNY campuses. Please let us know if such a training visit is of interest. In the past, topics have included an overview of parallel programming, GPU programming and architecture, using the evolutionary biology software at the HPC Center, the SLURM queueing system at the CUNY HPC Center, mixed GPU-MPI and OpenMP programming, etc. Staff have also presented guest lectures in formal classes throughout the CUNY campuses.&lt;br /&gt;
&lt;br /&gt;
If you have problems accessing your account and cannot login to the ticketing service, please send an email to:&lt;br /&gt;
&lt;br /&gt;
  [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu] &lt;br /&gt;
&lt;br /&gt;
= Warnings and modes of operation =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. hpchelp@csi.cuny.edu is for questions and account-help communication &#039;&#039;&#039;only&#039;&#039;&#039; and does not accept tickets unless the ticketing system is not operational. For tickets, please use the ticketing system mentioned above. This ensures that the person on staff with the most appropriate skill set and job-related responsibility will respond to your questions. During the business week you should expect a response within 48 hours, often even a same-day response. During the weekend you may not get any response.&lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;&#039;E-mails to hpchelp@csi.cuny.edu must have a valid CUNY e-mail as the reply address.&#039;&#039;&#039; Messages originating from public mailers (Google, Hotmail, etc.) are filtered out.&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;&#039;Do not send questions to individual CUNY HPC Center staff members directly.&#039;&#039;&#039;  These will be returned to the sender with a polite request to submit a ticket or email the Helpline.  This applies to replies to initial questions as well.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff members are focused on providing high quality support to its user community, but compared&lt;br /&gt;
to other HPC Centers of similar size &#039;&#039;&#039;our staff is extremely lean&#039;&#039;&#039;.  Please make full use of the tools that we have provided (especially&lt;br /&gt;
the Wiki), and feel free to offer suggestions for improved service.  We hope and expect your experience in using&lt;br /&gt;
our systems will be predictably good and productive.&lt;br /&gt;
&lt;br /&gt;
= User Manual =&lt;br /&gt;
The old version of the user manual provides PBS, not SLURM, batch scripts as examples. CUNY-HPCC currently uses the SLURM scheduler, so users must rely only on the updated brief SLURM manual distributed with new accounts, or ask CUNY-HPCC for a copy of it.&lt;/div&gt;</summary>
		<author><name>Alex</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Administrative_Information&amp;diff=996</id>
		<title>Administrative Information</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Administrative_Information&amp;diff=996"/>
		<updated>2026-04-15T18:45:09Z</updated>

		<summary type="html">&lt;p&gt;Alex: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
==How to get an account==&lt;br /&gt;
&lt;br /&gt;
=== Definitions and procedures ===&lt;br /&gt;
The CUNY-HPCC operates on a cost recovery scheme, which mandates that all accounts be linked to research projects or class accounts. Research accounts are those associated with projects sponsored by the Principal Investigator (PI) who leads the project. These accounts are associated with the project’s title, funding, and duration. Class accounts are for the duration of the class and are associated with the instructor or lecturer. &amp;lt;u&amp;gt;No other accounts are permitted.&amp;lt;/u&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A Principal Investigator (PI) at CUNY is defined as the lead researcher responsible for the design, execution, and management of a research project. The PI ensures compliance with regulations and oversees the project’s financial aspects. The PI is a faculty member or a qualified researcher at CUNY who has the authority to apply for research funding and manage the project.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The procedure to open an account is as follows:&lt;br /&gt;
&lt;br /&gt;
Step 1: Creation of a sponsor account (PI account) - form A or B. At this step, the PI must create an account for themselves and provide information about the project title, funding, and duration. The request for resources is not mandatory.&lt;br /&gt;
&lt;br /&gt;
Step 2: Upon creating the account, the PI will receive a unique code that must be shared with members of a group (students and postdocs) who require an account on HPCC.&lt;br /&gt;
&lt;br /&gt;
Step 3: Members of the research group (lab) and academic collaborators can apply for an account at CUNY-HPCC by using form C, D, E, or F. It is mandatory to use the code mentioned in Step 2 (from the CUNY PI) in these forms.&lt;br /&gt;
&lt;br /&gt;
Step 4: The PI should assign students to their project. &lt;br /&gt;
&lt;br /&gt;
===Accounts overview===&lt;br /&gt;
All users of HPCC resources must register with HPCC to obtain an account as outlined in the table below. Users are encouraged to create an account on the HPCC web portal at:&lt;br /&gt;
&lt;br /&gt;
hpchelp.csi.cuny.edu&lt;br /&gt;
&lt;br /&gt;
Each user account is issued to an individual user and should not be shared. HPCC will communicate exclusively via CUNY emails to users from groups A to E. For users from groups F and G, communication will be facilitated through their verified work accounts, CCed to the CUNY collaborator (for group F only). Additionally, if resources are available and at the discretion of the CUNY-HPCC director, external researchers can obtain an external research account (type G) at CUNY-HPCC by renting HPC resources and paying the full cost recovery fee in advance. Please contact the HPCC director for further details. &lt;br /&gt;
{| class=&amp;quot;wikitable sortable mw-collapsible&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!User accounts for:&lt;br /&gt;
!Type&lt;br /&gt;
!Renewal schedule&lt;br /&gt;
!Renewal cycles&lt;br /&gt;
!Expiration conditions&lt;br /&gt;
!Mandatory conditions&lt;br /&gt;
|-&lt;br /&gt;
|Faculty, Research Staff&lt;br /&gt;
|A&lt;br /&gt;
|Renews every year at the beginning of Fall semester&lt;br /&gt;
|Unlimited&lt;br /&gt;
|Non renewed accounts are disabled in 15 days from renewal date, data in home directory is removed after 90 days. Backup data  have rollover time of 30 days. &lt;br /&gt;
|CUNY EID, Valid CUNY e-mail&lt;br /&gt;
|-&lt;br /&gt;
|Adjunct Faculty &lt;br /&gt;
|B&lt;br /&gt;
|Renews every semester (Fall/Spring) &lt;br /&gt;
|Unlimited&lt;br /&gt;
|Non renewed accounts are disabled in 15 days from renewal date, data in home directory is removed after 90 days. Backup data  have rollover time of 30 days.  &lt;br /&gt;
|CUNY EID, Valid CUNY e-mail.  &lt;br /&gt;
|-&lt;br /&gt;
|Doctoral Graduate Students&lt;br /&gt;
|C&lt;br /&gt;
|Renews every year at the beginning of the Fall semester&lt;br /&gt;
|14&lt;br /&gt;
|Non renewed accounts are disabled in 15 days from renewal date, data in home directory is removed after 90 days. Backup data  have rollover time of 30 days.  &lt;br /&gt;
|CUNY EID, Valid CUNY E-mail, For &#039;&#039;&#039;PhD students the first E-mail is their GC E-mail address. Second mail is the college E-mail.&#039;&#039;&#039; &lt;br /&gt;
|-&lt;br /&gt;
|Master Students&lt;br /&gt;
|D&lt;br /&gt;
|Renews every semester (Fall/Spring)&lt;br /&gt;
|8&lt;br /&gt;
|Non renewed accounts are disabled in 15 days from renewal date, data in home directory is removed after 90 days. Backup data  have rollover time of 30 days.&lt;br /&gt;
|CUNY EID, Valid CUNY e-mail. First E-mail is the college E-mail address. &lt;br /&gt;
|-&lt;br /&gt;
|Undergraduate Students &lt;br /&gt;
|E&lt;br /&gt;
|Renews every semester (Fall/Spring)&lt;br /&gt;
|8&lt;br /&gt;
|Non renewed accounts are disabled in 7 days from renewal date, data and home directory are removed after 15 days. No backup of any data. &lt;br /&gt;
|CUNY EID, valid CUNY e-mail. The first e-mail is the college e-mail address. &#039;&#039;&#039;These accounts are for class use only.&#039;&#039;&#039; &#039;&#039;&#039;Undergraduate students doing research work must have a faculty sponsor and be registered in a project led by a legitimate PI.&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Academic Collaborators&lt;br /&gt;
|F&lt;br /&gt;
|Renews once a year (Fall) for the duration of a project&lt;br /&gt;
|Unlimited&lt;br /&gt;
|Non renewed accounts are disabled in 15 days from renewal date, data in home directory is removed after 90 days. Backup data  have rollover time of 30 days.  &lt;br /&gt;
|other institution EID, work e-mail and valid CUNY collaborator E-mail&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Public and Private Sector Partners&lt;br /&gt;
|G&lt;br /&gt;
|No Renewal. Good only for the duration of contract. &lt;br /&gt;
|NA&lt;br /&gt;
|Account expires at the date of expiring of the contract. &lt;br /&gt;
|state/federal ID, verified work e-mail. &#039;&#039;&#039;Advance payment of the full cost for the rented resource.&#039;&#039;&#039;&lt;br /&gt;
|}  &lt;br /&gt;
&lt;br /&gt;
Users who missed renewal by less than 90 days should contact HPCC via e-mail to &#039;&#039;&#039;hpchelp@csi.cuny.edu&#039;&#039;&#039; for account recovery. All users must inform HPCC of changes in their academic status. It is mandatory to provide information (or NA) for all points in the list below. Please do not forget to provide information about past and pending &#039;&#039;&#039;&amp;lt;u&amp;gt;publications&amp;lt;/u&amp;gt;&#039;&#039;&#039; and funded projects, as well as &amp;lt;u&amp;gt;information about your locally available resources (local servers and workstations/desktops only).&amp;lt;/u&amp;gt; Think carefully about the resources needed and try to estimate them as accurately as possible. Note that &#039;&#039;&#039;by applying for and obtaining an account, the user agrees to the HPCC End User Policy (EUP) and Mandatory Security Requirements for Access (MSRA).&#039;&#039;&#039;&lt;br /&gt;
{| class=&amp;quot;wikitable sortable mw-collapsible&amp;quot;&lt;br /&gt;
|+Required Information for opening of  HPCC account. Please provide information in all fields and/or mark NA when needed. &lt;br /&gt;
!&lt;br /&gt;
! rowspan=&amp;quot;26&amp;quot; |&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;&amp;lt;big&amp;gt;For All CUNY Faculty, Staff And Graduate Students&amp;lt;/big&amp;gt;&#039;&#039;&#039;  &#039;&#039;&#039;&amp;lt;big&amp;gt;(A ,B,C,D)&amp;lt;/big&amp;gt;&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Full name as stated at the CUNY ID card (e.g. John Samuel Doe):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;CUNY EID and valid CUNY e-mail (e.g. John A. Smith  22341356 jsmith@csi.cuny.edu):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|CUNY &#039;&#039;Academic status ( faculty, adjunct faculty, PhD student, MS student, research staff):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;Primary&#039;&#039;&#039; Affiliation within CUNY - campus name and Department ( e.g Hunter College, Biology):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;Secondary&#039;&#039;&#039; CUNY affiliation if any. Provide campus name and Department (e.g. Graduate Center, Biology):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Name, department and college affiliation of PI/Advisor (e.g. John Smith, Biology, Hunter College):&lt;br /&gt;
|-&lt;br /&gt;
|If out of College of Staten Island provide description of  local resources available. &lt;br /&gt;
:Please state NONE if you do not have access to local computational resources. Otherwise provide:&lt;br /&gt;
::College (e.g. Hunter)&lt;br /&gt;
::Type of resource (e.g. Department cluster):&lt;br /&gt;
:::&#039;&#039;- number of nodes:&#039;&#039;&lt;br /&gt;
:::&#039;&#039;- number of cores per node:&#039;&#039;&lt;br /&gt;
:::&#039;&#039;-- memory per node:&#039;&#039;&lt;br /&gt;
:::&#039;&#039;- total number of GPU for server:&#039;&#039;&lt;br /&gt;
:::&#039;&#039;- number of GPU per node:&#039;&#039;&lt;br /&gt;
:::&#039;&#039;-&#039;&#039; &#039;&#039;type of GPU (list of all types, e.g. 2 x K20m, 4 x K80):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Resources needed for the project:&lt;br /&gt;
::- CPU Cores (e.g 1000):&lt;br /&gt;
::- GPU options (e.g 2 x V100/16 GB):&lt;br /&gt;
::- V100/16 GB -&lt;br /&gt;
::- V100/32 GB -  &lt;br /&gt;
::- L40/48 GB - &lt;br /&gt;
::- A30/24 GB -&lt;br /&gt;
::- A40/24 GB - &lt;br /&gt;
::- A100/40 GB - &lt;br /&gt;
::- A100/80 GB - &lt;br /&gt;
::- Storage Space (above 50 GB)&lt;br /&gt;
::- Backup of data (Y/N):&lt;br /&gt;
::- Archive of data (Y/N):&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;Title of the project:&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Short Description of the  project (up to 100 words): &lt;br /&gt;
|-&lt;br /&gt;
|Funding sources of the project (e.g. NSF grant #, CUNY):&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Consent that HPCC will be cited properly (see our wiki for details) in all your published work &amp;lt;u&amp;gt;including conference presentations, posters and talks.&amp;lt;/u&amp;gt;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Number of refereed publications relevant to the project:&lt;br /&gt;
|-&lt;br /&gt;
|Pending publication relevant to the project: &lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;big&amp;gt;&#039;&#039;&#039;&#039;&#039;For All External (not CUNY) Project Collaborators and Researchers (F,G)&#039;&#039;&#039;&#039;&#039;&amp;lt;/big&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Full name as stated at the state/federal ID or EID from other  Academic Institution (e.g. John Samuel Doe):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Affiliation outside CUNY if any(e.g. Rutgers University) and valid professional e-mail: ( e.g. John Doe, Rutgers University, jd@rutgers.edu):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Department at NON CUNY  Academic Institution (e.g. MIS Rutgers):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Non CUNY-email (collaborator/external contact):&lt;br /&gt;
|-&lt;br /&gt;
|Status of the collaborator (Academic: e.g. Professor; Partner: e.g. NVIDIA lab):&lt;br /&gt;
|-&lt;br /&gt;
|Status of the external researcher(s) (e.g. principal architect NVIDIA):&lt;br /&gt;
|-&lt;br /&gt;
|Resources needed (example: Cores 100; Time 10 000 hours Memory per core 8 GB, GPU cores  2 GPU hours 100 Storage 100GB):&lt;br /&gt;
|-&lt;br /&gt;
|Description of available local resources. Please state NONE if you do not have access to local computational resources. Otherwise provide:&lt;br /&gt;
&#039;&#039;type of computational  (cluster, advanced workstation):&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;- number of nodes:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;- number of cores per node:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;-- memory per node:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;- total number of GPU for server:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;- number of GPU per node:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;- type of GPU (list of all types, e.g. 2 x K20m, 4 x K80):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Consent that HPCC will be cited properly (see our wiki for details) in all your published work &amp;lt;u&amp;gt;including conferences and talks.&amp;lt;/u&amp;gt;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&amp;lt;big&amp;gt;&#039;&#039;&#039;For All CUNY Graduate and Undergraduate Classes (E)&#039;&#039;&#039;&amp;lt;/big&amp;gt;&#039;&#039;&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Full name as stated at the CUNY ID card (e.g. John Samuel Doe):&#039;&#039;&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;CUNY EID and valid CUNY e-mail (e.g. 22341356):&#039;&#039;&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;Valid CUNY e-mail.&#039;&#039;&#039; Public emails are not accepted (e.g. azho@cix.csi.cuny.edu):  &lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|Class ID (e.g. CS 220):&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|Class Section (e.g. 02):&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|College (e.g. Baruch College):&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|Name of the Professor:&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|Term (e.g. Fall 2025): &lt;br /&gt;
!&lt;br /&gt;
|}&lt;br /&gt;
Upon creation, every research user account is provided with a home directory of 50 GB (with a maximum of 10,000 files on /global/u) mounted as /global/u/&amp;lt;userid&amp;gt;. If necessary, a user may request an increase in the size of their home directory. The HPC Center will endeavor to accommodate reasonable requests. If you anticipate having more than 10,000 files, please compress several small files into a single larger zip file. Please ensure that only organized and relevant information is stored in your space to optimize the utilization of the existing storage. &lt;br /&gt;
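&lt;br /&gt;
For instance (a sketch; the file and directory names below are placeholders), many small files can be packed into a single archive before storing them in the home directory:&lt;br /&gt;
&lt;br /&gt;
  # pack a directory containing many small files into a single zip archive&lt;br /&gt;
  zip -r results_2026.zip results_directory/&lt;br /&gt;
  # list the archive contents without extracting them&lt;br /&gt;
  unzip -l results_2026.zip&lt;br /&gt;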
&lt;br /&gt;
Student class accounts (group E) are provided with a home directory of 10 GB. Please note that class accounts and data will be deleted 30 days after the semester concludes, unless otherwise agreed upon. Students are responsible for backing up their own data prior to the end of the semester.&lt;br /&gt;
&lt;br /&gt;
Upon account establishment, only the user has read/write access to their files. The user can modify their UNIX permissions to grant others in their group read/write access to their files.&lt;br /&gt;
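&lt;br /&gt;
For example (a sketch; the directory name is a placeholder), group members can be given access to a shared directory with standard UNIX commands:&lt;br /&gt;
&lt;br /&gt;
  # let members of your UNIX group read (and traverse) a shared directory&lt;br /&gt;
  chmod -R g+rX /global/u/$USER/shared_results&lt;br /&gt;
  # add group write access as well, if desired&lt;br /&gt;
  chmod -R g+rwX /global/u/$USER/shared_results&lt;br /&gt;
  # verify the resulting permissions&lt;br /&gt;
  ls -ld /global/u/$USER/shared_results&lt;br /&gt;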
&lt;br /&gt;
Kindly inform the HPC Center if user accounts require removal or addition to a specific group. Please refer to the policies outlined below for account management. It is important to note that accounts are not perpetual and will be removed if they are not accessed or active (as detailed below). &lt;br /&gt;
&lt;br /&gt;
=== User accounts policies ===&lt;br /&gt;
The CUNY HPCC implements stringent security measures for user account management. The institution employs an “account period” system. Account periods vary depending on the account type: one year for accounts in the A, C, E, and F categories, and one semester for accounts in the B and D categories. All accounts undergo periodic reviews, and inactive accounts are promptly removed. All student accounts automatically expire and are deleted after each semester, unless an advisor requests an extension.&lt;br /&gt;
&lt;br /&gt;
User accounts for groups A, C, E, and F must be renewed annually by September 30th. User accounts in groups B and D must be renewed within two weeks after each semester. Accounts that remain inactive for one account period or are not renewed are automatically disabled or locked and will be deleted 60 days after the lockout. The deletion of a specific account signifies the permanent removal of all data associated with that account.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Reset Password&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Users must use the automatic password reset system. Click on [https://hpcauth1.csi.cuny.edu/reset/ Reset Password]. Upon resetting, users will receive their individual security token at the e-mail address registered with HPCC.&lt;br /&gt;
&lt;br /&gt;
===Close of account===&lt;br /&gt;
If a user wishes to close their account, please contact CUNY-HPCC at HPCHelp@csi.cuny.edu. Supervisors who wish to modify the access of their researchers and/or students should contact the HPCC to remove, add, or modify access. User accounts that are not accessed or renewed for more than one year and one day will be purged along with any data associated with the account. User accounts that are not renewed on time will be locked, and users must contact HPCC to recover access.&lt;br /&gt;
&lt;br /&gt;
=== Message of the day (MOTD) ===&lt;br /&gt;
Users are encouraged to read the “Message of the day” (MOTD), which is displayed to the user upon logging onto a system. The MOTD provides information on scheduled maintenance times when systems will be unavailable and/or important changes in the environment that matter to the user community. The MOTD is the HPC Center’s only efficient mechanism for communicating with the broader user community, as bulk email messages are often blocked by CUNY SPAM filters.&lt;br /&gt;
&lt;br /&gt;
===   Required citations ===&lt;br /&gt;
The CUNY HPC Center appreciates the support it has received from the National Science Foundation (NSF).  It is the policy of NSF that researchers who are funded by NSF or who make use of facilities funded by NSF acknowledge the contribution of NSF by including the following citation in their papers and presentations:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;This research was supported, in part, under National Science Foundation Grants: CNS-0958379, CNS-0855217, ACI-1126113 and OAC-2215760 (2022) and the City University of New York High Performance Computing Center at the College of Staten Island.&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The HPC Center, therefore, requests its users to follow this procedure as it helps the Center to demonstrate that NSF’s investments aided the research and educational missions of the University.&lt;br /&gt;
&lt;br /&gt;
== Reporting requirements ==&lt;br /&gt;
The Center reports on its support of the research and educational community to both funding agencies and CUNY on an annual basis. Citations are an important factor included in these reports. Therefore, it is mandatory for users to send copies of research papers developed, in part, using HPC Center resources to [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu]. Accounts of users who violate this requirement may not be renewed. Reporting results obtained with HPC resources also helps the Center keep abreast of user research directions and needs.&lt;br /&gt;
&lt;br /&gt;
== Funding of computational resources and storage ==&lt;br /&gt;
Systems at the HPC Center are purchased with grants from the National Science Foundation (NSF), grants from NYC, a grant from DASNY, and a grant from the CUNY Office of the CIO. In addition, all systems in the condo tier are purchased with direct funds from research groups. The largest financial support comes from &#039;&#039;&#039;NSF MRI grants (more than 80% of all funding).&#039;&#039;&#039; CUNY&#039;s own investment constitutes &#039;&#039;&#039;8.6%&#039;&#039;&#039; of all funds. Here is the list of all grants for CUNY-HPCC.&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;PFSS and GPU Nodes:&#039;&#039;&#039; NSF Grant OAC-2215760 (operational) &lt;br /&gt;
:&#039;&#039;&#039;DSMS&#039;&#039;&#039;, NSF Grant ACI-1126113 (server is partially retired) &lt;br /&gt;
:&#039;&#039;&#039;BLUE MOON&#039;&#039;&#039;, Grant NYC 042-ST030-015 (operational)&lt;br /&gt;
:&#039;&#039;&#039;CRYO,&#039;&#039;&#039; Grant DASNY 208684-000 OP (operational) &lt;br /&gt;
:&#039;&#039;&#039;ANDY&#039;&#039;&#039;, NSF Grant CNS-0855217 and the New York City Council through the efforts of Borough President James Oddo (server is fully retired)&lt;br /&gt;
:&#039;&#039;&#039;APPEL&#039;&#039;&#039;, New York State Regional Economic Development Grant through the efforts of State Senator Diane Savino (operational)&lt;br /&gt;
:&#039;&#039;&#039;PENZIAS&#039;&#039;&#039;, The Office of the CUNY Chief Information Officer (server is partially retired)&lt;br /&gt;
:&#039;&#039;&#039;SALK&#039;&#039;&#039;, NSF Grant CNS-0958379 and a New York State Regional Economic Development Grant (server is fully retired)&lt;/div&gt;</summary>
		<author><name>Alex</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Administrative_Information&amp;diff=995</id>
		<title>Administrative Information</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Administrative_Information&amp;diff=995"/>
		<updated>2026-04-15T17:35:52Z</updated>

		<summary type="html">&lt;p&gt;Alex: /* Definitions and procedures */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
==How to get an account==&lt;br /&gt;
&lt;br /&gt;
=== Definitions and procedures ===&lt;br /&gt;
The CUNY-HPCC operates on a cost recovery scheme, which mandates that all accounts be linked to research projects or class accounts. Research accounts are those associated with projects sponsored by the Principal Investigator (PI) who leads the project. These accounts are associated with the project’s title, funding, and duration. Class accounts are for the duration of the class and are associated with the instructor or lecturer. &amp;lt;u&amp;gt;No other accounts are permitted.&amp;lt;/u&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A Principal Investigator (PI) at CUNY is defined as the lead researcher responsible for the design, execution, and management of a research project. The PI ensures compliance with regulations and oversees the project’s financial aspects. The PI is a faculty member or a qualified researcher at CUNY who has the authority to apply for research funding and manage the project.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The procedure to open an account is as follows:&lt;br /&gt;
&lt;br /&gt;
Step 1: Creation of a sponsor account (PI account) - form A or B. At this step, the PI must create an account for themselves and provide information about the project title, funding, and duration. The request for resources is not mandatory.&lt;br /&gt;
&lt;br /&gt;
Step 2: Upon creating the account, the PI will receive a unique code that must be shared with members of a group (students and postdocs) who require an account on HPCC.&lt;br /&gt;
&lt;br /&gt;
Step 3: Members of the research group (lab) and academic collaborators can apply for an account at CUNY-HPCC by using form C, D, E, or F. It is mandatory to use the code mentioned in Step 2 (from the CUNY PI) in these forms.&lt;br /&gt;
&lt;br /&gt;
Step 4: The PI should assign students to their project. &lt;br /&gt;
&lt;br /&gt;
===Accounts overview===&lt;br /&gt;
All users of HPCC resources must register with HPCC to obtain an account as outlined in the table below. Users are encouraged to create an account on the HPCC web portal at:&lt;br /&gt;
&lt;br /&gt;
hpchelp.csi.cuny.edu&lt;br /&gt;
&lt;br /&gt;
Each user account is issued to an individual user and should not be shared. HPCC will communicate exclusively via CUNY emails to users from groups A to E. For users from groups F and G, communication will be facilitated through their verified work accounts, CCed to the CUNY collaborator (for group F only). Additionally, if resources are available and at the discretion of the CUNY-HPCC director, external researchers can obtain an external research account (type G) at CUNY-HPCC by renting HPC resources and paying the full cost recovery fee in advance. Please contact the HPCC director for further details. &lt;br /&gt;
{| class=&amp;quot;wikitable sortable mw-collapsible&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!User accounts for:&lt;br /&gt;
!Type&lt;br /&gt;
!Conditions&lt;br /&gt;
!Renewal cycles&lt;br /&gt;
!Conditions&lt;br /&gt;
!Mandatory conditions&lt;br /&gt;
|-&lt;br /&gt;
|Faculty, Research Staff&lt;br /&gt;
|A&lt;br /&gt;
|Renews every year at the beginning of Fall semester&lt;br /&gt;
|Unlimited&lt;br /&gt;
|Non-renewed accounts are disabled 15 days after the renewal date; data in the home directory are removed after 90 days. Backup data have a rollover time of 30 days. &lt;br /&gt;
|CUNY EID, Valid CUNY e-mail&lt;br /&gt;
|-&lt;br /&gt;
|Adjunct Faculty &lt;br /&gt;
|B&lt;br /&gt;
|Renews every semester (Fall/Spring) &lt;br /&gt;
|Unlimited&lt;br /&gt;
|Non-renewed accounts are disabled 15 days after the renewal date; data in the home directory are removed after 90 days. Backup data have a rollover time of 30 days. &lt;br /&gt;
|CUNY EID, Valid CUNY e-mail.  &lt;br /&gt;
|-&lt;br /&gt;
|Doctoral Graduate Students&lt;br /&gt;
|C&lt;br /&gt;
|Renews every year at the beginning of the Fall semester&lt;br /&gt;
|14&lt;br /&gt;
|Non-renewed accounts are disabled 15 days after the renewal date; data in the home directory are removed after 90 days. Backup data have a rollover time of 30 days. &lt;br /&gt;
|CUNY EID, valid CUNY e-mail. For &#039;&#039;&#039;PhD students, the first e-mail is their GC e-mail address; the second is the college e-mail.&#039;&#039;&#039; &lt;br /&gt;
|-&lt;br /&gt;
|Master Students&lt;br /&gt;
|D&lt;br /&gt;
|Renews every semester (Fall/Spring)&lt;br /&gt;
|8&lt;br /&gt;
|Non-renewed accounts are disabled 15 days after the renewal date; data in the home directory are removed after 90 days. Backup data have a rollover time of 30 days.&lt;br /&gt;
|CUNY EID, Valid CUNY e-mail. First E-mail is the college E-mail address. &lt;br /&gt;
|-&lt;br /&gt;
|Undergraduate Students &lt;br /&gt;
|E&lt;br /&gt;
|Renews every semester (Fall/Spring)&lt;br /&gt;
|8&lt;br /&gt;
|Non-renewed accounts are disabled 7 days after the renewal date; data and the home directory are removed after 15 days. No backup of any data. &lt;br /&gt;
|CUNY EID, valid CUNY e-mail. The first e-mail is the college e-mail address. &#039;&#039;&#039;These accounts are for class use only.&#039;&#039;&#039; &#039;&#039;&#039;Undergraduate students doing research work must have a faculty sponsor and be registered in a project led by a legitimate PI.&#039;&#039;&#039; &lt;br /&gt;
|-&lt;br /&gt;
|Academic Collaborators&lt;br /&gt;
|F&lt;br /&gt;
|Renews once a year (Fall) for the duration of a project&lt;br /&gt;
|Unlimited&lt;br /&gt;
|Non-renewed accounts are disabled 15 days after the renewal date; data in the home directory are removed after 90 days. Backup data have a rollover time of 30 days.  &lt;br /&gt;
|other institution EID, work e-mail and valid CUNY collaborator E-mail&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Public and Private Sector Partners&lt;br /&gt;
|G&lt;br /&gt;
|No renewal. Valid only for the duration of the contract. &lt;br /&gt;
|NA&lt;br /&gt;
|Account expires on the expiration date of the contract. &lt;br /&gt;
|State/federal ID, verified work e-mail. &#039;&#039;&#039;Advance payment of the full cost for rented resources.&#039;&#039;&#039;&lt;br /&gt;
|}  &lt;br /&gt;
&lt;br /&gt;
Users who missed renewal by fewer than 90 days should contact HPCC via e-mail at &#039;&#039;&#039;hpchelp@csi.cuny.edu&#039;&#039;&#039; for account recovery. All users must inform HPCC of changes in their academic status. It is mandatory to provide information (or NA) for all points in the list below. Please do not forget to provide information about past and pending &#039;&#039;&#039;&amp;lt;u&amp;gt;publications&amp;lt;/u&amp;gt;&#039;&#039;&#039; and funded projects, and &amp;lt;u&amp;gt;information about your locally available resources (local servers and workstations/desktops only).&amp;lt;/u&amp;gt;  Think carefully about the resources needed and try to estimate them as accurately as possible.  Note that &#039;&#039;&#039;by applying for and obtaining an account, the user agrees to the HPCC End User Policy (EUP) and the Mandatory Security Requirements for Access (MSRA).&#039;&#039;&#039;&lt;br /&gt;
{| class=&amp;quot;wikitable sortable mw-collapsible&amp;quot;&lt;br /&gt;
|+Required information for opening an HPCC account. Please provide information in all fields and/or mark NA where needed. &lt;br /&gt;
!&lt;br /&gt;
! rowspan=&amp;quot;26&amp;quot; |&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;&amp;lt;big&amp;gt;For All CUNY Faculty, Staff And Graduate Students&amp;lt;/big&amp;gt;&#039;&#039;&#039;  &#039;&#039;&#039;&amp;lt;big&amp;gt;(A ,B,C,D)&amp;lt;/big&amp;gt;&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Full name as stated at the CUNY ID card (e.g. John Samuel Doe):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;CUNY EID and valid CUNY e-mail (e.g. John A. Smith  22341356 jsmith@csi.cuny.edu):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|CUNY &#039;&#039;Academic status ( faculty, adjunct faculty, PhD student, MS student, research staff):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;Primary&#039;&#039;&#039; Affiliation within CUNY - campus name and Department ( e.g Hunter College, Biology):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;Secondary&#039;&#039;&#039; CUNY affiliation if any. Provide campus name and Department (e.g. Graduate Center, Biology):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Name, department and college affiliation of PI/Advisor (e.g. John Smith, Biology, Hunter College):&lt;br /&gt;
|-&lt;br /&gt;
|If you are outside the College of Staten Island, provide a description of the local resources available. &lt;br /&gt;
:Please state NONE if you do not have access to local computational resources. Otherwise provide:&lt;br /&gt;
::College (e.g. Hunter)&lt;br /&gt;
::Type of resource (e.g. Department cluster):&lt;br /&gt;
:::&#039;&#039;- number of nodes:&#039;&#039;&lt;br /&gt;
:::&#039;&#039;- number of cores per node:&#039;&#039;&lt;br /&gt;
:::&#039;&#039;-- memory per node:&#039;&#039;&lt;br /&gt;
:::&#039;&#039;- total number of GPU for server:&#039;&#039;&lt;br /&gt;
:::&#039;&#039;- number of GPU per node:&#039;&#039;&lt;br /&gt;
:::&#039;&#039;- type of GPU (list all types, e.g. 2 x K20m, 4 x K80):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Resources needed for the project:&lt;br /&gt;
::- CPU Cores (e.g 1000):&lt;br /&gt;
::- GPU options (e.g 2 x V100/16 GB):&lt;br /&gt;
::- V100/16 GB -&lt;br /&gt;
::- V100/32 GB -  &lt;br /&gt;
::- L40/48 GB - &lt;br /&gt;
::- A30/24 GB -&lt;br /&gt;
::- A40/24 GB - &lt;br /&gt;
::- A100/40 GB - &lt;br /&gt;
::- A100/80 GB - &lt;br /&gt;
::- Storage Space (above 50 GB)&lt;br /&gt;
::- Backup of data (Y/N):&lt;br /&gt;
::- Archive of data (Y/N):&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;Title of the project:&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Short Description of the  project (up to 100 words): &lt;br /&gt;
|-&lt;br /&gt;
|Funding sources of the project (e.g. NSF grant #, CUNY):&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Consent that HPCC will be cited properly (see our wiki for details) in all your published work &amp;lt;u&amp;gt;including conference presentations, posters and talks.&amp;lt;/u&amp;gt;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Number of refereed publications relevant to the project:&lt;br /&gt;
|-&lt;br /&gt;
|Pending publication relevant to the project: &lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;big&amp;gt;&#039;&#039;&#039;&#039;&#039;For All External (not CUNY) Project Collaborators and Researchers (F,G)&#039;&#039;&#039;&#039;&#039;&amp;lt;/big&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Full name as stated at the state/federal ID or EID from other  Academic Institution (e.g. John Samuel Doe):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Affiliation outside CUNY, if any (e.g. Rutgers University), and valid professional e-mail (e.g. John Doe, Rutgers University, jd@rutgers.edu):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Department at NON CUNY  Academic Institution (e.g. MIS Rutgers):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Non CUNY-email (collaborator/external contact):&lt;br /&gt;
|-&lt;br /&gt;
|Status of the collaborator (Academic: e.g. Professor; Partner: e.g. NVIDIA lab):&lt;br /&gt;
|-&lt;br /&gt;
|Status of the external researcher(s) (e.g. principal architect NVIDIA):&lt;br /&gt;
|-&lt;br /&gt;
|Resources needed (example: cores 100; time 10 000 hours; memory per core 8 GB; GPUs 2; GPU hours 100; storage 100 GB):&lt;br /&gt;
|-&lt;br /&gt;
|Description of available local resources. Please state NONE if you do not have access to local computational resources. Otherwise provide:&lt;br /&gt;
&#039;&#039;type of computational resource (cluster, advanced workstation):&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;- number of nodes:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;- number of cores per node:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;-- memory per node:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;- total number of GPU for server:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;- number of GPU per node:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;- type of GPU (list all types, e.g. 2 x K20m, 4 x K80):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Consent that HPCC will be cited properly (see our wiki for details) in all your published work &amp;lt;u&amp;gt;including conferences and talks.&amp;lt;/u&amp;gt;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&amp;lt;big&amp;gt;&#039;&#039;&#039;For All CUNY Graduate and Undergraduate Classes (E)&#039;&#039;&#039;&amp;lt;/big&amp;gt;&#039;&#039;&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Full name as stated at the CUNY ID card (e.g. John Samuel Doe):&#039;&#039;&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;CUNY EID and valid CUNY e-mail (e.g. 22341356):&#039;&#039;&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;Valid CUNY e-mail.&#039;&#039;&#039; Public emails are not accepted (e.g. azho@cix.csi.cuny.edu):  &lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|Class ID (e.g. CS 220):&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|Class Section (e.g. 02):&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|College (e.g. Baruch College):&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|Name of the Professor:&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|Term (e.g. Fall 2025): &lt;br /&gt;
!&lt;br /&gt;
|}&lt;br /&gt;
Upon creation, every research user account is provided with a 50 GB home directory (with a maximum of 10,000 files on /global/u) mounted as &amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;. If required, a user may request an increase in the size of their home directory; the HPC Center will endeavor to satisfy reasonable requests. If you expect to have more than 10,000 files, please combine the small files into a single larger zip archive. Please keep only wrangled data in your space in order to optimize use of the existing storage. &lt;br /&gt;
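&lt;br /&gt;
The sketch below is an illustration only (not an HPCC-provided script) of one way to bundle a directory of many small files into a single zip archive before placing it in your home directory; the directory and archive names are hypothetical examples.&lt;br /&gt;
&lt;pre&gt;
import zipfile
from pathlib import Path

src = Path(&#039;small_outputs&#039;)          # hypothetical directory holding many small files
archive = Path(&#039;small_outputs.zip&#039;)  # single archive that replaces them

count = 0
with zipfile.ZipFile(archive, &#039;w&#039;, compression=zipfile.ZIP_DEFLATED) as zf:
    for f in sorted(src.rglob(&#039;*&#039;)):
        if f.is_file():
            zf.write(f, arcname=str(f.relative_to(src)))
            count += 1
print(f&#039;Archived {count} files into {archive}&#039;)
&lt;/pre&gt;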
&lt;br /&gt;
Student class accounts (group D) are provided with a 10 GB home directory.  Please note that class accounts and data will be deleted 30 days after the semester ends (unless otherwise agreed upon). Students are responsible for backing up their own data prior to the end of the semester.&lt;br /&gt;
 &lt;br /&gt;
When a user account is established, only the user has read/write access to their files.  The user can change the UNIX permissions to allow others in their group to read/write their files.&lt;br /&gt;
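&lt;br /&gt;
As an illustration only (not an HPCC-provided command, and the file path is a hypothetical example), the sketch below adds group read/write permission to a file you own while leaving the other permission bits unchanged.&lt;br /&gt;
&lt;pre&gt;
import os
import stat

path = &#039;/global/u/your_userid/shared_results.dat&#039;    # replace with your own file
mode = os.stat(path).st_mode                         # current permission bits
os.chmod(path, mode | stat.S_IRGRP | stat.S_IWGRP)   # add group read and write
&lt;/pre&gt;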
&lt;br /&gt;
Please be sure to notify the HPC Center if user accounts need to be removed from or added to a specific research group. Please read the account policies below. Note that accounts do not last indefinitely; accounts that are inactive or not accessed are removed (see below).&lt;br /&gt;
&lt;br /&gt;
=== User accounts policies ===&lt;br /&gt;
CUNY HPCC applies strict security standards in user account management. HPCC uses “account periods”. The account period is &#039;&#039;&#039;one year&#039;&#039;&#039; for account types &#039;&#039;&#039;A, C, E and F&#039;&#039;&#039; and &#039;&#039;&#039;one semester&#039;&#039;&#039; for account types &#039;&#039;&#039;B&#039;&#039;&#039; and &#039;&#039;&#039;D&#039;&#039;&#039;. All accounts are periodically reviewed, and inactive accounts are removed. All student accounts will expire automatically and will be removed after each semester unless the student’s advisor requests an extension of the student’s account. All user accounts in groups &#039;&#039;&#039;A, C, E and F&#039;&#039;&#039; must be renewed once a year by September 30th. All user accounts in groups &#039;&#039;&#039;B&#039;&#039;&#039; and &#039;&#039;&#039;D&#039;&#039;&#039; must be renewed within 2 weeks after each semester. Accounts not accessed for one account period and/or not renewed are automatically disabled/locked and will be deleted 60 days after locking. Deletion of an account means unrecoverable removal of all data associated with that account.  &lt;br /&gt;
&lt;br /&gt;
===Reset Password ===&lt;br /&gt;
&lt;br /&gt;
Users must use the automatic password reset system.  Click on [https://hpcauth1.csi.cuny.edu/reset/ Reset Password].  Upon resetting, users will receive their individual security token at the e-mail address registered with HPCC.&lt;br /&gt;
&lt;br /&gt;
===Close of account===&lt;br /&gt;
If a user would like to close their account, please contact the CUNY HPC Center at HPCHelp@csi.cuny.edu. &lt;br /&gt;
Supervisors who would like to modify the access of their researchers and/or students should contact the HPC Center to remove, add, or modify access.&lt;br /&gt;
User accounts that are not accessed or renewed for more than a year and one day will be purged along with any data associated with the account. User accounts that are not renewed on time will be locked, and users must contact HPCC to have access recovered. &lt;br /&gt;
&lt;br /&gt;
=== Message of the day (MOTD) ===&lt;br /&gt;
Users are encouraged to read the &amp;quot;Message of the day&amp;quot; (MOTD), which is displayed upon logging onto a system.  The MOTD provides information on scheduled maintenance windows when systems will be unavailable and/or on important changes in the environment that affect the user community.  The MOTD is the HPC Center’s only efficient mechanism for communicating with the broader user community, as bulk e-mail messages are often blocked by CUNY SPAM filters.&lt;br /&gt;
&lt;br /&gt;
===   Required citations ===&lt;br /&gt;
The CUNY HPC Center appreciates the support it has received from the National Science Foundation (NSF).  It is the policy of NSF that researchers who are funded by NSF or who make use of facilities funded by NSF acknowledge the contribution of NSF by including the following citation in their papers and presentations:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;This research was supported, in part, under National Science Foundation Grants: CNS-0958379, CNS-0855217, ACI-1126113 and OAC-2215760 (2022) and the City University of New York High Performance Computing Center at the College of Staten Island.&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The HPC Center therefore requests that its users follow this procedure, as it helps the Center demonstrate that NSF’s investments aided the research and educational missions of the University.&lt;br /&gt;
&lt;br /&gt;
== Reporting requirements ==&lt;br /&gt;
The Center reports on its support of the research and educational community to both funding agencies and CUNY on an annual basis.  Citations are an important factor included in these reports.  Therefore, it is mandatory for users to send copies of research papers developed, in part, using HPC Center resources to [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu]. Accounts of users who violate that requirement may not be renewed.  Reporting results obtained with HPC resources also helps the Center keep abreast of users’ research directions and needs. &lt;br /&gt;
&lt;br /&gt;
== Funding of computational resources and storage ==&lt;br /&gt;
Systems at the HPC Center are purchased with grants from the National Science Foundation (NSF), grants from New York City, a grant from DASNY, and a grant from CUNY&#039;s Office of the CIO. In addition, all systems in the condo tier are purchased with direct funds from research groups. The largest financial support comes from &#039;&#039;&#039;NSF MRI grants (more than 80% of all funding).&#039;&#039;&#039;  CUNY&#039;s own investment constitutes &#039;&#039;&#039;8.6%&#039;&#039;&#039; of all funds.  Here is the list of all grants for CUNY-HPCC.  &lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;PFSS and GPU Nodes:&#039;&#039;&#039; NSF Grant OAC-2215760 (operational) &lt;br /&gt;
:&#039;&#039;&#039;DSMS&#039;&#039;&#039;, NSF Grant ACI-1126113 (server is partially retired) &lt;br /&gt;
:&#039;&#039;&#039;BLUE MOON&#039;&#039;&#039;, Grant NYC 042-ST030-015 (operational)&lt;br /&gt;
:&#039;&#039;&#039;CRYO,&#039;&#039;&#039; Grant DASNY 208684-000 OP (operational) &lt;br /&gt;
:&#039;&#039;&#039;ANDY&#039;&#039;&#039;, NSF Grant CNS-0855217 and the New York City Council through the efforts of Borough President James Oddo (server is fully retired)&lt;br /&gt;
:&#039;&#039;&#039;APPEL&#039;&#039;&#039;, New York State Regional Economic Development Grant through the efforts of State Senator Diane Savino (operational)&lt;br /&gt;
:&#039;&#039;&#039;PENZIAS&#039;&#039;&#039;, The Office of the CUNY Chief Information Officer (server is partially retired)&lt;br /&gt;
:&#039;&#039;&#039;SALK&#039;&#039;&#039;, NSF Grant CNS-0958379 and a New York State Regional Economic Development Grant (server is fully retired)&lt;/div&gt;</summary>
		<author><name>Alex</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Administrative_Information&amp;diff=994</id>
		<title>Administrative Information</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Administrative_Information&amp;diff=994"/>
		<updated>2026-04-15T17:31:34Z</updated>

		<summary type="html">&lt;p&gt;Alex: /* Definitions and procedures */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
==How to get an account==&lt;br /&gt;
&lt;br /&gt;
=== Definitions and procedures ===&lt;br /&gt;
The CUNY-HPCC operates on a cost recovery scheme, which mandates that all accounts be linked to research projects or class accounts. Research accounts are those associated with projects sponsored by the Principal Investigator (PI) who leads the project. These accounts are associated with the project’s title, funding, and duration. Class accounts are for the duration of the class and are associated with the instructor or lecturer. No other accounts are permitted.   &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A Principal Investigator (PI) at CUNY is defined as the lead researcher responsible for the design, execution, and management of a research project. The PI ensures compliance with regulations and oversees the project’s financial aspects. The PI is a faculty member or a qualified researcher at CUNY who has the authority to apply for research funding and manage the project.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The procedure to open an account is as follows:&lt;br /&gt;
&lt;br /&gt;
Step 1: Creation of a sponsor account (PI account) - form A or B. At this step, the PI must create an account for themselves and provide information about the project title, funding, and duration. The request for resources is not mandatory.&lt;br /&gt;
&lt;br /&gt;
Step 2: Upon creating the account, the PI will receive a unique code that must be shared with members of a group (students and postdocs) who require an account on HPCC.&lt;br /&gt;
&lt;br /&gt;
Step 3: Members of the research group (lab) and academic collaborators can apply for an account at CUNY-HPCC by using form C, D, E, or F. It is mandatory to use the code mentioned in Step 2 (from the CUNY PI) in these forms.&lt;br /&gt;
&lt;br /&gt;
Step 4: The PI should assign students to their project. &lt;br /&gt;
&lt;br /&gt;
===Accounts overview===&lt;br /&gt;
All users of HPCC resources must register with HPCC for an account as described in the table below. To do so, users are encouraged to create an account on the HPCC web portal: &lt;br /&gt;
&lt;br /&gt;
hpchelp.csi.cuny.edu.    &lt;br /&gt;
 &lt;br /&gt;
A user account is issued to an &#039;&#039;&#039;&#039;&#039;individual user&#039;&#039;&#039;&#039;&#039; and accounts are &#039;&#039;&#039;not to be shared&#039;&#039;&#039;.  HPCC &#039;&#039;&#039;&amp;lt;u&amp;gt;will communicate only via CUNY e-mails with users from groups A to E.&amp;lt;/u&amp;gt;&#039;&#039;&#039; HPCC will communicate with users from the &#039;&#039;&#039;F&#039;&#039;&#039; and &#039;&#039;&#039;G&#039;&#039;&#039; account types via the users&#039; verified work accounts, CCed to the CUNY collaborator (for F only). In addition, if resources are available and at the discretion of the CUNY-HPCC director, researchers external to CUNY can obtain an external research account (type G) at CUNY-HPCC by renting HPC resources and paying the full cost recovery fee in advance. Please contact the HPCC director for details. &lt;br /&gt;
{| class=&amp;quot;wikitable sortable mw-collapsible&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!User accounts for:&lt;br /&gt;
!Type&lt;br /&gt;
!Conditions&lt;br /&gt;
!Renewal cycles&lt;br /&gt;
!Conditions&lt;br /&gt;
!Mandatory conditions&lt;br /&gt;
|-&lt;br /&gt;
|Faculty, Research Staff&lt;br /&gt;
|A&lt;br /&gt;
|Renews every year at the beginning of Fall semester&lt;br /&gt;
|Unlimited&lt;br /&gt;
|Non renewed accounts are disabled in 15 days from renewal date, data in home directory is removed after 90 days. Backup data  have rollover time of 30 days. &lt;br /&gt;
|CUNY EID, Valid CUNY e-mail&lt;br /&gt;
|-&lt;br /&gt;
|Adjunct Faculty &lt;br /&gt;
|B&lt;br /&gt;
|Renews every semester (Fall/Spring) &lt;br /&gt;
|Unlimited&lt;br /&gt;
|Non renewed accounts are disabled in 15 days from renewal date, data in home directory is removed after 90 days. Backup data  have rollover time of 30 days.  &lt;br /&gt;
|CUNY EID, Valid CUNY e-mail.  &lt;br /&gt;
|-&lt;br /&gt;
|Doctoral Graduate Students&lt;br /&gt;
|C&lt;br /&gt;
|Renews every year the beginning of Fall semester&lt;br /&gt;
|14&lt;br /&gt;
|Non renewed accounts are disabled in 15 days from renewal date, data in home directory is removed after 90 days. Backup data  have rollover time of 30 days.  &lt;br /&gt;
|CUNY EID, Valid CUNY E-mail, For &#039;&#039;&#039;PhD students the first E-mail is their GC E-mail address. Second mail is the college E-mail.&#039;&#039;&#039; &lt;br /&gt;
|-&lt;br /&gt;
|Master Students&lt;br /&gt;
|D&lt;br /&gt;
|Renews every semester (Fall/Spring)&lt;br /&gt;
|8&lt;br /&gt;
|Non renewed accounts are disabled in 15 days from renewal date, data in home directory is removed after 90 days. Backup data  have rollover time of 30 days.&lt;br /&gt;
|CUNY EID, Valid CUNY e-mail. First E-mail is the college E-mail address. &lt;br /&gt;
|-&lt;br /&gt;
|Undergraduate Students &lt;br /&gt;
|E&lt;br /&gt;
|Renews every semester (Fall/Spring)&lt;br /&gt;
|8&lt;br /&gt;
|Non renewed accounts are disabled in 7 days from renewal date, data and home directory are removed after 15 days. No backup of any data. &lt;br /&gt;
|CUNY EID, Valid CUNY e-mail. First E-mail is the college E-mail address. &#039;&#039;&#039;These accounts are for class only&#039;&#039;&#039;  &#039;&#039;&#039;Undergraduate students doing research work must have faculty sponsor and be registered in a project led by legitimate PI.&#039;&#039;&#039; &lt;br /&gt;
|-&lt;br /&gt;
|Academic Collaborators&lt;br /&gt;
|F&lt;br /&gt;
|Renews once a year (Fall) for the duration of a project&lt;br /&gt;
|Unlimited&lt;br /&gt;
|Non renewed accounts are disabled in 15 days from renewal date, data in home directory is removed after 90 days. Backup data  have rollover time of 30 days.  &lt;br /&gt;
|other institution EID, work e-mail and valid CUNY collaborator E-mail&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Public and Private Sector Partners&lt;br /&gt;
|G&lt;br /&gt;
|No Renewal. Good only for the duration of contract. &lt;br /&gt;
|NA&lt;br /&gt;
|Account expires at the date of expiring of the contract. &lt;br /&gt;
|state/federal ID, verified work e-mail.&#039;&#039;&#039;Advanced pay of full cost for rented resource.&#039;&#039;&#039;&lt;br /&gt;
|}  &lt;br /&gt;
&lt;br /&gt;
Users who missed renewal by fewer than 90 days should contact HPCC via e-mail at &#039;&#039;&#039;hpchelp@csi.cuny.edu&#039;&#039;&#039; for account recovery. All users must inform HPCC of changes in their academic status. It is mandatory to provide information (or NA) for all points in the list below. Please do not forget to provide information about past and pending &#039;&#039;&#039;&amp;lt;u&amp;gt;publications&amp;lt;/u&amp;gt;&#039;&#039;&#039; and funded projects, and &amp;lt;u&amp;gt;information about your locally available resources (local servers and workstations/desktops only).&amp;lt;/u&amp;gt;  Think carefully about the resources needed and try to estimate them as accurately as possible.  Note that &#039;&#039;&#039;by applying for and obtaining an account, the user agrees to the HPCC End User Policy (EUP) and the Mandatory Security Requirements for Access (MSRA).&#039;&#039;&#039;&lt;br /&gt;
{| class=&amp;quot;wikitable sortable mw-collapsible&amp;quot;&lt;br /&gt;
|+Required Information for opening of  HPCC account. Please provide information in all fields and/or mark NA when needed. &lt;br /&gt;
!&lt;br /&gt;
! rowspan=&amp;quot;26&amp;quot; |&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;&amp;lt;big&amp;gt;For All CUNY Faculty, Staff And Graduate Students&amp;lt;/big&amp;gt;&#039;&#039;&#039;  &#039;&#039;&#039;&amp;lt;big&amp;gt;(A ,B,C,D)&amp;lt;/big&amp;gt;&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Full name as stated at the CUNY ID card (e.g. John Samuel Doe):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;CUNY EID and valid CUNY e-mail (e.g. John A. Smith  22341356 jsmith@csi.cuny.edu):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|CUNY &#039;&#039;Academic status ( faculty, adjunct faculty, PhD student, MS student, research staff):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;Primary&#039;&#039;&#039; Affiliation within CUNY - campus name and Department ( e.g Hunter College, Biology):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;Secondary&#039;&#039;&#039; CUNY affiliation if any. Provide campus name and Department (e.g. Graduate Center, Biology):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Name, department and college affiliation of PI/Advisor (e.g. John Smith, Biology, Hunter College):&lt;br /&gt;
|-&lt;br /&gt;
|If out of College of Staten Island provide description of  local resources available. &lt;br /&gt;
:Please state NONE if you do not have access to local computational resources. Otherwise provide:&lt;br /&gt;
::College (e.g. Hunter)&lt;br /&gt;
::Type of resource (e.g. Department cluster):&lt;br /&gt;
:::&#039;&#039;- number of nodes:&#039;&#039;&lt;br /&gt;
:::&#039;&#039;- number of cores per node:&#039;&#039;&lt;br /&gt;
:::&#039;&#039;-- memory per node:&#039;&#039;&lt;br /&gt;
:::&#039;&#039;- total number of GPU for server:&#039;&#039;&lt;br /&gt;
:::&#039;&#039;- number of GPU per node:&#039;&#039;&lt;br /&gt;
:::&#039;&#039;-&#039;&#039; &#039;&#039;type of GPU (list of all types e.e. 2 x K20m, 4 x K80):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Resources needed for the project:&lt;br /&gt;
::- CPU Cores (e.g 1000):&lt;br /&gt;
::- GPU options (e.g 2 x V100/16 GB):&lt;br /&gt;
::- V100/16 GB -&lt;br /&gt;
::- V100/32 GB -  &lt;br /&gt;
::- L40/48 GB - &lt;br /&gt;
::- A30/24 GB -&lt;br /&gt;
::- A40/24 GB - &lt;br /&gt;
::- A100/40 GB - &lt;br /&gt;
::- A100/80 GB - &lt;br /&gt;
::- Storage Space (above 50 GB)&lt;br /&gt;
::- Backup of data (Y/N):&lt;br /&gt;
::- Archive of data (Y/N):&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;Title of the project:&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Short Description of the  project (up to 100 words): &lt;br /&gt;
|-&lt;br /&gt;
|Funding sources of the project (e.g. NSF grant #, CUNY):&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Consent that HPCC will be cited properly (see our wiki for details) in all your published work &amp;lt;u&amp;gt;including conference presentations, posters and talks.&amp;lt;/u&amp;gt;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Number of refereed publications relevant to the project:&lt;br /&gt;
|-&lt;br /&gt;
|Pending publication relevant to the project: &lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;big&amp;gt;&#039;&#039;&#039;&#039;&#039;For All External (not CUNY) Project Collaborators and Researchers (F,G)&#039;&#039;&#039;&#039;&#039;&amp;lt;/big&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Full name as stated at the state/federal ID or EID from other  Academic Institution (e.g. John Samuel Doe):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Affiliation outside CUNY if any(e.g. Rutgers University) and valid professional e-mail: ( e.g. John Doe, Rutgers University, jd@rutgers.edu):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Department at NON CUNY  Academic Institution (e.g. MIS Rutgers):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Non CUNY-email (collaborator/external contact):&lt;br /&gt;
|-&lt;br /&gt;
|Status of the collaborator (Academic: e.g. Professor; Partner: e.g. NVIDIA lab):&lt;br /&gt;
|-&lt;br /&gt;
|Status of the external researcher(s) (e.g. principal architect NVIDIA):&lt;br /&gt;
|-&lt;br /&gt;
|Resources needed (example: Cores 100; Time 10 000 hours Memory per core 8 GB, GPU cores  2 GPU hours 100 Storage 100GB):&lt;br /&gt;
|-&lt;br /&gt;
|Description of available local resources. Please state NONE if you do not have access to local computational resources. Otherwise provide:&lt;br /&gt;
&#039;&#039;type of computational  (cluster, advanced workstation):&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;- number of nodes:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;- number of cores per node:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;-- memory per node:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;- total number of GPU for server:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;- number of GPU per node:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;- type of GPU (list of all types e.e. 2 x K20m, 4 x K80):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Consent that HPCC will be cited properly (see our wiki for details) in all your published work &amp;lt;u&amp;gt;including conferences and talks.&amp;lt;/u&amp;gt;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&amp;lt;big&amp;gt;&#039;&#039;&#039;For All CUNY Graduate and Undergraduate Classes (E)&#039;&#039;&#039;&amp;lt;/big&amp;gt;&#039;&#039;&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Full name as stated at the CUNY ID card (e.g. John Samuel Doe):&#039;&#039;&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;CUNY EID and valid CUNY e-mail (e.g. 22341356):&#039;&#039;&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;Valid CUNY e-mail.&#039;&#039;&#039; Public emails are not accepted (e.g. azho@cix.csi.cuny.edu):  &lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|Class ID (e.g. CS 220):&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|Class Section (e.g. 02):&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|College (e.g. Baruch College):&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|Name of the Professor:&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|Term (e.g. Fall 2025): &lt;br /&gt;
!&lt;br /&gt;
|}&lt;br /&gt;
Upon creation, every research user account is provided with a 50 GB home directory (with a maximum of 10,000 files on /global/u) mounted as &amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;. If required, a user may request an increase in the size of their home directory; the HPC Center will endeavor to satisfy reasonable requests. If you expect to have more than 10,000 files, please combine the small files into a single larger zip archive. Please keep only wrangled data in your space in order to optimize use of the existing storage. &lt;br /&gt;
&lt;br /&gt;
Student class accounts (group D) are provided with a 10 GB home directory.  Please note that class accounts and data will be deleted 30 days after the semester ends (unless otherwise agreed upon). Students are responsible for backing up their own data prior to the end of the semester.&lt;br /&gt;
 &lt;br /&gt;
When a user account is established, only the user has read/write access to their files.  The user can change the UNIX permissions to allow others in their group to read/write their files.&lt;br /&gt;
&lt;br /&gt;
Please be sure to notify the HPC Center if user accounts need to be removed from or added to a specific research group. Please read the account policies below. Note that accounts do not last indefinitely; accounts that are inactive or not accessed are removed (see below).&lt;br /&gt;
&lt;br /&gt;
=== User accounts policies ===&lt;br /&gt;
CUNY HPCC applies strict security standards in user account management. HPCC uses “account periods”. The account period is &#039;&#039;&#039;one year&#039;&#039;&#039; for account types &#039;&#039;&#039;A, C, E and F&#039;&#039;&#039; and &#039;&#039;&#039;one semester&#039;&#039;&#039; for account types &#039;&#039;&#039;B&#039;&#039;&#039; and &#039;&#039;&#039;D&#039;&#039;&#039;. All accounts are periodically reviewed, and inactive accounts are removed. All student accounts will expire automatically and will be removed after each semester unless the student’s advisor requests an extension of the student’s account. All user accounts in groups &#039;&#039;&#039;A, C, E and F&#039;&#039;&#039; must be renewed once a year by September 30th. All user accounts in groups &#039;&#039;&#039;B&#039;&#039;&#039; and &#039;&#039;&#039;D&#039;&#039;&#039; must be renewed within 2 weeks after each semester. Accounts not accessed for one account period and/or not renewed are automatically disabled/locked and will be deleted 60 days after locking. Deletion of an account means unrecoverable removal of all data associated with that account.  &lt;br /&gt;
&lt;br /&gt;
===Reset Password ===&lt;br /&gt;
&lt;br /&gt;
Users must use the automatic password reset system.  Click on [https://hpcauth1.csi.cuny.edu/reset/ Reset Password].  Upon resetting, users will receive their individual security token at the e-mail address registered with HPCC.&lt;br /&gt;
&lt;br /&gt;
===Close of account===&lt;br /&gt;
If a user would like to close their account, please contact the CUNY HPC Center at HPCHelp@csi.cuny.edu. &lt;br /&gt;
Supervisors who would like to modify the access of their researchers and/or students should contact the HPC Center to remove, add, or modify access.&lt;br /&gt;
User accounts that are not accessed or renewed for more than a year and one day will be purged along with any data associated with the account. User accounts that are not renewed on time will be locked, and users must contact HPCC to have access recovered. &lt;br /&gt;
&lt;br /&gt;
=== Message of the day (MOTD) ===&lt;br /&gt;
Users are encouraged to read the &amp;quot;Message of the day&amp;quot; (MOTD), which is displayed upon logging onto a system.  The MOTD provides information on scheduled maintenance windows when systems will be unavailable and/or on important changes in the environment that affect the user community.  The MOTD is the HPC Center’s only efficient mechanism for communicating with the broader user community, as bulk e-mail messages are often blocked by CUNY SPAM filters.&lt;br /&gt;
&lt;br /&gt;
===   Required citations ===&lt;br /&gt;
The CUNY HPC Center appreciates the support it has received from the National Science Foundation (NSF).  It is the policy of NSF that researchers who are funded by NSF or who make use of facilities funded by NSF acknowledge the contribution of NSF by including the following citation in their papers and presentations:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;This research was supported, in part, under National Science Foundation Grants: CNS-0958379, CNS-0855217, ACI-1126113 and OAC-2215760 (2022) and the City University of New York High Performance Computing Center at the College of Staten Island.&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The HPC Center therefore requests that its users follow this procedure, as it helps the Center demonstrate that NSF’s investments aided the research and educational missions of the University.&lt;br /&gt;
&lt;br /&gt;
== Reporting requirements ==&lt;br /&gt;
The Center reports on its support of the research and educational community to both funding agencies and CUNY on an annual basis.  Citations are an important factor included in these reports.  Therefore, it is mandatory for users to send copies of research papers developed, in part, using HPC Center resources to [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu]. Accounts of users who violate that requirement may not be renewed.  Reporting results obtained with HPC resources also helps the Center keep abreast of users’ research directions and needs. &lt;br /&gt;
&lt;br /&gt;
== Funding of computational resources and storage ==&lt;br /&gt;
Systems at the HPC Center are purchased with grants from the National Science Foundation (NSF), grants from New York City, a grant from DASNY, and a grant from CUNY&#039;s Office of the CIO. In addition, all systems in the condo tier are purchased with direct funds from research groups. The largest financial support comes from &#039;&#039;&#039;NSF MRI grants (more than 80% of all funding).&#039;&#039;&#039;  CUNY&#039;s own investment constitutes &#039;&#039;&#039;8.6%&#039;&#039;&#039; of all funds.  Here is the list of all grants for CUNY-HPCC.  &lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;PFSS and GPU Nodes:&#039;&#039;&#039; NSF Grant OAC-2215760 (operational) &lt;br /&gt;
:&#039;&#039;&#039;DSMS&#039;&#039;&#039; NSF Grant ACI-1126113 (server is partially retired) &lt;br /&gt;
:&#039;&#039;&#039;BLUE MOON&#039;&#039;&#039;, Grant NYC 042-ST030-015 (operational)&lt;br /&gt;
:&#039;&#039;&#039;CRYO,&#039;&#039;&#039; Grant DASNY 208684-000 OP (operational) &lt;br /&gt;
:&#039;&#039;&#039;ANDY&#039;&#039;&#039;, NSF Grant CNS-0855217 and the New York City Council through the efforts of Borough President James Oddo ( server is fully retired)&lt;br /&gt;
:&#039;&#039;&#039;APPEL&#039;&#039;&#039;, New York State Regional Economic Development Grant through the efforts of State Senator Diane Savino (operational)&lt;br /&gt;
:&#039;&#039;&#039;PENZIAS&#039;&#039;&#039;, The Office of the CUNY Chief Information Officer ( Server is partially retired)&lt;br /&gt;
:&#039;&#039;&#039;SALK&#039;&#039;&#039;, NSF Grant CNS-0958379 and a New York State Regional Economic Development Grant (Server is fully retired)&lt;/div&gt;</summary>
		<author><name>Alex</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Administrative_Information&amp;diff=993</id>
		<title>Administrative Information</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Administrative_Information&amp;diff=993"/>
		<updated>2026-04-15T17:21:58Z</updated>

		<summary type="html">&lt;p&gt;Alex: /* Definitions and procedures */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
==How to get an account==&lt;br /&gt;
&lt;br /&gt;
=== Definitions and procedures ===&lt;br /&gt;
CUNY-HPCC operates on a cost recovery scheme, which &#039;&#039;&#039;requires all accounts to be associated with research project(s) or to be class accounts.&#039;&#039;&#039; The accounts associated with a project are called &#039;&#039;&#039;research accounts and are sponsored by the Principal Investigator (PI) who leads the project(s).&#039;&#039;&#039; Class accounts are for the duration of the class and are associated with the teacher/lecturer of the class. No individual accounts of any other type are possible. A &#039;&#039;&#039;Principal Investigator (PI) at CUNY is defined as the lead researcher responsible for the design, execution, and management of a research project, ensuring compliance with regulations and overseeing the project&#039;s financial aspects. The PI is a &amp;lt;u&amp;gt;faculty member or a qualified researcher&amp;lt;/u&amp;gt; who has the authority to apply for research funding and manage the project.&#039;&#039;&#039;  The procedure to open an account is as follows: &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 1.&#039;&#039;&#039;  Creation of a sponsor account (PI account) - &#039;&#039;&#039;form A or B below.&#039;&#039;&#039; At this step, the PI must create an account for themselves and provide information about the project title, funding, and duration. The request for resources is not mandatory.  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 2.&#039;&#039;&#039;  Upon creating the account, the PI will receive a unique code which has to be shared with members of the group (students and postdocs) who require an account on HPCC.  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 3.&#039;&#039;&#039;  Members of the research group (lab) and academic collaborators can apply for an account at CUNY-HPCC by using form C, D, E, or F. It is mandatory to use the code mentioned in Step 2 (from the CUNY PI) in these forms.   &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 4.&#039;&#039;&#039; The PI should assign students to their project.   &lt;br /&gt;
&lt;br /&gt;
===Accounts overview===&lt;br /&gt;
All users of HPCC resources must register with HPCC for an account as described in the table below. To do so, users are encouraged to create an account on the HPCC web portal: &lt;br /&gt;
&lt;br /&gt;
hpchelp.csi.cuny.edu.    &lt;br /&gt;
 &lt;br /&gt;
A user account is issued to an &#039;&#039;&#039;&#039;&#039;individual user&#039;&#039;&#039;&#039;&#039; and accounts are &#039;&#039;&#039;not to be shared&#039;&#039;&#039;.  HPCC &#039;&#039;&#039;&amp;lt;u&amp;gt;will communicate only via CUNY e-mails with users from groups A to E.&amp;lt;/u&amp;gt;&#039;&#039;&#039; HPCC will communicate with users from the &#039;&#039;&#039;F&#039;&#039;&#039; and &#039;&#039;&#039;G&#039;&#039;&#039; account types via the users&#039; verified work accounts, CCed to the CUNY collaborator (for F only). In addition, if resources are available and at the discretion of the CUNY-HPCC director, researchers external to CUNY can obtain an external research account (type G) at CUNY-HPCC by renting HPC resources and paying the full cost recovery fee in advance. Please contact the HPCC director for details. &lt;br /&gt;
{| class=&amp;quot;wikitable sortable mw-collapsible&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!User accounts for:&lt;br /&gt;
!Type&lt;br /&gt;
!Conditions&lt;br /&gt;
!Renewal cycles&lt;br /&gt;
!Conditions&lt;br /&gt;
!Mandatory conditions&lt;br /&gt;
|-&lt;br /&gt;
|Faculty, Research Staff&lt;br /&gt;
|A&lt;br /&gt;
|Renews every year at the beginning of Fall semester&lt;br /&gt;
|Unlimited&lt;br /&gt;
|Non renewed accounts are disabled in 15 days from renewal date, data in home directory is removed after 90 days. Backup data  have rollover time of 30 days. &lt;br /&gt;
|CUNY EID, Valid CUNY e-mail&lt;br /&gt;
|-&lt;br /&gt;
|Adjunct Faculty &lt;br /&gt;
|B&lt;br /&gt;
|Renews every semester (Fall/Spring) &lt;br /&gt;
|Unlimited&lt;br /&gt;
|Non renewed accounts are disabled in 15 days from renewal date, data in home directory is removed after 90 days. Backup data  have rollover time of 30 days.  &lt;br /&gt;
|CUNY EID, Valid CUNY e-mail.  &lt;br /&gt;
|-&lt;br /&gt;
|Doctoral Graduate Students&lt;br /&gt;
|C&lt;br /&gt;
|Renews every year the beginning of Fall semester&lt;br /&gt;
|14&lt;br /&gt;
|Non renewed accounts are disabled in 15 days from renewal date, data in home directory is removed after 90 days. Backup data  have rollover time of 30 days.  &lt;br /&gt;
|CUNY EID, Valid CUNY E-mail, For &#039;&#039;&#039;PhD students the first E-mail is their GC E-mail address. Second mail is the college E-mail.&#039;&#039;&#039; &lt;br /&gt;
|-&lt;br /&gt;
|Master Students&lt;br /&gt;
|D&lt;br /&gt;
|Renews every semester (Fall/Spring)&lt;br /&gt;
|8&lt;br /&gt;
|Non renewed accounts are disabled in 15 days from renewal date, data in home directory is removed after 90 days. Backup data  have rollover time of 30 days.&lt;br /&gt;
|CUNY EID, Valid CUNY e-mail. First E-mail is the college E-mail address. &lt;br /&gt;
|-&lt;br /&gt;
|Undergraduate Students &lt;br /&gt;
|E&lt;br /&gt;
|Renews every semester (Fall/Spring)&lt;br /&gt;
|8&lt;br /&gt;
|Non renewed accounts are disabled in 7 days from renewal date, data and home directory are removed after 15 days. No backup of any data. &lt;br /&gt;
|CUNY EID, Valid CUNY e-mail. First E-mail is the college E-mail address. &#039;&#039;&#039;These accounts are for class only&#039;&#039;&#039;  &#039;&#039;&#039;Undergraduate students doing research work must have faculty sponsor and be registered in a project led by legitimate PI.&#039;&#039;&#039; &lt;br /&gt;
|-&lt;br /&gt;
|Academic Collaborators&lt;br /&gt;
|F&lt;br /&gt;
|Renews once a year (Fall) for the duration of a project&lt;br /&gt;
|Unlimited&lt;br /&gt;
|Non renewed accounts are disabled in 15 days from renewal date, data in home directory is removed after 90 days. Backup data  have rollover time of 30 days.  &lt;br /&gt;
|other institution EID, work e-mail and valid CUNY collaborator E-mail&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Public and Private Sector Partners&lt;br /&gt;
|G&lt;br /&gt;
|No Renewal. Good only for the duration of contract. &lt;br /&gt;
|NA&lt;br /&gt;
|Account expires at the date of expiring of the contract. &lt;br /&gt;
|state/federal ID, verified work e-mail.&#039;&#039;&#039;Advanced pay of full cost for rented resource.&#039;&#039;&#039;&lt;br /&gt;
|}  &lt;br /&gt;
&lt;br /&gt;
Users who missed renewal by fewer than 90 days should contact HPCC via e-mail at &#039;&#039;&#039;hpchelp@csi.cuny.edu&#039;&#039;&#039; for account recovery. All users must inform HPCC of changes in their academic status. It is mandatory to provide information (or NA) for all points in the list below. Please do not forget to provide information about past and pending &#039;&#039;&#039;&amp;lt;u&amp;gt;publications&amp;lt;/u&amp;gt;&#039;&#039;&#039; and funded projects, and &amp;lt;u&amp;gt;information about your locally available resources (local servers and workstations/desktops only).&amp;lt;/u&amp;gt;  Think carefully about the resources needed and try to estimate them as accurately as possible.  Note that &#039;&#039;&#039;by applying for and obtaining an account, the user agrees to the HPCC End User Policy (EUP) and the Mandatory Security Requirements for Access (MSRA).&#039;&#039;&#039;&lt;br /&gt;
{| class=&amp;quot;wikitable sortable mw-collapsible&amp;quot;&lt;br /&gt;
|+Required Information for opening of  HPCC account. Please provide information in all fields and/or mark NA when needed. &lt;br /&gt;
!&lt;br /&gt;
! rowspan=&amp;quot;26&amp;quot; |&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;&amp;lt;big&amp;gt;For All CUNY Faculty, Staff And Graduate Students&amp;lt;/big&amp;gt;&#039;&#039;&#039;  &#039;&#039;&#039;&amp;lt;big&amp;gt;(A ,B,C,D)&amp;lt;/big&amp;gt;&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Full name as stated at the CUNY ID card (e.g. John Samuel Doe):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;CUNY EID and valid CUNY e-mail (e.g. John A. Smith  22341356 jsmith@csi.cuny.edu):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|CUNY &#039;&#039;Academic status ( faculty, adjunct faculty, PhD student, MS student, research staff):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;Primary&#039;&#039;&#039; Affiliation within CUNY - campus name and Department ( e.g Hunter College, Biology):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;Secondary&#039;&#039;&#039; CUNY affiliation if any. Provide campus name and Department (e.g. Graduate Center, Biology):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Name, department and college affiliation of PI/Advisor (e.g. John Smith, Biology, Hunter College):&lt;br /&gt;
|-&lt;br /&gt;
|If out of College of Staten Island provide description of  local resources available. &lt;br /&gt;
:Please state NONE if you do not have access to local computational resources. Otherwise provide:&lt;br /&gt;
::College (e.g. Hunter)&lt;br /&gt;
::Type of resource (e.g. Department cluster):&lt;br /&gt;
:::&#039;&#039;- number of nodes:&#039;&#039;&lt;br /&gt;
:::&#039;&#039;- number of cores per node:&#039;&#039;&lt;br /&gt;
:::&#039;&#039;-- memory per node:&#039;&#039;&lt;br /&gt;
:::&#039;&#039;- total number of GPU for server:&#039;&#039;&lt;br /&gt;
:::&#039;&#039;- number of GPU per node:&#039;&#039;&lt;br /&gt;
:::&#039;&#039;-&#039;&#039; &#039;&#039;type of GPU (list of all types e.e. 2 x K20m, 4 x K80):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Resources needed for the project:&lt;br /&gt;
::- CPU Cores (e.g 1000):&lt;br /&gt;
::- GPU options (e.g 2 x V100/16 GB):&lt;br /&gt;
::- V100/16 GB -&lt;br /&gt;
::- V100/32 GB -  &lt;br /&gt;
::- L40/48 GB - &lt;br /&gt;
::- A30/24 GB -&lt;br /&gt;
::- A40/24 GB - &lt;br /&gt;
::- A100/40 GB - &lt;br /&gt;
::- A100/80 GB - &lt;br /&gt;
::- Storage Space (above 50 GB)&lt;br /&gt;
::- Backup of data (Y/N):&lt;br /&gt;
::- Archive of data (Y/N):&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;Title of the project:&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Short Description of the  project (up to 100 words): &lt;br /&gt;
|-&lt;br /&gt;
|Funding sources of the project (e.g. NSF grant #, CUNY):&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Consent that HPCC will be cited properly (see our wiki for details) in all your published work &amp;lt;u&amp;gt;including conference presentations, posters and talks.&amp;lt;/u&amp;gt;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Number of refereed publications relevant to the project:&lt;br /&gt;
|-&lt;br /&gt;
|Pending publication relevant to the project: &lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;big&amp;gt;&#039;&#039;&#039;&#039;&#039;For All External (not CUNY) Project Collaborators and Researchers (F,G)&#039;&#039;&#039;&#039;&#039;&amp;lt;/big&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Full name as stated at the state/federal ID or EID from other  Academic Institution (e.g. John Samuel Doe):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Affiliation outside CUNY if any(e.g. Rutgers University) and valid professional e-mail: ( e.g. John Doe, Rutgers University, jd@rutgers.edu):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Department at NON CUNY  Academic Institution (e.g. MIS Rutgers):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Non CUNY-email (collaborator/external contact):&lt;br /&gt;
|-&lt;br /&gt;
|Status of the collaborator (Academic: e.g. Professor; Partner: e.g. NVIDIA lab):&lt;br /&gt;
|-&lt;br /&gt;
|Status of the external researcher(s) (e.g. principal architect NVIDIA):&lt;br /&gt;
|-&lt;br /&gt;
|Resources needed (example: Cores 100; Time 10 000 hours Memory per core 8 GB, GPU cores  2 GPU hours 100 Storage 100GB):&lt;br /&gt;
|-&lt;br /&gt;
|Description of available local resources. Please state NONE if you do not have access to local computational resources. Otherwise provide:&lt;br /&gt;
&#039;&#039;type of computational  (cluster, advanced workstation):&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;- number of nodes:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;- number of cores per node:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;-- memory per node:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;- total number of GPU for server:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;- number of GPU per node:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;- type of GPU (list of all types e.e. 2 x K20m, 4 x K80):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Consent that HPCC will be cited properly (see our wiki for details) in all your published work &amp;lt;u&amp;gt;including conferences and talks.&amp;lt;/u&amp;gt;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&amp;lt;big&amp;gt;&#039;&#039;&#039;For All CUNY Graduate and Undergraduate Classes (E)&#039;&#039;&#039;&amp;lt;/big&amp;gt;&#039;&#039;&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Full name as stated on the CUNY ID card (e.g. John Samuel Doe):&#039;&#039;&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;CUNY EID (e.g. 22341356):&#039;&#039;&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;Valid CUNY e-mail&#039;&#039;&#039; (e.g. azho@cix.csi.cuny.edu). Public e-mail addresses are not accepted:  &lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|Class ID (e.g. CS 220):&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|Class Section (e.g. 02):&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|College (e.g. Baruch College):&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|Name of the Professor:&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|Term (e.g. Fall 2025): &lt;br /&gt;
!&lt;br /&gt;
|}&lt;br /&gt;
Upon creation, every research user account is provided with a 50 GB home directory (with a maximum of 10,000 files on /global/u) mounted as &amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;. If required, a user may request an increase in the size of their home directory; the HPC Center will endeavor to satisfy reasonable requests. If you expect to have more than 10,000 files, please combine small files into a single larger archive, as in the example below. Please keep only wrangled data in your space in order to optimize use of the existing storage. &lt;br /&gt;
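&lt;br /&gt;
For example, a directory of many small output files can be packed into a single archive so that it counts as one file against the quota (the paths and file names below are illustrative):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Illustrative paths: pack a directory of small files into one archive to stay under the 10,000-file quota.&lt;br /&gt;
cd /global/u/$USER&lt;br /&gt;
zip -r run42_outputs.zip run42/        # or: tar -czf run42_outputs.tar.gz run42/&lt;br /&gt;
rm -r run42/                           # remove the originals only after verifying the archive&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;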
&lt;br /&gt;
Student class accounts (group d) are provided with a 10 GB home directory.  Please note that class accounts and data will be deleted 30 days after the semester ends (unless otherwise agreed upon). Students are responsible for backing up their own data prior to the end of the semester.&lt;br /&gt;
 &lt;br /&gt;
When a user account is established, only the user has read/write access to their files.  The user can change the UNIX permissions to allow others in their group to read and/or write their files.&lt;br /&gt;
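&lt;br /&gt;
For instance, read access for the rest of the UNIX group can be granted with standard chmod commands (the directory name below is illustrative):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Illustrative directory name: give the group read access, keep everyone else out.&lt;br /&gt;
chmod -R g+rX,o-rwx /global/u/$USER/results&lt;br /&gt;
ls -ld /global/u/$USER/results         # verify the new permissions&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;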
&lt;br /&gt;
Please be sure to notify the HPC Center if user accounts need to be removed from or added to a specific research group. Please read the account policies below. Note that accounts are not permanent: accounts that are not accessed and not active are removed (see below).&lt;br /&gt;
&lt;br /&gt;
=== User accounts policies ===&lt;br /&gt;
CUNY-HPCC applies strict security standards to user account management. HPCC uses “account periods”. The account period is &#039;&#039;&#039;one year&#039;&#039;&#039; for accounts of types &#039;&#039;&#039;A, C, E and F&#039;&#039;&#039; and &#039;&#039;&#039;one semester&#039;&#039;&#039; for accounts of types &#039;&#039;&#039;b&#039;&#039;&#039; and &#039;&#039;&#039;d&#039;&#039;&#039;. All accounts are periodically reviewed and inactive accounts are removed. All student accounts expire automatically and will be removed after each semester unless the student’s advisor requests an extension of the student’s account. All user accounts in groups &#039;&#039;&#039;A, C, E and F&#039;&#039;&#039; must be renewed once a year by September 30th. All user accounts in groups &#039;&#039;&#039;b&#039;&#039;&#039; and &#039;&#039;&#039;d&#039;&#039;&#039; must be renewed within 2 weeks after each semester. Accounts not accessed for one account period and/or not renewed are automatically disabled/locked and will be deleted 60 days after locking. Deletion of an account means unrecoverable removal of all data associated with that account.  &lt;br /&gt;
&lt;br /&gt;
===Reset Password ===&lt;br /&gt;
&lt;br /&gt;
Users must use the automatic password reset system.  Click on [https://hpcauth1.csi.cuny.edu/reset/ Reset Password].  Upon resetting, users will receive their individual security token at the e-mail address registered with HPCC.&lt;br /&gt;
&lt;br /&gt;
===Close of account===&lt;br /&gt;
If a user would like to close their account, please contact the CUNY HPC Center at HPCHelp@csi.cuny.edu. &lt;br /&gt;
Supervisors who would like to modify the access of researchers and/or students working for them should contact the HPC Center to remove, add or modify access.&lt;br /&gt;
User accounts that are not accessed or renewed for more than a year and one day will be purged along with any data associated with the account. User accounts that are not renewed on time will be locked, and users must contact HPCC to have access restored. &lt;br /&gt;
&lt;br /&gt;
=== Message of the day (MOTD) ===&lt;br /&gt;
Users are encouraged to read the &amp;quot;Message of the day&amp;quot; (MOTD), which is displayed upon logging onto a system.  The MOTD provides information on scheduled maintenance times when systems will be unavailable and/or important changes in the environment that affect the user community.  The MOTD is the HPC Center’s only efficient mechanism for communicating with the broader user community, as bulk e-mail messages are often blocked by CUNY SPAM filters.&lt;br /&gt;
&lt;br /&gt;
=== Required citations ===&lt;br /&gt;
The CUNY HPC Center appreciates the support it has received from the National Science Foundation (NSF).  It is the policy of NSF that researchers who are funded by NSF or who make use of facilities funded by NSF acknowledge the contribution of NSF by including the following citation in their papers and presentations:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;This research was supported, in part, under National Science Foundation Grants: CNS-0958379, CNS-0855217, ACI-1126113 and OAC-2215760 (2022) and the City University of New York High Performance Computing Center at the College of Staten Island.&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The HPC Center, therefore, requests its users to follow this procedure as it helps the Center to demonstrate that NSF’s investments aided the research and educational missions of the University.&lt;br /&gt;
&lt;br /&gt;
== Reporting requirements ==&lt;br /&gt;
The Center reports on its support of the research and educational community to both funding agencies and CUNY on an annual basis.  Citations are an important factor included in these reports.  Therefore, it is mandatory for users to send copies of research papers developed, in part, using HPC Center resources to [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu]. Accounts of users who violate this requirement may not be renewed.  Reporting results obtained with HPC resources also helps the Center to keep abreast of user research directions and needs. &lt;br /&gt;
&lt;br /&gt;
== Funding of computational resources and storage ==&lt;br /&gt;
Systems at the HPC Center are purchased with grants from the National Science Foundation (NSF), grants from NYC, a grant from DASNY, and a grant from the Office of the CUNY CIO. In addition, all systems in the condo tier are purchased with direct funds from research groups. The largest financial support comes from &#039;&#039;&#039;NSF MRI grants (more than 80% of all funding).&#039;&#039;&#039;  CUNY&#039;s own investment constitutes &#039;&#039;&#039;8.6%&#039;&#039;&#039; of all funds.  Below is the list of all grants for CUNY-HPCC.  &lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;PFSS and GPU Nodes:&#039;&#039;&#039; NSF Grant OAC-2215760 (operational) &lt;br /&gt;
:&#039;&#039;&#039;DSMS&#039;&#039;&#039;, NSF Grant ACI-1126113 (server is partially retired) &lt;br /&gt;
:&#039;&#039;&#039;BLUE MOON&#039;&#039;&#039;, Grant NYC 042-ST030-015 (operational)&lt;br /&gt;
:&#039;&#039;&#039;CRYO&#039;&#039;&#039;, Grant DASNY 208684-000 OP (operational) &lt;br /&gt;
:&#039;&#039;&#039;ANDY&#039;&#039;&#039;, NSF Grant CNS-0855217 and the New York City Council through the efforts of Borough President James Oddo (server is fully retired)&lt;br /&gt;
:&#039;&#039;&#039;APPEL&#039;&#039;&#039;, New York State Regional Economic Development Grant through the efforts of State Senator Diane Savino (operational)&lt;br /&gt;
:&#039;&#039;&#039;PENZIAS&#039;&#039;&#039;, The Office of the CUNY Chief Information Officer (server is partially retired)&lt;br /&gt;
:&#039;&#039;&#039;SALK&#039;&#039;&#039;, NSF Grant CNS-0958379 and a New York State Regional Economic Development Grant (Server is fully retired)&lt;/div&gt;</summary>
		<author><name>Alex</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Training_%26_Workshops&amp;diff=992</id>
		<title>Training &amp; Workshops</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Training_%26_Workshops&amp;diff=992"/>
		<updated>2026-03-23T19:31:53Z</updated>

		<summary type="html">&lt;p&gt;Alex: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The CUNY HPCC provides training courses and organizes seminars on various HPC topics.  The training courses are provided at no cost and may be held at any CUNY campus site, at the CUNY HPCC at the College of Staten Island, or at the Graduate Center.  The training course at the Graduate Center, and its &#039;&#039;&#039;online&#039;&#039;&#039; version, is a course on parallel programming and the use of HPC architectures.  The on-site course takes place if enough students express interest. It covers various topics, from basic SLURM scripting to basic GPU programming to intermediate parallel programming with MPI and OpenACC. Please note that lectures cover each topic systematically, so a particular topic may be discussed in several lectures. Users who want to attend the course should send an e-mail to [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu] and ask for registration. All participants will get a student account on CUNY-HPCC servers unless they already have one.    &lt;br /&gt;
&lt;br /&gt;
In addition, HPCC provides in-person and Zoom consultations with individuals or small groups of users every Wednesday, 11 AM to 3 PM.  Interested users should register by sending an e-mail to alex.tzanov@csi.cuny.edu by Monday of the same week. These consultations should help new users and those with no experience to start quickly with HPCC resources. At that time users may discuss their particular problems and get guidance in developing their own parallel scientific code(s).  Please send a mail to [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu] or to [mailto:alexander.tzanov@csi.cuny.edu alexander.tzanov@csi.cuny.edu] for available time slots no later than 3 PM on Monday.  HPCC will make every effort to accommodate all users, so any time slot may be shared by several users. &lt;br /&gt;
&lt;br /&gt;
For any additional information, please send an e-mail to [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu]. Every Wednesday between 11 AM and 2 PM, HPCC conducts a remote help session/consultation. Please send a request for an invite to hpchelp@csi.cuny.edu. &amp;lt;u&amp;gt;&#039;&#039;&#039;&#039;&#039;Note that the consultation is open only to CUNY faculty, students and staff.&#039;&#039;&#039;&#039;&#039;&amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h2&amp;gt;Schedule &amp;lt;/h2&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
CUNY High-performance Computing Center (HPCC) provides Help Desk/Consultation support and lectures at the Graduate Center on programming and using Unix-based HPC cluster systems.&amp;lt;br/&amp;gt;&lt;br /&gt;
Dr. Alex Tzanov will conduct the lectures and consultations. Please see the schedule below.  &amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;table border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;0&amp;quot; width=&amp;quot;621&amp;quot;&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;72&amp;quot; rowspan=&amp;quot;2&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;p align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Date&amp;lt;/strong&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
      &amp;lt;strong&amp;gt; &amp;lt;/strong&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;p align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;&amp;amp;nbsp;&amp;lt;/strong&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;48&amp;quot; rowspan=&amp;quot;2&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;p align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Day&amp;lt;/strong&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;303&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;Lecture&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;188&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;Consultation&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;303&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;Room 4434, 10 AM - 12 PM&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;188&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;Room 4411, GC&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;72&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;p align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;&amp;amp;nbsp;&amp;lt;/strong&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;48&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;p align=&amp;quot;center&amp;quot;&amp;gt;&amp;amp;nbsp;&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;303&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;p align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;188&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;p align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;72&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;48&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;p align=&amp;quot;center&amp;quot;&amp;gt;WED&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;303&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;p align=&amp;quot;center&amp;quot;&amp;gt;Introduction to HPC and HPCC&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;188&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;1 PM - 5 PM&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;72&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;48&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;p align=&amp;quot;center&amp;quot;&amp;gt;WED&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;303&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;p align=&amp;quot;center&amp;quot;&amp;gt;Introduction to parallel programming &amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;188&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;1 PM - 5 PM&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;72&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;48&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;p align=&amp;quot;center&amp;quot;&amp;gt;WED&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;303&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;p align=&amp;quot;center&amp;quot;&amp;gt;Distributed parallel programming with MPI part 1&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;188&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;1 PM - 5 PM&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;72&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;48&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;p align=&amp;quot;center&amp;quot;&amp;gt;WED&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;303&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;p align=&amp;quot;center&amp;quot;&amp;gt;Distributed Parallel programming with MPI part 2&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;188&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;1 PM - 5 PM&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;72&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;48&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;p align=&amp;quot;center&amp;quot;&amp;gt;WED&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;303&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;p align=&amp;quot;center&amp;quot;&amp;gt;Distributed Parallel programming with MPI part 3&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;188&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;1 PM - 5 PM&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;72&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;48&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;p align=&amp;quot;center&amp;quot;&amp;gt;WED&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;303&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;p align=&amp;quot;center&amp;quot;&amp;gt;Distributed Parallel programming with MPI part 4&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;188&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;1 PM - 5 PM&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;72&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;48&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;p align=&amp;quot;center&amp;quot;&amp;gt;WED&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;303&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;p align=&amp;quot;center&amp;quot;&amp;gt;Distributed Parallel   programming with MPI part 5&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;188&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;1 PM - 5 PM&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;72&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;48&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;p align=&amp;quot;center&amp;quot;&amp;gt;WED&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;303&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;p align=&amp;quot;center&amp;quot;&amp;gt;Distributed Parallel programming with MPI – Hands on&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;188&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;1 PM - 5 PM&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;72&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;48&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;p align=&amp;quot;center&amp;quot;&amp;gt;WED&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;303&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;p align=&amp;quot;center&amp;quot;&amp;gt;GPGPU programming part 1.&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;188&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;1 PM - 5 PM&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;72&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;48&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;p align=&amp;quot;center&amp;quot;&amp;gt;WED&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;303&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;p align=&amp;quot;center&amp;quot;&amp;gt;GPGPU programming part 2&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;188&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;1 PM - 5 PM&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;72&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;48&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;p align=&amp;quot;center&amp;quot;&amp;gt;WED&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;303&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;p align=&amp;quot;center&amp;quot;&amp;gt;GPGPU - hands on &amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;188&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;1 PM - 5 PM&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;72&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;48&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;p align=&amp;quot;center&amp;quot;&amp;gt;WED&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;303&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;p align=&amp;quot;center&amp;quot;&amp;gt;Easy GPU programming with OpenACC  part 1. &amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;188&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;1 PM - 5 PM&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;72&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;48&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;p align=&amp;quot;center&amp;quot;&amp;gt;WED&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;303&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;p align=&amp;quot;center&amp;quot;&amp;gt;Easy GPU programming with  OpenACC  part 2 &amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;td width=&amp;quot;188&amp;quot; valign=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;p align=&amp;quot;center&amp;quot;&amp;gt;ONLINE&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
 &amp;lt;/table&amp;gt;&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Apart from that, the HPCC provides a short introductory course at the Graduate Center for new users. The course covers HPCC structure&lt;br /&gt;
and workflow, HPC server information, basic SLURM scripting, basic Linux and Unix commands, how to compile and run programs on HPCC&lt;br /&gt;
servers, and the basics of the data storage and management system. For more information please contact hpchelp@csi.cuny.edu.&lt;/div&gt;</summary>
		<author><name>Alex</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=990</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=990"/>
		<updated>2026-03-14T15:27:23Z</updated>

		<summary type="html">&lt;p&gt;Alex: /* Recovery of  operational costs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
[[Image:hpcc-panorama3.png]]&lt;br /&gt;
&lt;br /&gt;
== Mission of CUNY-HPCC ==&lt;br /&gt;
The City University of New York (CUNY) High Performance Computing Center (HPCC) is a core research and education facility for the University. It is located on the campus of the College of Staten Island, 2800 Victory Boulevard, Staten Island, New York 10314. The core mission of CUNY-HPCC is to advance education and boost scientific research and discovery at the University by managing state-of-the-art computing infrastructure and by providing comprehensive research support services, including domain-specific expertise in various aspects of computationally intensive research. CUNY is a member of the Empire AI (EAI) consortium, so CUNY-HPCC is a stepping stone for CUNY researchers toward EAI advanced facilities.  &lt;br /&gt;
&lt;br /&gt;
== Empire AI and CUNY-HPCC ==&lt;br /&gt;
The Empire AI consortium includes the &#039;&#039;&#039;CUNY Graduate Center&#039;&#039;&#039;, Columbia University, Cornell University, Icahn School of Medicine, New York University, Rochester Institute of Technology, Rensselaer Polytechnic Institute, the State University of New York, University at Buffalo, and University of Rochester.  &#039;&#039;&#039;CUNY-HPCC provides support and maintains tickets for all CUNY users with an allocation on EAI.  In addition, CUNY-HPCC is a stepping stone for CUNY researchers since it operates (on a smaller scale) architectures similar to EAI (including nodes with Hopper), namely the extended &amp;quot;Alpha&amp;quot; server as well as the new &amp;quot;Beta&amp;quot; computer.&#039;&#039;&#039; The latter will consist of 288 B200 GPUs and recently added RTX 6000 Pro nodes. The expected cost for &#039;&#039;&#039;EAI is $0.50/unit (SU),&#039;&#039;&#039; which will provide CUNY PIs with a rate that is well below a typical AWS rate.  One SU corresponds to one hour of H100 compute, and one hour of B200 compute corresponds to two SU.  In comparison, the CUNY-HPCC recovery costs for public servers are &#039;&#039;&#039;$0.015 per CPU hour (1 unit) and $0.09 per GPU hour (6 units).&#039;&#039;&#039;  See the section on HPCC access plans for further details.    &lt;br /&gt;
&lt;br /&gt;
== CUNY-HPCC services ==&lt;br /&gt;
CUNY-HPCC provides a professionally maintained, modern computational environment and architectures, advanced storage, and fast interconnects. CUNY-HPCC:     &lt;br /&gt;
&lt;br /&gt;
:*Supports research computing at CUNY - for faculty, their collaborators at other universities, and their public and private sector partners, and CUNY students and research staff.&lt;br /&gt;
:*Provides state-of-the-art computing resources and comprehensive research support services including expertise and full support for users with allocation on EMPIRE-AI. &lt;br /&gt;
:*Creates opportunities for the CUNY research community to develop new partnerships with the government and private sectors.&lt;br /&gt;
:*Leverages HPCC capabilities to acquire additional research resources for its faculty and graduate students in existing and major new programs.&lt;br /&gt;
:*Maintains tickets for all CUNY users with allocation on EAI.&lt;br /&gt;
&lt;br /&gt;
== Organization of systems and data storage (architecture) ==&lt;br /&gt;
All user data and project data are kept on the Parallel File System Storage (PFSS), which is mounted only on the login node(s) of all servers. It holds both user directories and a specific partition called &#039;&#039;&#039;/scratch&#039;&#039;&#039; (see below). The main features of the &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition are&#039;&#039;&#039;: 1.&#039;&#039;&#039; it is mounted on all computational nodes and on all login nodes; &#039;&#039;&#039;2.&#039;&#039;&#039; it is fast; &#039;&#039;&#039;3.&#039;&#039;&#039; it is temporary space, &#039;&#039;&#039;not a home directory&#039;&#039;&#039; for accounts, and it cannot be used for long-term data preservation.  Users must use the &amp;quot;staging&amp;quot; procedure described below to ensure preservation of their data, codes and parameter files. The figure below is a schematic of the environment.   &lt;br /&gt;
&lt;br /&gt;
Upon registering with HPCC, every user will get 2 directories:&lt;br /&gt;
&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – temporary workspace on the HPC systems. Currently, scratch resides on the same file system as /global/u.&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – the “home directory”, i.e., storage space on the DSMS for programs, scripts, and data.&lt;br /&gt;
&lt;br /&gt;
:•	In some instances a user will also have use of disk space on the DSMS in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;projectid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; (iRODS). &lt;br /&gt;
[[File:HPCC_structure.png|center|frameless|900x900px]]&lt;br /&gt;
The &#039;&#039;&#039;/global/u/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory has a quota (see below for details) while &#039;&#039;&#039;/scratch/&amp;lt;userid&amp;gt;&#039;&#039;&#039; does not. However, the &#039;&#039;&#039;/scratch&#039;&#039;&#039; space is cleaned up following the rules described below. There are no guarantees of any kind that files in &#039;&#039;&#039;/scratch&#039;&#039;&#039; will be preserved during hardware crashes or clean-ups.  Access to all HPCC resources is provided by a bastion host called &#039;&#039;&#039;chizen&#039;&#039;&#039;.  The Data Transfer Node called &#039;&#039;&#039;Cea&#039;&#039;&#039; allows file transfer from/to remote sites directly to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; or to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;.&lt;br /&gt;
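&lt;br /&gt;
A minimal sketch of the staging idea is shown below (all paths, file and program names are illustrative): copy inputs from the home directory to /scratch, run the job there, and copy the results back for safekeeping.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Minimal staging sketch; all paths, file and program names are illustrative.&lt;br /&gt;
cp /global/u/$USER/myproject/input.dat /scratch/$USER/     # stage input data to /scratch&lt;br /&gt;
cd /scratch/$USER&lt;br /&gt;
./run_simulation input.dat                                 # run the job from /scratch&lt;br /&gt;
cp /scratch/$USER/output.dat /global/u/$USER/myproject/    # copy results back; /scratch is cleaned periodically&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;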
&lt;br /&gt;
==Computing architectures at HPCC==&lt;br /&gt;
&lt;br /&gt;
The HPC Center operates a variety of architectures in order to support complex and demanding workflows.  All computational resources of different types are united into a single hybrid cluster called Arrow. The latter deploys symmetric multiprocessor (also referred to as SMP) nodes with and without GPUs, a distributed shared memory (NUMA) node, fat (large memory) nodes, and advanced SMP nodes with multiple GPUs. The number of GPUs per node varies between 2 and 8, as do the GPU interface and GPU family. Thus the basic GPU nodes hold two Tesla K20m GPUs (attached through a PCIe interface) while the most advanced ones support eight Ampere A100 GPUs connected via the SXM interface.   &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Overview of Computational architectures&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SMP&#039;&#039;&#039; servers have several processors (working under a single operating system) which &amp;quot;share everything&amp;quot;: all CPU cores access a common memory block via a shared bus or data path. SMP servers support all combinations of memory vs. CPU (up to the limits of the particular computer). SMP servers are commonly used to run sequential or thread-parallel (e.g. OpenMP) jobs, and they may or may not have GPUs.  &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;cluster&#039;&#039;&#039; is defined as a single system comprising a set of servers interconnected with a high performance network. Specific software coordinates programs on and/or across these servers in order to perform computationally intensive tasks. The most common cluster type consists of several identical SMP servers connected via a fast interconnect.  Each SMP member of the cluster is called a &#039;&#039;&#039;node&#039;&#039;&#039;. All nodes run independent copies of the same operating system (OS). Some or all of the nodes may incorporate GPUs.  &lt;br /&gt;
&lt;br /&gt;
Hybrid clusters combine nodes of different architectures. For instance, the main CUNY-HPCC machine is a hybrid cluster called &#039;&#039;&#039;Arrow&#039;&#039;&#039;.  Sixty-two (62) of its nodes are identical GPU-enabled SMP servers, each with 2 x K20m GPUs; 3 are SMP servers with extended memory (fat nodes); one node is a distributed shared memory node (NUMA, see below); and 2 are fat SMP servers specially designed to support 8 NVIDIA GPUs per node, connected via the SXM interface. In addition, HPCC operates the cluster &#039;&#039;&#039;Herbert&#039;&#039;&#039;, dedicated only to education. &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;distributed shared memory&#039;&#039;&#039; computer is a tightly coupled server in which the memory is physically distributed but logically unified as a single block. The system resembles an SMP, but the possible number of CPU cores and amount of memory are far beyond the limitations of SMP.  Because the memory is distributed, access times across the address space are non-uniform; this architecture is therefore called Non-Uniform Memory Access (NUMA) architecture.  Similarly to SMP, &#039;&#039;&#039;NUMA&#039;&#039;&#039; systems are typically used for applications such as data mining and decision support systems, in which processing can be parceled out to a number of processors that collectively work on common data. HPCC operates the &#039;&#039;&#039;NUMA&#039;&#039;&#039; node in Arrow named &#039;&#039;&#039;Appel&#039;&#039;&#039;.  This node does not have GPUs. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039; Infrastructure systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
o	Master Head Node (&#039;&#039;&#039;MHN/Arrow&#039;&#039;&#039;) is a redundant login node from which all jobs on all servers start. This server is not directly accessible from outside the CSI campus. Note that the main server and its login nodes share the same name, Arrow; thus users can access the Arrow login nodes using the name Arrow or MHN.  &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Chizen&#039;&#039;&#039; is a redundant gateway server which provides access to the protected HPCC domain.&lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Cea&#039;&#039;&#039; is a file transfer node allowing transfer of files between users’ computers and the /scratch space or /global/u/&amp;lt;userid&amp;gt;. &#039;&#039;&#039;Cea&#039;&#039;&#039; is accessible directly (not only via &#039;&#039;&#039;Chizen&#039;&#039;&#039;), but allows only a limited set of shell commands (see the example below).  &lt;br /&gt;
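&lt;br /&gt;
For example, a file can be copied from a local workstation through the data transfer node with a standard scp command; the host name and paths below are placeholders, so substitute the Cea address and user ID provided with your account:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Placeholder host name and paths; substitute the Cea address given with your account.&lt;br /&gt;
scp mydata.tar.gz &amp;lt;userid&amp;gt;@&amp;lt;cea-address&amp;gt;:/scratch/&amp;lt;userid&amp;gt;/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;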
&lt;br /&gt;
&#039;&#039;&#039;Table 1&#039;&#039;&#039; below provides a quick summary of the attributes of each of the sub-clusters of the main HPC Center cluster called Arrow.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Master Head Node&lt;br /&gt;
!Sub System&lt;br /&gt;
!Tier&lt;br /&gt;
!Type&lt;br /&gt;
!Type of Jobs&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores&lt;br /&gt;
!GPUs&lt;br /&gt;
!Mem/node&lt;br /&gt;
!Mem/core&lt;br /&gt;
!Chip Type&lt;br /&gt;
!GPU Type and Interface&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;17&amp;quot; |&#039;&#039;&#039;&amp;lt;big&amp;gt;Arrow&amp;lt;/big&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Penzias&lt;br /&gt;
| rowspan=&amp;quot;10&amp;quot; |Advanced&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Hybrid Cluster&lt;br /&gt;
|Sequential &amp;amp; Parallel jobs w/wo GPU&lt;br /&gt;
|66&lt;br /&gt;
|16 &lt;br /&gt;
|2&lt;br /&gt;
|64 GB&lt;br /&gt;
|4 GB&lt;br /&gt;
|SB, EP 2.20 GHz&lt;br /&gt;
|K20m GPU, PCIe v2&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Sequential &amp;amp; Parallel jobs&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |1&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|1500 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|36&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|32 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Appel&lt;br /&gt;
|NUMA&lt;br /&gt;
|Massive Parallel, sequential, OpenMP&lt;br /&gt;
|1&lt;br /&gt;
|384&lt;br /&gt;
| -&lt;br /&gt;
|11 TB&lt;br /&gt;
|28 GB&lt;br /&gt;
|IB, 3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Cryo&lt;br /&gt;
|SMP&lt;br /&gt;
|Sequential and Parallel jobs, with GPU&lt;br /&gt;
|1&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|1500 GB&lt;br /&gt;
|37 GB&lt;br /&gt;
|SL, 2.40 GHz&lt;br /&gt;
|V100 (32GB) GPU, SXM&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Blue Moon&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Hybrid Cluster&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Sequential and Parallel jobs w/wo GPU&lt;br /&gt;
|24&lt;br /&gt;
|32&lt;br /&gt;
| -&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |192 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |6 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SL, 2.10 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|32 &lt;br /&gt;
|2&lt;br /&gt;
|V100(16GB) GPU, PCIe&lt;br /&gt;
|-&lt;br /&gt;
|Karle&lt;br /&gt;
|SMP&lt;br /&gt;
|Visualization, MATLAB/Mathematica&lt;br /&gt;
|1&lt;br /&gt;
|36*&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Chizen&lt;br /&gt;
|Gateway&lt;br /&gt;
|No jobs allowed&lt;br /&gt;
| colspan=&amp;quot;7&amp;quot; | -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CFD&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot; |Parallel, Seq, OpenMP &lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|768 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 4.8 GHz&lt;br /&gt;
|A40, PCIe, v4 &lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |PHYS&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|640 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4 GHz&lt;br /&gt;
|L40, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CHEM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.0 GHz&lt;br /&gt;
|A100/40, SXM&lt;br /&gt;
|-&lt;br /&gt;
|ASRC&lt;br /&gt;
|Condo&lt;br /&gt;
|SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|}&lt;br /&gt;
Note: SB = Intel(R) Sandy Bridge, HL = Intel(R) Haswell, IB = Intel(R) Ivy Bridge, SL = Intel(R) Xeon(R) Gold, ER = AMD(R) EPYC Rome, EM = AMD(R) EPYC Milan, EG = AMD(R) EPYC Genoa   &lt;br /&gt;
&lt;br /&gt;
= Recovery of operational costs =&lt;br /&gt;
CUNY-HPCC is a not-for-profit core research facility at CUNY. Our mission is to support all types of research that require advanced computational resources. CUNY-HPCC operations are not for profit and are NOT directly or indirectly sponsored by CUNY or the College of Staten Island (CSI). Consequently, CUNY-HPCC applies a cost recovery model recapturing only &#039;&#039;&#039;&amp;lt;u&amp;gt;operational costs, with no profit for HPCC&amp;lt;/u&amp;gt;&#039;&#039;&#039;. The recovered costs are calculated using actual documented operational expenses and are break-even for all CUNY users. The methodology is approved by CUNY-RF and is the same methodology used in other CUNY research facilities. The costs are reviewed and updated twice a year. The cost recovery charging scheme is based on the &#039;&#039;&#039;&amp;lt;u&amp;gt;unit-hour&amp;lt;/u&amp;gt;&#039;&#039;&#039;. A unit can be either a CPU unit or a GPU unit. The definitions are given in the table below:&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Definitions of unit-hour&lt;br /&gt;
!Type of resource&lt;br /&gt;
!Unit-hour &lt;br /&gt;
!For V100, A30, A40 or L40&lt;br /&gt;
!For A100&lt;br /&gt;
|-&lt;br /&gt;
|CPU unit &lt;br /&gt;
|1 cpu core/hour&lt;br /&gt;
| --&lt;br /&gt;
| --&lt;br /&gt;
|-&lt;br /&gt;
|GPU unit &lt;br /&gt;
|(4 cpu cores + 1 GPU thread )/hour&lt;br /&gt;
|4 cpu cores + 1 GPU&lt;br /&gt;
|4 cpu cores and 1/7 A100 &lt;br /&gt;
|}&lt;br /&gt;
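&lt;br /&gt;
The following minimal Python sketch illustrates how the unit-hour definition above decomposes a resource request; it is an illustrative reading of the table, not an official HPCC billing tool.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Illustrative sketch only (assumption: a GPU unit bundles 4 CPU cores with&lt;br /&gt;
# 1 GPU, and any remaining cores are billed as plain CPU units).&lt;br /&gt;
def units_for_request(cores, gpus):&lt;br /&gt;
    gpu_units = gpus&lt;br /&gt;
    cpu_units = max(cores - 4 * gpus, 0)&lt;br /&gt;
    return cpu_units, gpu_units&lt;br /&gt;
&lt;br /&gt;
# Example: 16 cores and 2 GPUs decompose into 8 CPU units and 2 GPU units.&lt;br /&gt;
assert units_for_request(16, 2) == (8, 2)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;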
&lt;br /&gt;
=== Compute on public resources ===&lt;br /&gt;
The cost recovery model for public (non-condominium) servers offers the following options:&lt;br /&gt;
&lt;br /&gt;
# Minimal Access Plan (MAP)&lt;br /&gt;
# Compute on Demand (CODP)&lt;br /&gt;
# Lease a node(s)&lt;br /&gt;
&lt;br /&gt;
==== Minimal Access Plan (MAP) ====&lt;br /&gt;
Colleges may participate in any of the Minimal Access Plan tiers. The tiered pricing structure is as follows:&lt;br /&gt;
&lt;br /&gt;
- Tier A: $5,000 per year&lt;br /&gt;
&lt;br /&gt;
- Tier B: $10,000 per year&lt;br /&gt;
&lt;br /&gt;
- Tier C: $25,000 per year&lt;br /&gt;
&lt;br /&gt;
Within each tier, the cost is &#039;&#039;&#039;$0.015 per CPU hour and $0.09 per GPU hour.&#039;&#039;&#039; It is important to note that the Minimal Access Plan (MAP) tiers A, B, and C are &#039;&#039;&#039;not all-inclusive.&#039;&#039;&#039; Therefore, even if a college pays for a higher tier, it does not guarantee unlimited use by all of that college&#039;s employees, faculty, and students throughout the year.&lt;br /&gt;
&lt;br /&gt;
# The MAP plan is indirectly linked to the number of hours used. Consequently, the definition of “up to 12 users for B tier” should not be interpreted as “all users up to 12 per college receive unlimited access to HPCC resources.”&lt;br /&gt;
# The number of users per tier is determined by statistical analysis of resource usage and statistics for the average duration of a job across all CUNY institutions. This means that if the number of users from a college exceeds the number of hours encoded in the MAP fee, the additional hours will be charged at the preferred rate of $0.015 per CPU hour and $0.09 per GPU thread hour.&lt;br /&gt;
# Furthermore, the &amp;lt;u&amp;gt;MAP fee may fully cover the expenses for individuals from a given college for a year,&amp;lt;/u&amp;gt; but it may not. This depends on the actual usage and types of resources, as well as on the number of additional individuals from the same college who appear during the year. It is crucial to understand that allocation at HPCC is not restricted; our focus is solely on the completion of tasks.&lt;br /&gt;
# The free 11,520 CPU hours and 1,440 GPU hours are allocated per PI account and project and are available only for MAP-B and MAP-C (see below). These free hours are intended to facilitate project establishment, so the PI can request that one or more group members use the time to explore new research opportunities. For example, upon creating an account the PI receives the free hours; if the PI then hires graduate students, they may share these free hours provided they work on the same project.&lt;br /&gt;
# It is important to note that each GPU requires at least four CPU cores to operate. Therefore, a request for one GPU thread equates to one GPU thread plus four CPU cores, which corresponds to $0.15 per unit-hour for that unit (units are explained above). Note that not all GPUs support virtualization, so a unit may include the whole GPU, depending on the GPU type.&lt;br /&gt;
&lt;br /&gt;
==== Compute on demand (CODP) ====&lt;br /&gt;
Users from colleges that do not participate in MAP (A,B,C) are charged &#039;&#039;&#039;$0.018 per CPU hour and $0.11 per GPU hour.&#039;&#039;&#039; There is no free time associated with CODP.  &lt;br /&gt;
&lt;br /&gt;
==== Lease a public node(s) ====&lt;br /&gt;
Users may lease node(s) for a project. This ensures 100% access with no time or job limitations on the leased resource. The minimum lease time is 30 days (one month). Longer leases (more than 90 days) receive a 10% discount. &#039;&#039;&#039;MAP users&#039;&#039;&#039; are charged between $172 and $950 per month (see below), depending on the type of node. &#039;&#039;&#039;Non-MAP&#039;&#039;&#039; users are charged between $249.58 and $1,399 per month, depending on the type of node. Please see below for details and examples.&lt;br /&gt;
&lt;br /&gt;
=== Condo ===&lt;br /&gt;
Condo users pay only for infrastructure support. The annual fee depends on the type of node and ranges from $1,540 to $4,520 per year. Please see the table below for details.&lt;br /&gt;
&lt;br /&gt;
=== Storage ===&lt;br /&gt;
Storage costs are $60 per TB per year, backup costs are $45 per TB per year, and archive costs are $35 per TB per year. The first 50 GB of scratch storage are free. Prices are calculated at the end of each month.&lt;br /&gt;
&lt;br /&gt;
# Additionally, note that &#039;&#039;&#039;file transfers from/to HPCC are free&#039;&#039;&#039;, but it is important to consider the CUNY network speed, which is significantly slower than modern standards. This will affect the time required to download large data sets. For large data sets, HPCC uses and recommends secure parallel transfer via Globus.&lt;br /&gt;
# All services associated with data and storage provided by HPCC are free for CUNY users.&lt;br /&gt;
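&lt;br /&gt;
The sketch below turns the storage rates above into a rough annual estimate; it is illustrative only, since actual charges are computed by HPCC at the end of each month.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Illustrative sketch only; rates taken from the Storage section above.&lt;br /&gt;
STORAGE_RATE = 60   # dollars per TB per year&lt;br /&gt;
BACKUP_RATE = 45    # dollars per TB per year&lt;br /&gt;
ARCHIVE_RATE = 35   # dollars per TB per year&lt;br /&gt;
&lt;br /&gt;
def yearly_storage_cost(storage_tb, backup_tb=0, archive_tb=0):&lt;br /&gt;
    return (storage_tb * STORAGE_RATE&lt;br /&gt;
            + backup_tb * BACKUP_RATE&lt;br /&gt;
            + archive_tb * ARCHIVE_RATE)&lt;br /&gt;
&lt;br /&gt;
# Example: 2 TB of storage plus 2 TB of backup is 210 dollars per year.&lt;br /&gt;
assert yearly_storage_cost(2, backup_tb=2) == 210&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;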
&lt;br /&gt;
=== HPCC access plans details and examples  ===&lt;br /&gt;
a.     &#039;&#039;&#039;Minimum access (MAP):&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Minimum access is designed to provide broad support for research activities at any college, to promote collaboration between colleges, to help establish new research projects, and/or to serve as a testbed for new studies. MAP accounts operate under a strict fair share policy, so the actual waiting time for a job in a queue depends on the resources used by that account in previous cycles. In addition, all jobs have strict time limitations, so long jobs must use checkpoints.&lt;br /&gt;
&lt;br /&gt;
The MAP has 3 tiers:&lt;br /&gt;
&lt;br /&gt;
·     A: The basic tier fee is $5,000 per year. It is designed to support users from colleges with a low level of research activity. The fee covers the infrastructure expenses associated with 1-2 users from these colleges.&lt;br /&gt;
&lt;br /&gt;
·     B: The medium tier fee is $15,000 per year. The fee covers the infrastructure expenses of up to 12 users from these colleges. In addition, every account under the medium tier gets 11,520 free CPU hours and 1,440 free GPU hours upon opening.&lt;br /&gt;
&lt;br /&gt;
·     C: The advanced tier fee is $25,000 per year. The fee covers the infrastructure expenses for all users from these colleges. In addition, every new account from this tier gets 11,520 free CPU hours and 1,440 free GPU hours upon opening.&lt;br /&gt;
&lt;br /&gt;
MAP users are charged per CPU/GPU hour at the low rate of &#039;&#039;&#039;&amp;lt;u&amp;gt;$0.015 per CPU hour and $0.09 per GPU hour&amp;lt;/u&amp;gt;.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for MAP users example&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.015/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.24/hour&lt;br /&gt;
|-&lt;br /&gt;
| 4 cores + 1 GPU &lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.15/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.33/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2  &lt;br /&gt;
|$0.42/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2  &lt;br /&gt;
|$0.66/hour&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$1.32/hour&lt;br /&gt;
|}&lt;br /&gt;
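&lt;br /&gt;
As a cross-check of the example table above, the short Python sketch below reproduces the listed hourly costs from the MAP rates of $0.015 per CPU hour and $0.09 per GPU hour; it is an illustration, not an official billing tool.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Illustrative sketch only; MAP rates are taken from the text above.&lt;br /&gt;
CPU_RATE = 0.015   # dollars per CPU core hour&lt;br /&gt;
GPU_RATE = 0.09    # dollars per GPU hour&lt;br /&gt;
&lt;br /&gt;
def map_cost_per_hour(cores, gpus):&lt;br /&gt;
    # Each GPU also needs at least 4 CPU cores, so the billed core count&lt;br /&gt;
    # is never smaller than 4 cores per requested GPU.&lt;br /&gt;
    billed_cores = max(cores, 4 * gpus)&lt;br /&gt;
    return billed_cores * CPU_RATE + gpus * GPU_RATE&lt;br /&gt;
&lt;br /&gt;
assert round(map_cost_per_hour(4, 1), 3) == 0.15    # 4 cores + 1 GPU row&lt;br /&gt;
assert round(map_cost_per_hour(16, 1), 3) == 0.33   # 16 cores + 1 GPU row&lt;br /&gt;
assert round(map_cost_per_hour(40, 8), 2) == 1.32   # 40 cores + 8 GPU row&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;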
&lt;br /&gt;
&lt;br /&gt;
b.     &#039;&#039;&#039;Computing on demand (CODP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The computing on demand plan (CODP) is open to all users from CUNY colleges that do not participate in the MAP plan but want to use HPCC resources. CODP accounts operate under a strict fair share policy, so the actual waiting time for a job in a queue depends on the resources previously used. In addition, all jobs have time limitations, so long jobs must use checkpoints. CODP users are charged for the time (CPU and GPU) per hour. The current rates are &#039;&#039;&#039;$0.018 per CPU hour and $0.11 per GPU hour.&#039;&#039;&#039; Unlike MAP, new CODP accounts do not come with free time. Invoices are generated and sent to users (PI only) at the end of each month. The examples in the following table illustrate the fee structure:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for CODP plan&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.018/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.288/hour&lt;br /&gt;
|-&lt;br /&gt;
|4 cores + 1 GPU&lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.293/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.334/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 1 GPU&lt;br /&gt;
|32&lt;br /&gt;
|1  &lt;br /&gt;
|$0.666/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU &lt;br /&gt;
|32&lt;br /&gt;
|2&lt;br /&gt;
|$0.756/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
c.  &#039;&#039;&#039;Leasing public node(s) (LNP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The lease-a-node plan (LNP) allows users to lease node(s) for the duration of a project. The minimum lease time is 30 days (one month), but leases of any length are possible. A discount of 10% is given to users whose lease is longer than 90 days. Discounts cannot be combined. Unlike MAP and CODP, LNP users do not compete for resources and have full 24/7 access to the leased resources.&lt;br /&gt;
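&lt;br /&gt;
The small sketch below shows how the 10% long-lease discount combines with the monthly fees in the tables that follow; billing in whole 30-day periods is an assumption made only for this illustration.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Illustrative sketch only (assumption: billing in whole 30-day periods).&lt;br /&gt;
def lease_cost(monthly_fee, periods_30d, long_lease=False):&lt;br /&gt;
    # Set long_lease to True when the lease runs longer than 90 days,&lt;br /&gt;
    # which triggers the 10 percent discount described above.&lt;br /&gt;
    total = monthly_fee * periods_30d&lt;br /&gt;
    if long_lease:&lt;br /&gt;
        total = total * 0.9&lt;br /&gt;
    return total&lt;br /&gt;
&lt;br /&gt;
# Example: a 120-day lease of the 16-core node (172.80 dollars per 30 days&lt;br /&gt;
# for MAP users, from the table below) comes to about 622.08 dollars.&lt;br /&gt;
assert round(lease_cost(172.80, 4, long_lease=True), 2) == 622.08&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;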
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for leasing node(s) for MAP users&lt;br /&gt;
|Job (MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/30 days&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$172.80&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$264.96&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2&lt;br /&gt;
|$302.40&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$475.20&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$760.0&lt;br /&gt;
|-&lt;br /&gt;
|64 cores + 8 GPU&lt;br /&gt;
|64&lt;br /&gt;
|8&lt;br /&gt;
|$950.40&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for leasing node(s) for &amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; non &amp;lt;/span&amp;gt;-MAP users&lt;br /&gt;
|Job (non-MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/month&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$249.82&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$497.64&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1&lt;br /&gt;
|$443.23&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$886.64&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40 &lt;br /&gt;
|8&lt;br /&gt;
|$1399.68&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
d.     &#039;&#039;&#039;Condo Ownership (COP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Condo describes a model in which user(s) own a node/server managed by HPCC. Only full-time faculty can own a condo node. Condo nodes are fully integrated into the HPCC infrastructure. The owners pay only HPCC’s infrastructure support operational fee, which includes only a proportional part of the licenses and materials needed for day-to-day operations. The fees are reviewed twice a year and are currently $0.003 per CPU hour and $0.02 per GPU hour. Condo owners can “borrow” (upon agreement) free of charge any node(s) from the condo stack and can also lease (for a higher fee, see below) their own nodes to non-condo users. The minimum lease time is 30 days. The fees collected from non-condo users offset the owner’s payments.&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Condo owners costs per year&lt;br /&gt;
|Type of condo node&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/year&lt;br /&gt;
|-&lt;br /&gt;
|Large hybrid SXM&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|$4518.92 &lt;br /&gt;
|-&lt;br /&gt;
|Small hybrid&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|$1540.54&lt;br /&gt;
|-&lt;br /&gt;
|Medium compute&lt;br /&gt;
|96&lt;br /&gt;
|0&lt;br /&gt;
|$2464.86&lt;br /&gt;
|-&lt;br /&gt;
|Large compute&lt;br /&gt;
|128&lt;br /&gt;
|0&lt;br /&gt;
|$3286.49&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Condo owners can lease their node(s) to non-condo users. The rental period is unlimited, with a minimum length of 30 days. The table below shows the payments that non-condo users make to the condo owners. These fees accumulate in the owner’s account(s) and offset the owner’s obligations. A discount of 10% is applied to leases longer than 90 days.&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Type of nodes and monthly lease fees for condo nodes&lt;br /&gt;
!Type of node&lt;br /&gt;
!Renters cost/month&lt;br /&gt;
!Long term (90+ days) rent cost/month&lt;br /&gt;
!CPU/node&lt;br /&gt;
!CPU type&lt;br /&gt;
!GPU/node &lt;br /&gt;
!GPU type&lt;br /&gt;
!GPU interface&lt;br /&gt;
|-&lt;br /&gt;
|Large Hybrid&lt;br /&gt;
|$602.52&lt;br /&gt;
|$564.86&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.2 GHz&lt;br /&gt;
|8&lt;br /&gt;
|A100/80&lt;br /&gt;
|SXM&lt;br /&gt;
|-&lt;br /&gt;
|Small Hybrid&lt;br /&gt;
|$205.41&lt;br /&gt;
|$192.57&lt;br /&gt;
|48&lt;br /&gt;
|EPYC, 2.8 GHz&lt;br /&gt;
|2&lt;br /&gt;
|A40, A30, L40&lt;br /&gt;
|PCIe v4&lt;br /&gt;
|-&lt;br /&gt;
|Medium Non GPU&lt;br /&gt;
|$328.65&lt;br /&gt;
|$308.11&lt;br /&gt;
|96&lt;br /&gt;
|EPYC, 4.11GHz&lt;br /&gt;
|0&lt;br /&gt;
|None&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|Large Non GPU&lt;br /&gt;
|$438.20&lt;br /&gt;
|$410.81&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.0 GHz&lt;br /&gt;
|0&lt;br /&gt;
|None &lt;br /&gt;
|NA&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Free time ===&lt;br /&gt;
Any new project &#039;&#039;&#039;from CUNY colleges/centers that participate in MAP-B or MAP-C&#039;&#039;&#039; is entitled to &#039;&#039;&#039;11,520 free CPU hours and 1,440 free GPU hours.&#039;&#039;&#039; Users under &#039;&#039;&#039;MAP-A&#039;&#039;&#039; are not entitled to free time. The free compute hours are intended to help establish a project and are therefore shared among all members of the project; they can be used by the PI or by any number of project members. It is important to note that &amp;lt;u&amp;gt;free time is granted per project, not per user account, so a project can receive free time only once. External (non-CUNY) collaborators are not normally eligible for free time.&amp;lt;/u&amp;gt; Additional hours beyond the free time are charged at MAP rates. &#039;&#039;&#039;&amp;lt;u&amp;gt;Please contact the CUNY-HPCC director for further details.&amp;lt;/u&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Support for research grants =&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;u&amp;gt;All proposals dated Jan 1st, 2026 (&amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; 01/01/26 &amp;lt;/span&amp;gt;) or later&amp;lt;/u&amp;gt;&#039;&#039;&#039; that require computational resources &#039;&#039;&#039;&amp;lt;u&amp;gt;must include a budget for cost recovery fees at CUNY-HPCC.&amp;lt;/u&amp;gt;&#039;&#039;&#039; For a project, the PI can choose to:&lt;br /&gt;
&lt;br /&gt;
* lease node(s). This is a useful option for well-defined projects and for projects with a large computational component that requires 100% availability of the computational resource.&lt;br /&gt;
* use &amp;quot;on-demand&amp;quot; resources. This is a flexible option, well suited to experimental projects or to exploring new areas of study. The drawback is that resources are shared among all users under the fair share policy, so immediate access to a resource cannot be guaranteed.&lt;br /&gt;
* participate in the CONDO tier. This is the most beneficial option in terms of resource availability and level of support. It best fits the focused research of a group or groups (e.g., materials science).&lt;br /&gt;
&lt;br /&gt;
In all cases the PI can use the appropriate rates listed above to establish a correct budget for the proposal. The PI should &#039;&#039;&#039;&amp;lt;u&amp;gt;contact the Director of CUNY-HPCC, Dr. Alexander Tzanov&amp;lt;/u&amp;gt;&#039;&#039;&#039; (alexander.tzanov@csi.cuny.edu) to discuss the project&#039;s computational requirements, including the most economical computational workflows, suitable hardware, shared versus dedicated resources, CUNY-HPCC support options, and any other matter concerning a correct and optimal computational budget for the proposal.&lt;br /&gt;
&lt;br /&gt;
= Partitions and jobs =&lt;br /&gt;
The only way to submit job(s) to the HPCC servers is through the SLURM batch system. Every job, regardless of its type (interactive, batch, serial, parallel, etc.), must be submitted via SLURM, which allocates the requested resources on the proper server and starts the job(s) according to a predefined strict fair share policy. Computational resources (CPU cores, memory, GPU) are organized into &#039;&#039;&#039;partitions&#039;&#039;&#039;. Users are granted permission to use one or another partition together with the corresponding QOS key. The table below shows the limitations of the partitions (in progress); a minimal submission sketch follows the partition descriptions below.&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Partition&lt;br /&gt;
!Max cores/job&lt;br /&gt;
!Max jobs/user&lt;br /&gt;
!Total cores/group&lt;br /&gt;
!Time limits&lt;br /&gt;
!Tier&lt;br /&gt;
!&lt;br /&gt;
!GPU types&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|partnsf&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|K20m, V100/16, A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partchem&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A100/80, A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partcfd&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partsym&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partasrc&lt;br /&gt;
|48&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabD&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|V100/16,A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabN&lt;br /&gt;
|384&lt;br /&gt;
|50&lt;br /&gt;
|384&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|None&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partphys&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|L40&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;partnsf&#039;&#039;&#039; is the main partition with assigned resources across all sub-servers. Users may submit sequential, thread parallel or distributed parallel jobs with or without GPU.&lt;br /&gt;
* &#039;&#039;&#039;partchem&#039;&#039;&#039;  is CONDO partition.  &lt;br /&gt;
* &#039;&#039;&#039;partphys&#039;&#039;&#039;   is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partsym&#039;&#039;&#039;    is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partasrc&#039;&#039;&#039;   is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partmatlabD&#039;&#039;&#039; partition allows running MATLAB&#039;s Distributed Parallel Server across the main cluster.&lt;br /&gt;
* &#039;&#039;&#039;partmatlabN&#039;&#039;&#039; partition provides access to the large MATLAB node with 384 cores and 11 TB of shared memory. It is useful for running parallel MATLAB jobs with the Parallel Toolbox.&lt;br /&gt;
* &#039;&#039;&#039;partdev&#039;&#039;&#039; is dedicated to development. All HPCC users have access to this partition, which has the assigned resources of one computational node with 16 cores, 64 GB of memory, and 2 GPUs (K20m). This partition has a time limit of 4 hours.&lt;br /&gt;
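&lt;br /&gt;
As referenced above, the following is a minimal Python sketch of submitting a batch job through SLURM from a login node. The partition name partnsf comes from the table above; the remaining flag values (cores, GPUs, wall time) are placeholders that must be adjusted to your allocation and to the brief SLURM manual distributed with new accounts.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Minimal, illustrative sketch only; adjust all values to your allocation.&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
def submit(script_path, partition=&amp;quot;partnsf&amp;quot;, cores=4, gpus=1, hours=24):&lt;br /&gt;
    cmd = [&lt;br /&gt;
        &amp;quot;sbatch&amp;quot;,&lt;br /&gt;
        &amp;quot;--partition=&amp;quot; + partition,&lt;br /&gt;
        &amp;quot;--ntasks=1&amp;quot;,&lt;br /&gt;
        &amp;quot;--cpus-per-task=&amp;quot; + str(cores),&lt;br /&gt;
        &amp;quot;--gres=gpu:&amp;quot; + str(gpus),&lt;br /&gt;
        &amp;quot;--time=&amp;quot; + str(hours) + &amp;quot;:00:00&amp;quot;,&lt;br /&gt;
        script_path,&lt;br /&gt;
    ]&lt;br /&gt;
    # sbatch prints the new job id on success; a non-zero exit status means&lt;br /&gt;
    # the request was rejected, for example when it exceeds partition limits.&lt;br /&gt;
    return subprocess.run(cmd, check=True)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;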
&lt;br /&gt;
= Hours of Operation =&lt;br /&gt;
In order to maximize the use of resources, HPCC applies a “rolling” maintenance scheme across all systems. When downtime is needed, HPCC will notify all users a week or more in advance (unless an emergency occurs). Typically, the fourth Tuesday morning of the month, from 8:00 AM to 12:00 PM, is reserved (but not always used) for scheduled maintenance. Please plan accordingly. Unplanned maintenance to remedy system-related problems may be scheduled as needed outside the above-mentioned window. Reasonable attempts will be made to inform users running on the affected systems when such needs arise. Note that users are strongly encouraged to use checkpoints in their jobs.&lt;br /&gt;
&lt;br /&gt;
= User Support =&lt;br /&gt;
Users are strongly encouraged to read this Wiki carefully before submitting ticket(s) for help. In particular, the sections on compiling and running parallel programs, and the section on the SLURM batch queueing system will give you the essential knowledge needed to use the CUNY HPCC systems.  We have strived to maintain the most uniform user applications environment possible across the Center&#039;s systems to ease the transfer of applications&lt;br /&gt;
and run scripts among them.  &lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY&lt;br /&gt;
community in parallel programming techniques, HPC computing architecture, and the essentials of using our&lt;br /&gt;
systems. Please follow our mailings on the subject and feel free to inquire about such courses. We regularly schedule training visits and classes at the various CUNY campuses. Please let us know if such a training visit is of interest. In the past, topics have included an overview of parallel programming, GPU programming and architecture, using the evolutionary biology software at the HPC Center, the SLURM queueing system at the CUNY HPC Center, mixed GPU-MPI and OpenMP programming, etc. Staff have also presented guest lectures in formal classes throughout the CUNY campuses.&lt;br /&gt;
&lt;br /&gt;
If you have problems accessing your account and cannot login to the ticketing service, please send an email to:&lt;br /&gt;
&lt;br /&gt;
  [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu] &lt;br /&gt;
&lt;br /&gt;
= Warnings and modes of operation =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. hpchelp@csi.cuny.edu is for questions and account-help communication &#039;&#039;&#039;only&#039;&#039;&#039; and does not accept tickets unless the ticketing system is not operational. For tickets, please use the ticketing system mentioned above. This ensures that the staff member with the most appropriate skill set and job-related responsibility will respond to your questions. During the business week you should expect a response within 48 hours, quite often even a same-day response. During the weekend you may not get any response.&lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;&#039;E-mails to hpchelp@csi.cuny.edu must have a valid CUNY e-mail as the reply address.&#039;&#039;&#039; Messages originating from public mailers (Gmail, Hotmail, etc.) are filtered out.&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;&#039;Do not send questions to individual CUNY HPC Center staff members directly.&#039;&#039;&#039;  These will be returned to the sender with a polite request to submit a ticket or email the Helpline.  This applies to replies to initial questions as well.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff members are focused on providing high quality support to its user community, but compared&lt;br /&gt;
to other HPC Centers of similar size &#039;&#039;&#039;our staff is extremely lean&#039;&#039;&#039;.  Please make full use of the tools that we have provided (especially&lt;br /&gt;
the Wiki), and feel free to offer suggestions for improved service.  We hope and expect your experience in using&lt;br /&gt;
our systems will be predictably good and productive.&lt;br /&gt;
&lt;br /&gt;
= User Manual =&lt;br /&gt;
The old version of the user manual provides PBS, not SLURM, batch scripts as examples. CUNY-HPCC currently uses the SLURM scheduler, so users must rely only on the updated brief SLURM manual distributed with new accounts, or ask CUNY-HPCC for a copy of it.&lt;/div&gt;</summary>
		<author><name>Alex</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=989</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=989"/>
		<updated>2026-03-14T15:26:24Z</updated>

		<summary type="html">&lt;p&gt;Alex: /* Minimal Access Plan (MAP) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
[[Image:hpcc-panorama3.png]]&lt;br /&gt;
&lt;br /&gt;
== Mission of CUNY-HPCC ==&lt;br /&gt;
The City University of New York (CUNY) High Performance Computing Center (HPCC) is a core research and education facility for the University. It is located at campus of the college of College of Staten Island, 2800 Victory Boulevard, Staten Island, New York 10314. The core mission of CUNY-HPCC is to advance education and boost scientific research and discovery at the University by managing state-of-the-art computing infrastructure and by providing comprehensive research support services including domain-specific expertise in various aspects of computationally intensive research. CUNY is a member of Empire AI (EAI) consortium, so CUNY-HPCC is a stepping stone for CUNY researchers toward EAI advanced facilities.  &lt;br /&gt;
&lt;br /&gt;
== Empire AI and CUNY-HPCC ==&lt;br /&gt;
The Empire AI consortium includes the &#039;&#039;&#039;CUNY Graduate Center&#039;&#039;&#039;, Columbia University, Cornell University, Icahn School of Medicine, New York University, Rochester Institute of Technology, Rensselaer Polytechnic Institute, the State University of New York, University at Buffalo, and University of Rochester.  &#039;&#039;&#039;CUNY-HPCC provides supports and maintains tickets for all CUNY users with allocation on EAI.  In addition CUNY-HPCC is a stepping stone for CUNY researchers since it operates (on smaller scale) architectures (Including nodes with Hopper) similar to EAI, including extended &amp;quot;Alpha&amp;quot; server as well new &amp;quot;Beta&amp;quot; computer.&#039;&#039;&#039; The latter will consist of 288 B200 GPU and recently added RTX 6000 pro nodes. The expected cost for &#039;&#039;&#039;EAI is $0.50/unit (SU),&#039;&#039;&#039; which will provide CUNY PIs with a rate that is well below a typical AWS rate.  One SU corresponds to one hour of H100 compute, and one hour of B200 compute corresponds to two SU.  In comparison the CUNY-HPCC recovery costs for public servers are &#039;&#039;&#039;$0.015 per CPU hour (1 unit) and $0.09 per GPU hour (6 units).&#039;&#039;&#039;  See section HPCC access plans for further details.    &lt;br /&gt;
&lt;br /&gt;
== CUNY-HPCC services ==&lt;br /&gt;
CUNY-HPCC provides professionally maintained, modern computational environment and architectures, advanced storage and fast interconnects. CUNY-HPCC:     &lt;br /&gt;
&lt;br /&gt;
:*Supports research computing at CUNY - for faculty, their collaborators at other universities, and their public and private sector partners, and CUNY students and research staff.&lt;br /&gt;
:*Provides state-of-the-art computing resources and comprehensive research support services including expertise and full support for users with allocation on EMPIRE-AI. &lt;br /&gt;
:*Creates opportunities for the CUNY research community to develop new partnerships with the government and private sectors.&lt;br /&gt;
:*Leverage the HPCC capabilities to acquire additional research resources for its faculty and graduate students in existing and major new programs.&lt;br /&gt;
:*Maintains tickets for all CUNY users with allocation on EAI.&lt;br /&gt;
&lt;br /&gt;
== Organization of systems and data storage (architecture) ==&lt;br /&gt;
All user data and project data are kept on  Parallel File System Storage (PFSS) which is mounted only on login node(s) of all servers. It holds both user directories and specific partition  called  &#039;&#039;&#039;/scratch (see below).&#039;&#039;&#039; The main features of &#039;&#039;&#039;/scratch&#039;&#039;&#039;  partition are&#039;&#039;&#039;: 1.&#039;&#039;&#039; It is mounted on all computational nodes and on all login nodes &#039;&#039;&#039;2.&#039;&#039;&#039; It is fast &#039;&#039;&#039;3.&#039;&#039;&#039; &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition is temporary space, but is &#039;&#039;&#039;not  a home directory&#039;&#039;&#039;  for accounts nor can be used for long term data preservation.  Users must use &amp;quot;staging&amp;quot; procedure described below to ensure preservation of their data, codes and parameters files. The figure below is a schematic of the environment.   &lt;br /&gt;
&lt;br /&gt;
Upon  registering with HPCC every user will get 2 directories:&lt;br /&gt;
&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – this is temporary workspace on the HPC systems. Currently scratch resides on the same file system as global/u.&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – space for “home directory”, i.e., storage space on the DSMS for program, scripts, and data&lt;br /&gt;
&lt;br /&gt;
:•	In some instances a user will also have use of disk space on the DSMS in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;projectid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; (IRods). &lt;br /&gt;
[[File:HPCC_structure.png|center|frameless|900x900px]]&lt;br /&gt;
The &#039;&#039;&#039;/global/u/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory has quota (see below for details) while  the &#039;&#039;&#039;/scratch/&amp;lt;userid&amp;gt;&#039;&#039;&#039; do not have. However the &#039;&#039;&#039;/scratch&#039;&#039;&#039; space is cleaned up  following the rules described below. There are no guarantees of any kind that files in &#039;&#039;&#039;/scratch&#039;&#039;&#039;  will be preserved during the hardware crashes or cleaning up.  Access to all HPCC resources is provided by bastion host called &#039;&amp;lt;nowiki/&amp;gt;&#039;&#039;&#039;&#039;&#039;chizen&#039;&#039;&#039;&#039;&#039;.  The Data Transfer Node called &#039;&#039;&#039;Cea&#039;&#039;&#039; allows file transfer from/to remote sites directly to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; or to/from   &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;         &lt;br /&gt;
&lt;br /&gt;
==Computing architectures at HPCC==&lt;br /&gt;
&lt;br /&gt;
The HPC Center operates variety of architectures in order to support complex and demanding workflows.  All computational resources of different types are united into single hybrid cluster called Arrow. The latter deploys symmetric multiprocessor (also referred as SMP) nodes with and without GPU, distributed shared memory (NUMA) node, fat (large memory) nodes and advanced SMP nodes with multiple GPU. The number of GPU per node varies between 2 and 8 as well as employed GPU interface and GPU family. Thus the basic GPU nodes hold  two Tesla K20m (plugged through PCIe interface) while the most advanced ones  support eight Ampere A100 GPU connected via SXM interface.   &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Overview of Computational architectures&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SMP&#039;&#039;&#039; servers have several processors (working under a single operating system) which &amp;quot;share everything&amp;quot;.  Thus  all cpu-cores allocate a common memory block via shared bus or data path. SMP servers support all combinations of memory VS cpu (up to the limits of the particular computer). The SMP servers are commonly used to run sequential or thread parallel (e.g. OpenMP) jobs and they may have or may not have GPU.  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Cluster&#039;&#039;&#039; is defined as a single system comprising set of servers interconnected with high performance network. Specific software coordinates  programs on and/or across those in order to  perform computationally intensive tasks. The most common cluster type is the one that consists of several identical SMP servers connected via fast interconnect.  Each SMP member of the cluster is called a &#039;&#039;&#039;node&#039;&#039;&#039;. All nodes run independent copies of the same operating system (OS). Some or all of the nodes may incorporate GPU.  &lt;br /&gt;
&lt;br /&gt;
Hybrid clusters combine nodes of different architectures. For instance the main CUNY-HPCC machine is a hybrid cluster called &#039;&#039;&#039;Arrow&#039;&#039;&#039;.  Sixty two (62) of its nodes are identical GPU enabled SMP servers each with 2 x GPU K20m, 3 are SMP but with extended memory (fat nodes), one node is distributed shared memory  node (NUMA, see below) and 2 are fat SMP servers especially designed to support 8 NVIDIA GPU per node. The latter are connected via SXM interface. In addition HPCC operates the cluster &#039;&#039;&#039;Herbert&#039;&#039;&#039; dedicated only to education. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Distributed shared memory&#039;&#039;&#039; computer is tightly coupled server in which the memory is physically distributed, but it is logically unified as a single block. The system resembles SMP, but the number of cpu cores and the amounts of memory possible is far beyond limitations of the SMP.  Because the memory is distributed, the access times across address space are non-uniform. Thus, this architecture is called Non Uniform Memory Access (NUMA) architecture.  Similarly to SMP, the &#039;&#039;&#039;NUMA&#039;&#039;&#039; systems are typically used for applications such as data mining and decision support system in which processing can be parceled out to a number of processors that collectively work on a common data. HPCC operates the &#039;&#039;&#039;NUMA&#039;&#039;&#039; node at Arrow named &#039;&#039;&#039;Appel&#039;&#039;&#039;.  This node does not have GPU. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039; Infrastructure systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
o	Master Head Node (&#039;&#039;&#039;MHN/Arrow)&#039;&#039;&#039; is a redundant login node from which all jobs on all servers start. This server is not directly accessible from outside CSI campus. Note that name of main server and its login nodes are the same Arrow. Thus users can access the Arrow login nodes using name Arrow or MHN.  &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Chizen&#039;&#039;&#039; is a redundant gateway server which provides access to protected HPCC domain.&lt;br /&gt;
&lt;br /&gt;
o	 &#039;&#039;&#039;Cea&#039;&#039;&#039; is a file transfer node allowing transfer of files between users’ computers to/from  /scratch space or to/from /global/u/&amp;lt;usarid&amp;gt;. &#039;&#039;&#039;Cea&#039;&#039;&#039; is accessible directly (not only via &#039;&#039;&#039;Chizen&#039;&#039;&#039;), but allows only limited set of shell commands.  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Table 1&#039;&#039;&#039; below provides a quick summary of the attributes of each of the sub clusters of the main  HPC Center called Arow.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Master Head Node&lt;br /&gt;
!Sub System&lt;br /&gt;
!Tier&lt;br /&gt;
!Type&lt;br /&gt;
!Type of Jobs&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores&lt;br /&gt;
!GPUs&lt;br /&gt;
!Mem/node&lt;br /&gt;
!Mem/core&lt;br /&gt;
!Chip Type&lt;br /&gt;
!GPU Type and Interface&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;17&amp;quot; |&#039;&#039;&#039;&amp;lt;big&amp;gt;Arrow&amp;lt;/big&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Penzias&lt;br /&gt;
| rowspan=&amp;quot;10&amp;quot; |Advanced&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Hybrid Cluster&lt;br /&gt;
|Sequential &amp;amp; Parallel jobs w/wo GPU&lt;br /&gt;
|66&lt;br /&gt;
|16 &lt;br /&gt;
|2&lt;br /&gt;
|64 GB&lt;br /&gt;
|4 GB&lt;br /&gt;
|SB, EP 2.20 GHz&lt;br /&gt;
|K20m GPU, PCIe v2&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Sequential &amp;amp; Parallel jobs&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |1&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|1500 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|36&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|32 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Appel&lt;br /&gt;
|NUMA&lt;br /&gt;
|Massive Parallel, sequential, OpenMP&lt;br /&gt;
|1&lt;br /&gt;
|384&lt;br /&gt;
| -&lt;br /&gt;
|11 TB&lt;br /&gt;
|28 GB&lt;br /&gt;
|IB, 3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Cryo&lt;br /&gt;
|SMP&lt;br /&gt;
|Sequential and Parallel jobs, with GPU&lt;br /&gt;
|1&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|1500 GB&lt;br /&gt;
|37 GB&lt;br /&gt;
|SL, 2.40 GHz&lt;br /&gt;
|V100 (32GB) GPU, SXM&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Blue Moon&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Hybrid Cluster&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Sequential and Parallel jobs w/wo GPU&lt;br /&gt;
|24&lt;br /&gt;
|32&lt;br /&gt;
| -&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |192 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |6 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SL, 2.10 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|32 &lt;br /&gt;
|2&lt;br /&gt;
|V100(16GB) GPU, PCIe&lt;br /&gt;
|-&lt;br /&gt;
|Karle&lt;br /&gt;
|SMP&lt;br /&gt;
|Visualization, MATLAB/Mathematica&lt;br /&gt;
|1&lt;br /&gt;
|36*&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Chizen&lt;br /&gt;
|Gateway&lt;br /&gt;
|No jobs allowed&lt;br /&gt;
| colspan=&amp;quot;7&amp;quot; | -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CFD&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot; |Parallel, Seq, OpenMP &lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|768 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 4.8 GHz&lt;br /&gt;
|A40, PCIe, v4 &lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |PHYS&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|640 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4 GHz&lt;br /&gt;
|L40, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CHEM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.0 GHz&lt;br /&gt;
|A100/40, SXM&lt;br /&gt;
|-&lt;br /&gt;
|ASRC&lt;br /&gt;
|Condo&lt;br /&gt;
|SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|}&lt;br /&gt;
Note: SB = Intel(R) Sandy Bridge, HL = Intel (R) Haswell, IB = Intel (R) Ivy Bridge, SL = Intel (R) Xeon(R) Gold, ER  = AMD(R) EPYC ROMA, EM = AMD(R) EPYC MILAN, EG = AMD (R) EPYC GENOA   &lt;br /&gt;
&lt;br /&gt;
= Recovery of  operational costs =&lt;br /&gt;
CUNY-HPCC is not for profit core research facility at CUNY. Our mission is to support all types of research that require advanced computational resources. CUNY-HPCC operations are not for profit and are NOT directly or indirectly sponsored by CUNY or  College of Staten Island (CSI). Consequently CUNY-HPCC applies cost recovery model recapturing only &#039;&#039;&#039;&amp;lt;u&amp;gt;operational costs with no profit for HPCC&amp;lt;/u&amp;gt;&#039;&#039;&#039;. The recovered costs are calculated using actual documented operational expenses and are break even for all CUNY users. The used methodology is approved by CUNY-RF methodology used in other CUNY research facilities. The costs are reviewed and consequently updated twice a year. The cost recovery charging schema is based on &#039;&#039;&#039;&amp;lt;u&amp;gt;unit-hour&amp;lt;/u&amp;gt;&#039;&#039;&#039;. The unit can be either CPU  unit or GPU unit. The definitions of these is given in a table below:&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Definitions of unit-hour&lt;br /&gt;
!Type of resource&lt;br /&gt;
!Unit-hour &lt;br /&gt;
!For V100, A30, A40 or L40&lt;br /&gt;
!For A100&lt;br /&gt;
|-&lt;br /&gt;
|CPU unit &lt;br /&gt;
|1 cpu core/hour&lt;br /&gt;
| --&lt;br /&gt;
| --&lt;br /&gt;
|-&lt;br /&gt;
|GPU unit &lt;br /&gt;
|(4 cpu cores + 1 GPU thread )/hour&lt;br /&gt;
|4 cpu cores + 1 GPU&lt;br /&gt;
|4 cpu cores and 1/7 A100 &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Compute on public resources ===&lt;br /&gt;
The cost recovery model for public (non-condominium) servers offers the following options:&lt;br /&gt;
&lt;br /&gt;
# Minimal Access Plan (MAP)&lt;br /&gt;
# Compute on Demand (CODP)&lt;br /&gt;
# Lease a node(s)&lt;br /&gt;
&lt;br /&gt;
==== Minimal Access Plan (MAP) ====&lt;br /&gt;
Colleges may participate in any of the Minimal Access Plan tiers. The tiered pricing structure is as follows:&lt;br /&gt;
&lt;br /&gt;
- Tier A: $5,000 per year&lt;br /&gt;
&lt;br /&gt;
- Tier B: $10,000 per year&lt;br /&gt;
&lt;br /&gt;
- Tier C: $25,000 per year&lt;br /&gt;
&lt;br /&gt;
Within each tier, the cost is &#039;&#039;&#039;$0.015 per CPU hour and $0.09 per GPU hour.&#039;&#039;&#039; It is important to note that the Minimum Access Plan (MAP) tiers A, B, and C are &#039;&#039;&#039;not all-inclusive.&#039;&#039;&#039; Therefore, even if a college pays for a higher tier, it does not guarantee unlimited use of all college employees, faculty, and students throughout the year.&lt;br /&gt;
&lt;br /&gt;
# The MAP plan is indirectly linked to the number of hours used. Consequently, the definition of “up to 12 users for B tier” should not be interpreted as “all users up to 12 per college receive unlimited access to HPCC resources.”&lt;br /&gt;
# The number of users per tier is determined by statistical analysis of resource usage and statistics for the average duration of a job across all CUNY institutions. This means that if the number of users from a college exceeds the number of hours encoded in the MAP fee, the additional hours will be charged at the preferred rate of $0.015 per CPU hour and $0.09 per GPU thread hour.&lt;br /&gt;
# Furthermore, the &amp;lt;u&amp;gt;MAP fee may fully cover the expenses for individuals from a given college for a year,&amp;lt;/u&amp;gt; but it may also not. This depends on the actual usage and type of resources, as well as the number of additional individuals from the same college who appear during the year. It is crucial to understand that allocation on HPCC is not restricted; our focus is solely on the completion of tasks. &lt;br /&gt;
# The free 11,540 CPU hours and 1,440 GPU hours are allocated per PI account and project and are available only for MAP-B and MAP-C (see below). These free hours are intended to facilitate project establishment. Consequently, the PI can request that a member or members of a group utilize time and explore new research opportunities. For instance, upon creating his account the PI X will receive free hours. If he/she/whatever hires a graduate student(s), they may share these free hours if they work on the same project.&lt;br /&gt;
# It is important to note that each GPU requires at least four CPU cores to operate. Therefore, if a user requests one GPU thread, it equates to one GPU thread and four CPU threads, which is equivalent to $0.15 per unit hour for that unit (units are explained above). Note that not all GPU support virtualization, so unit may include the whole GPU depend on used GPU type.  &lt;br /&gt;
&lt;br /&gt;
==== Compute on demand (CODP) ====&lt;br /&gt;
Users from colleges that do not participate in MAP (A,B,C) are charged &#039;&#039;&#039;$0.018 per CPU hour and $0.11 per GPU hour.&#039;&#039;&#039; There is no free time associated with CODP.  &lt;br /&gt;
&lt;br /&gt;
==== Lease a public node(s) ====&lt;br /&gt;
Users may lease a node(s) for project. That ensures them 100% access and no  time or job limitations over leased resource. The minimum lease time is 30 days (one month). Longe leases (more than 90 days) have 10% discount. &#039;&#039;&#039;MAP users&#039;&#039;&#039; are charged between $172 and $950 per month (see below), depending on the type of node. &#039;&#039;&#039;Non-MAP&#039;&#039;&#039; users are charged between $249.58 and $1,399 per month, depending on the type of node. Please see below for details and examples. &lt;br /&gt;
&lt;br /&gt;
=== Condo ===&lt;br /&gt;
Condo users only pay for infrastructure support.  The annual fee depends on the type of node and ranges from $1,540 to $4,520 per year. Please see table below for details. &lt;br /&gt;
&lt;br /&gt;
=== Storage ===&lt;br /&gt;
Storage costs are $60 per TB per year, backup costs are $45 per TB per year, and archive costs are $35 per TB per year. The first 50 GB of scratch storage are free. Prices are calculated at the end of each month.&lt;br /&gt;
&lt;br /&gt;
# Additionally, note that &#039;&#039;&#039;file transfers from/to HPCC are free&#039;&#039;&#039;, but it is important to consider the CUNY network speed, which is significantly slower than modern standards.  This will impact the time required to download large data sets. For large data HPCC utilizes and recommend to use secure parallel download via Globus.&lt;br /&gt;
# All services associated with data and storage provided by HPCC are free for CUNY users.&lt;br /&gt;
&lt;br /&gt;
=== HPCC access plans  ===&lt;br /&gt;
a.     &#039;&#039;&#039;Minimum access (MAP):&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Minimum access is designed to provide wide support for research activities in any college, to promote collaboration between colleges, to help establish a new research project, and/or to be testbed for new studies. MAP accounts operate under strict fair share policy so actual waiting time for a job in a que depends on resources used by that account in previous cycles. In addition all jobs have strict time limitations. Therefore long jobs must use check-points.&lt;br /&gt;
&lt;br /&gt;
The MAP has 3 tiers:&lt;br /&gt;
&lt;br /&gt;
·     A: Basic tier fee is $5000 per year. It is designed to provide support for users from colleges with low level of research activities. The fee covers infrastructure expenses associated with 1-2 users from these colleges. &lt;br /&gt;
&lt;br /&gt;
·     B:  Medium tier fee is $15,000 per year. The fee covers infrastructure expenses of up to 12 users from these colleges. In addition, every account under medium tier gets free 11520 CPU hours and free 1440 GPU hours upon opening. &lt;br /&gt;
&lt;br /&gt;
·      C: Advanced tier is $25,000 per year. The fee covers infrastructure expenses for all users from these colleges. In addition every new account from this tier gets free 11520 CPU hours and free 1440 GPU hours upon opening.  &lt;br /&gt;
&lt;br /&gt;
The MAP users get charged per CPU/GPU hour at low rate of &#039;&#039;&#039;&amp;lt;u&amp;gt;$0.015 per cpu hour and $0.09 per GPU hour&amp;lt;/u&amp;gt;.&#039;&#039;&#039;   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for MAP users example&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.015/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|4&lt;br /&gt;
|0&lt;br /&gt;
|$0.24/hour&lt;br /&gt;
|-&lt;br /&gt;
| 4 cores + 1 GPU &lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.15/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.33/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2  &lt;br /&gt;
|$0.42/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2  &lt;br /&gt;
|$0.66/hour&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$1.32/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
b.     &#039;&#039;&#039;Computing on demand (CODP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Computing on demand plan (CODP) is open for all users from all CUNY colleges that do not participate in MAP plan, but want to use the HPCC resources. CODP accounts operate under strict fair share policy, so actual waiting time for a job in a que depends on resources previously used. In addition, all jobs have time limitations, so long jobs must use check-points. The users in CODP are charged for the time (CPU and GPU) per hour. The current rates are &#039;&#039;&#039;$0.018 per cpu hour and $0.11 per GPU hour.&#039;&#039;&#039;  In difference to MAP, the new CODP accounts does not come with free time. The invoices are generated and send to users (PI only) at the end of each month.  The examples in following table explain the fees structure:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for CODP plan&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.018/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.288/hour&lt;br /&gt;
|-&lt;br /&gt;
|4 cores + 1 GPU&lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.293/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.334/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 1 GPU&lt;br /&gt;
|32&lt;br /&gt;
|1  &lt;br /&gt;
|$0.666/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU &lt;br /&gt;
|32&lt;br /&gt;
|2&lt;br /&gt;
|$0.756/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
c.  &#039;&#039;&#039;Leasing public node(s) (LNP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Leasing node plan allows the users to lease the node(s) for the duration of the project. The minimum lease time is 30 days (one month), but leases of any length are possible. Discounts of 10% are given to users whose lease is longer than 90 days. Discounts cannot be combined.  In difference to MAP and CODP the LNP users do not compete for resources and have full access to rented resources 24/7.   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for lease node(s) for MAP users&lt;br /&gt;
|Job (MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/30 days&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$172.80&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$264.96&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$302.40&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$475.20&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$760.0&lt;br /&gt;
|-&lt;br /&gt;
|64 cores + 8 GPU&lt;br /&gt;
|64&lt;br /&gt;
|8&lt;br /&gt;
|$950.40&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for lease a node(s) for &amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; non &amp;lt;/span&amp;gt;-MAP users&lt;br /&gt;
|Job (non-MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/month&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$249.82&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$497.64&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2 &lt;br /&gt;
|$443.23&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$886.64&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40 &lt;br /&gt;
|8&lt;br /&gt;
|$1399.68&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
d.     &#039;&#039;&#039;Condo Ownership (COP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Condo describes a model when user(s) own a node/server managed by HPCC. Only full time faculty can own condo node. Condo nodes are fully integrated into HPCC infrastructure. The owners pay only HPCC’s infrastructure support operational fee which includes only proportional part of licenses and materials need for day-to-day operations. The fees are reviewed twice a year and currently are $0.003 per CPU hour and $0.02 per GPU hour. Condo owners can “borrow”  (upon agreement) free of charge any node(s) from condo stack and can also lease (for higher fee – see below) their own nodes to non-condo users. The minimum let time is 30 days. The fees collected from non-condo users offset payments of the owner.   &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Condo owners costs per year&lt;br /&gt;
|Type of condo node&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/year&lt;br /&gt;
|-&lt;br /&gt;
|Large hybrid SXM&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|$4518.92 &lt;br /&gt;
|-&lt;br /&gt;
|Small hybrid&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|$1540.54&lt;br /&gt;
|-&lt;br /&gt;
|Medium compute&lt;br /&gt;
|96&lt;br /&gt;
|0&lt;br /&gt;
|$2464.86&lt;br /&gt;
|-&lt;br /&gt;
|Large compute&lt;br /&gt;
|128&lt;br /&gt;
|0&lt;br /&gt;
|$3286.49&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Condo owners can contract their node(s) to other non-condo users. Renting period is unlimited with min. length of 30 days. The table below shows the payments the non-condo users recompense the condo owners. These fees are accumulated in owners account(s) and do offset the owner’s duties. Discount of 10% is applied for leases longer than 90 days.    &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Type of nodes and monthly lease fees for condo nodes&lt;br /&gt;
!Type of node&lt;br /&gt;
!Renters cost/month&lt;br /&gt;
!Long term (90+ days) rent cost/month&lt;br /&gt;
!CPU/node&lt;br /&gt;
!CPU type&lt;br /&gt;
!GPU/node &lt;br /&gt;
!GPU type&lt;br /&gt;
!GPU interface&lt;br /&gt;
|-&lt;br /&gt;
|Large Hybrid&lt;br /&gt;
|$602.52&lt;br /&gt;
|$564.86&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.2 GHz&lt;br /&gt;
|8&lt;br /&gt;
|A100/80&lt;br /&gt;
|SXM&lt;br /&gt;
|-&lt;br /&gt;
|Small Hybrid&lt;br /&gt;
|$205.41&lt;br /&gt;
|$192.57&lt;br /&gt;
|48&lt;br /&gt;
|EPYC, 2.8 GHz&lt;br /&gt;
|2&lt;br /&gt;
|A40, A30, L40&lt;br /&gt;
|PCIe v4&lt;br /&gt;
|-&lt;br /&gt;
|Medium Non GPU&lt;br /&gt;
|$328.65&lt;br /&gt;
|$308.11&lt;br /&gt;
|96&lt;br /&gt;
|EPYC, 4.11 GHz&lt;br /&gt;
|0&lt;br /&gt;
|None&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|Large Non GPU&lt;br /&gt;
|$438.20&lt;br /&gt;
|$410.81&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.0 GHz&lt;br /&gt;
|0&lt;br /&gt;
|None &lt;br /&gt;
|NA&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Free time ===&lt;br /&gt;
Any new project &#039;&#039;&#039;from CUNY colleges/centers that participate in MAP-B or MAP-C&#039;&#039;&#039; is entitled to &#039;&#039;&#039;11520 free CPU hours and 1440 free GPU hours.&#039;&#039;&#039;  Users under &#039;&#039;&#039;MAP-A&#039;&#039;&#039; are not entitled to free time. The free compute hours are intended to help establish a project and are therefore shared by all members of the project: they can be used either by the PI or by any number of the project&#039;s members. It is important to note that &amp;lt;u&amp;gt;free time is per project, not per user account, so any project can receive free time only once. External collaborators of CUNY are not normally eligible for free time.&amp;lt;/u&amp;gt; Additional hours beyond the free time are charged at MAP plan rates. &#039;&#039;&#039;&amp;lt;u&amp;gt;Please contact the CUNY-HPCC director for further details.&amp;lt;/u&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
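&lt;br /&gt;
For scale (an illustration only): 11520 CPU hours corresponds, for example, to a 16-core job running continuously for 30 days (16 x 24 x 30 = 11520), and 1440 GPU hours corresponds to 2 GPUs running continuously for 30 days (2 x 24 x 30 = 1440).&lt;br /&gt;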
&lt;br /&gt;
= Support for research grants =&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;u&amp;gt;All proposals dated January 1st, 2026 (&amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; 01/01/26 &amp;lt;/span&amp;gt;) or later&amp;lt;/u&amp;gt;&#039;&#039;&#039; that require computational resources &#039;&#039;&#039;&amp;lt;u&amp;gt;must include a budget for cost recovery fees at CUNY-HPCC.&amp;lt;/u&amp;gt;&#039;&#039;&#039;  For a project, the PI can choose to: &lt;br /&gt;
&lt;br /&gt;
* lease node(s). This option is useful for well-defined projects and for projects with a large computational component that requires 100% availability of the computational resource. &lt;br /&gt;
* use &amp;quot;on-demand&amp;quot; resources. This is a flexible option, well suited to experimental projects or to exploring new areas of study. The drawback is that resources are shared among all users under the fair share policy, so immediate access to a resource cannot be guaranteed. &lt;br /&gt;
* participate in the CONDO tier. This is the most beneficial option in terms of resource availability and level of support. It fits best the focused research of a group or groups (e.g. materials science). &lt;br /&gt;
&lt;br /&gt;
In all cases the PI can use the appropriate rates listed above to establish a correct budget for the proposal. The PI should &#039;&#039;&#039;&amp;lt;u&amp;gt;contact the Director of CUNY-HPCC, Dr. Alexander Tzanov&amp;lt;/u&amp;gt;&#039;&#039;&#039; (alexander.tzanov@csi.cuny.edu) to discuss the project&#039;s computational requirements, including optimal and most economical computational workflows, suitable hardware, shared or dedicated resources, CUNY-HPCC support options, and any other matter concerning a correct and optimal computational budget for the proposal.    &lt;br /&gt;
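&lt;br /&gt;
For example (an illustrative estimate only, using the MAP rates of $0.015 per CPU hour and $0.09 per GPU hour), a proposal anticipating 100000 CPU hours and 5000 GPU hours per year would budget:&lt;br /&gt;
&lt;br /&gt;
  100000 x $0.015 + 5000 x $0.09 = $1500 + $450 = $1950 per year (before any storage or backup fees)&lt;br /&gt;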
&lt;br /&gt;
= Partitions and jobs =&lt;br /&gt;
The only way to submit job(s) to the HPCC servers is through the SLURM batch system. Every job, regardless of its type (interactive, batch, serial, parallel, etc.), must be submitted via SLURM, which allocates the requested resources on the proper server and starts the job(s) according to a predefined, strict fair share policy. Computational resources (CPU cores, memory, GPU) are organized into &#039;&#039;&#039;partitions&#039;&#039;&#039;. Users are granted permission to use one or another partition together with the corresponding QOS key. The table below shows the partitions and their limitations (in progress); a minimal example batch script is given after the partition descriptions below.&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Partition&lt;br /&gt;
!Max cores/job&lt;br /&gt;
!Max jobs/user&lt;br /&gt;
!Total cores/group&lt;br /&gt;
!Time limits&lt;br /&gt;
!Tier&lt;br /&gt;
!&lt;br /&gt;
!GPU types&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|partnsf&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|K20m, V100/16, A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partchem&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A100/80, A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partcfd&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partsym&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partasrc&lt;br /&gt;
|48&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabD&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|V100/16,A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabN&lt;br /&gt;
|384&lt;br /&gt;
|50&lt;br /&gt;
|384&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|None&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partphys&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|L40&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;partnsf&#039;&#039;&#039; is the main partition with assigned resources across all sub-servers. Users may submit sequential, thread parallel or distributed parallel jobs with or without GPU.&lt;br /&gt;
* &#039;&#039;&#039;partchem&#039;&#039;&#039;  is CONDO partition.  &lt;br /&gt;
* &#039;&#039;&#039;partphys&#039;&#039;&#039;   is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partsym&#039;&#039;&#039;    is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partasrc&#039;&#039;&#039;   is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partmatlabD&#039;&#039;&#039; allows running MATLAB&#039;s Distributed Parallel Server across the main cluster. &lt;br /&gt;
* &#039;&#039;&#039;partmatlabN&#039;&#039;&#039; provides access to the large MATLAB node with 384 cores and 11 TB of shared memory. It is useful for running parallel MATLAB jobs with the Parallel Toolbox.&lt;br /&gt;
* &#039;&#039;&#039;partdev&#039;&#039;&#039; is dedicated to development. All HPCC users have access to this partition, with assigned resources of one computational node with 16 cores, 64 GB of memory and 2 GPUs (K20m). This partition has a time limit of 4 hours.&lt;br /&gt;
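&lt;br /&gt;
A minimal example batch script is shown below (a sketch only: the job name, QOS value, executable and resource requests are placeholders and must be adapted to the partition and QOS actually granted to the account):&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash&lt;br /&gt;
  #SBATCH --job-name=myjob            # descriptive job name (placeholder)&lt;br /&gt;
  #SBATCH --partition=partnsf         # one of the partitions listed above&lt;br /&gt;
  #SBATCH --qos=myqos                 # QOS key assigned to the account (placeholder value)&lt;br /&gt;
  #SBATCH --ntasks=16                 # number of cores requested, within the partition limit&lt;br /&gt;
  #SBATCH --gres=gpu:1                # request one GPU; omit this line for CPU-only jobs&lt;br /&gt;
  #SBATCH --time=24:00:00             # wall-clock limit, within the partition time limit&lt;br /&gt;
  #SBATCH --output=myjob.%j.out       # standard output file; %j expands to the job ID&lt;br /&gt;
&lt;br /&gt;
  cd /scratch/$USER                   # run from scratch space, which is mounted on the compute nodes&lt;br /&gt;
  srun ./my_program                   # placeholder executable&lt;br /&gt;
&lt;br /&gt;
The script is then submitted with:&lt;br /&gt;
&lt;br /&gt;
  sbatch myscript.slurm&lt;br /&gt;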
&lt;br /&gt;
= Hours of Operation =&lt;br /&gt;
In order to maximize the use of resources, HPCC applies a “rolling” maintenance scheme across all systems. When downtime is needed, HPCC will notify all users a week or more in advance (unless an emergency situation occurs). The fourth Tuesday morning of the month, from 8:00 AM to 12:00 PM, is normally reserved (but not always used) for scheduled maintenance. Please plan accordingly. Unplanned maintenance to remedy system-related problems may be scheduled as needed outside the above-mentioned days. Reasonable attempts will be made to inform users running on the affected systems when these needs arise. Note that users are strongly encouraged to use checkpoints in their jobs.&lt;br /&gt;
&lt;br /&gt;
= User Support =&lt;br /&gt;
Users are strongly encouraged to read this Wiki carefully before submitting ticket(s) for help. In particular, the sections on compiling and running parallel programs, and the section on the SLURM batch queueing system will give you the essential knowledge needed to use the CUNY HPCC systems.  We have strived to maintain the most uniform user applications environment possible across the Center&#039;s systems to ease the transfer of applications&lt;br /&gt;
and run scripts among them.  &lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY&lt;br /&gt;
community in parallel programming techniques, HPC computing architecture, and the essentials of using our&lt;br /&gt;
systems. Please follow our mailings on the subject and feel free to inquire about such courses.  We regularly schedule training visits and classes at the various CUNY campuses.  Please let us know if such a training visit is of interest.  In the past, topics have included an overview of parallel programming, GPU programming and architecture, using the evolutionary biology software at the HPC Center, the SLURM queueing system at the CUNY HPC Center, mixed GPU-MPI and OpenMP programming, etc.  Staff have also presented guest lectures in formal classes throughout the CUNY campuses.    &lt;br /&gt;
&lt;br /&gt;
If you have problems accessing your account and cannot login to the ticketing service, please send an email to:&lt;br /&gt;
&lt;br /&gt;
  [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu] &lt;br /&gt;
&lt;br /&gt;
= Warnings and modes of operation =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. hpchelp@csi.cuny.edu is for questions and account-help communication &#039;&#039;&#039;only&#039;&#039;&#039; and does not accept tickets unless the ticketing system is not operational. For tickets please use the ticketing system mentioned above. This ensures that the person on staff with the most appropriate skill set and job-related responsibility will respond to your questions. During the business week you should expect a response within 48 hours, quite often even a same-day response. During the weekend you may not get any response. &lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;&#039;E-mails to hpchelp@csi.cuny.edu must have a valid CUNY e-mail as the reply address.&#039;&#039;&#039; Messages originating from public mailers (Gmail, Hotmail, etc.) are filtered out.&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;&#039;Do not send questions to individual CUNY HPC Center staff members directly.&#039;&#039;&#039;  These will be returned to the sender with a polite request to submit a ticket or email the Helpline.  This applies to replies to initial questions as well.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff members are focused on providing high quality support to its user community, but compared&lt;br /&gt;
to other HPC Centers of similar size &#039;&#039;&#039;our staff is extremely lean&#039;&#039;&#039;.  Please make full use of the tools that we have provided (especially&lt;br /&gt;
the Wiki), and feel free to offer suggestions for improved service.  We hope and expect your experience in using&lt;br /&gt;
our systems will be predictably good and productive.&lt;br /&gt;
&lt;br /&gt;
= User Manual =&lt;br /&gt;
The old version of the user manual provides PBS, not SLURM, batch scripts as examples. CUNY-HPCC currently uses the SLURM scheduler, so users must check and use only the updated brief SLURM manual distributed with new accounts, or ask CUNY-HPCC for a copy of it.&lt;/div&gt;</summary>
		<author><name>Alex</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=988</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=988"/>
		<updated>2026-03-14T15:26:01Z</updated>

		<summary type="html">&lt;p&gt;Alex: /* Minimal Access Plan (MAP) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
[[Image:hpcc-panorama3.png]]&lt;br /&gt;
&lt;br /&gt;
== Mission of CUNY-HPCC ==&lt;br /&gt;
The City University of New York (CUNY) High Performance Computing Center (HPCC) is a core research and education facility for the University. It is located on the campus of the College of Staten Island, 2800 Victory Boulevard, Staten Island, New York 10314. The core mission of CUNY-HPCC is to advance education and boost scientific research and discovery at the University by managing state-of-the-art computing infrastructure and by providing comprehensive research support services, including domain-specific expertise in various aspects of computationally intensive research. CUNY is a member of the Empire AI (EAI) consortium, so CUNY-HPCC is a stepping stone for CUNY researchers toward EAI advanced facilities.  &lt;br /&gt;
&lt;br /&gt;
== Empire AI and CUNY-HPCC ==&lt;br /&gt;
The Empire AI consortium includes the &#039;&#039;&#039;CUNY Graduate Center&#039;&#039;&#039;, Columbia University, Cornell University, Icahn School of Medicine, New York University, Rochester Institute of Technology, Rensselaer Polytechnic Institute, the State University of New York, University at Buffalo, and University of Rochester.  &#039;&#039;&#039;CUNY-HPCC provides supports and maintains tickets for all CUNY users with allocation on EAI.  In addition CUNY-HPCC is a stepping stone for CUNY researchers since it operates (on smaller scale) architectures (Including nodes with Hopper) similar to EAI, including extended &amp;quot;Alpha&amp;quot; server as well new &amp;quot;Beta&amp;quot; computer.&#039;&#039;&#039; The latter will consist of 288 B200 GPU and recently added RTX 6000 pro nodes. The expected cost for &#039;&#039;&#039;EAI is $0.50/unit (SU),&#039;&#039;&#039; which will provide CUNY PIs with a rate that is well below a typical AWS rate.  One SU corresponds to one hour of H100 compute, and one hour of B200 compute corresponds to two SU.  In comparison the CUNY-HPCC recovery costs for public servers are &#039;&#039;&#039;$0.015 per CPU hour (1 unit) and $0.09 per GPU hour (6 units).&#039;&#039;&#039;  See section HPCC access plans for further details.    &lt;br /&gt;
&lt;br /&gt;
== CUNY-HPCC services ==&lt;br /&gt;
CUNY-HPCC provides professionally maintained, modern computational environment and architectures, advanced storage and fast interconnects. CUNY-HPCC:     &lt;br /&gt;
&lt;br /&gt;
:*Supports research computing at CUNY - for faculty, their collaborators at other universities, and their public and private sector partners, and CUNY students and research staff.&lt;br /&gt;
:*Provides state-of-the-art computing resources and comprehensive research support services including expertise and full support for users with allocation on EMPIRE-AI. &lt;br /&gt;
:*Creates opportunities for the CUNY research community to develop new partnerships with the government and private sectors.&lt;br /&gt;
:*Leverages the HPCC capabilities to acquire additional research resources for its faculty and graduate students in existing and major new programs.&lt;br /&gt;
:*Maintains tickets for all CUNY users with allocation on EAI.&lt;br /&gt;
&lt;br /&gt;
== Organization of systems and data storage (architecture) ==&lt;br /&gt;
All user data and project data are kept on  Parallel File System Storage (PFSS) which is mounted only on login node(s) of all servers. It holds both user directories and specific partition  called  &#039;&#039;&#039;/scratch (see below).&#039;&#039;&#039; The main features of &#039;&#039;&#039;/scratch&#039;&#039;&#039;  partition are&#039;&#039;&#039;: 1.&#039;&#039;&#039; It is mounted on all computational nodes and on all login nodes &#039;&#039;&#039;2.&#039;&#039;&#039; It is fast &#039;&#039;&#039;3.&#039;&#039;&#039; &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition is temporary space, but is &#039;&#039;&#039;not  a home directory&#039;&#039;&#039;  for accounts nor can be used for long term data preservation.  Users must use &amp;quot;staging&amp;quot; procedure described below to ensure preservation of their data, codes and parameters files. The figure below is a schematic of the environment.   &lt;br /&gt;
&lt;br /&gt;
Upon  registering with HPCC every user will get 2 directories:&lt;br /&gt;
&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – this is temporary workspace on the HPC systems. Currently scratch resides on the same file system as global/u.&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – space for “home directory”, i.e., storage space on the DSMS for program, scripts, and data&lt;br /&gt;
&lt;br /&gt;
:•	In some instances a user will also have use of disk space on the DSMS in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;projectid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; (IRods). &lt;br /&gt;
[[File:HPCC_structure.png|center|frameless|900x900px]]&lt;br /&gt;
The &#039;&#039;&#039;/global/u/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory has a quota (see below for details) while the &#039;&#039;&#039;/scratch/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory does not. However, the &#039;&#039;&#039;/scratch&#039;&#039;&#039; space is cleaned up following the rules described below. There are no guarantees of any kind that files in &#039;&#039;&#039;/scratch&#039;&#039;&#039; will be preserved during hardware crashes or cleanup. Access to all HPCC resources is provided by the bastion host called &#039;&amp;lt;nowiki/&amp;gt;&#039;&#039;&#039;&#039;&#039;chizen&#039;&#039;&#039;&#039;&#039;. The Data Transfer Node called &#039;&#039;&#039;Cea&#039;&#039;&#039; allows file transfer from/to remote sites directly to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; or to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;.&lt;br /&gt;
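&lt;br /&gt;
For illustration (a sketch only; the host name used for &#039;&#039;&#039;Cea&#039;&#039;&#039; below is a placeholder, as are the file and user names), files can be staged through the Data Transfer Node with standard tools such as scp:&lt;br /&gt;
&lt;br /&gt;
  # copy a local archive into the home directory (placeholder host and user names)&lt;br /&gt;
  scp results.tar.gz my_userid@cea.csi.cuny.edu:/global/u/my_userid/&lt;br /&gt;
  # or copy input data directly into scratch space for use by jobs&lt;br /&gt;
  scp input_data.tar.gz my_userid@cea.csi.cuny.edu:/scratch/my_userid/&lt;br /&gt;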
&lt;br /&gt;
==Computing architectures at HPCC==&lt;br /&gt;
&lt;br /&gt;
The HPC Center operates a variety of architectures in order to support complex and demanding workflows.  All computational resources of different types are united into a single hybrid cluster called Arrow. The latter deploys symmetric multiprocessor (SMP) nodes with and without GPUs, a distributed shared memory (NUMA) node, fat (large-memory) nodes and advanced SMP nodes with multiple GPUs. The number of GPUs per node varies between 2 and 8, as do the GPU interface and GPU family. Thus the basic GPU nodes hold two Tesla K20m GPUs (plugged in through a PCIe interface) while the most advanced ones support eight Ampere A100 GPUs connected via the SXM interface.   &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Overview of Computational architectures&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SMP&#039;&#039;&#039; servers have several processors (working under a single operating system) which &amp;quot;share everything&amp;quot;.  Thus  all cpu-cores allocate a common memory block via shared bus or data path. SMP servers support all combinations of memory VS cpu (up to the limits of the particular computer). The SMP servers are commonly used to run sequential or thread parallel (e.g. OpenMP) jobs and they may have or may not have GPU.  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Cluster&#039;&#039;&#039; is defined as a single system comprising set of servers interconnected with high performance network. Specific software coordinates  programs on and/or across those in order to  perform computationally intensive tasks. The most common cluster type is the one that consists of several identical SMP servers connected via fast interconnect.  Each SMP member of the cluster is called a &#039;&#039;&#039;node&#039;&#039;&#039;. All nodes run independent copies of the same operating system (OS). Some or all of the nodes may incorporate GPU.  &lt;br /&gt;
&lt;br /&gt;
Hybrid clusters combine nodes of different architectures. For instance the main CUNY-HPCC machine is a hybrid cluster called &#039;&#039;&#039;Arrow&#039;&#039;&#039;.  Sixty two (62) of its nodes are identical GPU enabled SMP servers each with 2 x GPU K20m, 3 are SMP but with extended memory (fat nodes), one node is distributed shared memory  node (NUMA, see below) and 2 are fat SMP servers especially designed to support 8 NVIDIA GPU per node. The latter are connected via SXM interface. In addition HPCC operates the cluster &#039;&#039;&#039;Herbert&#039;&#039;&#039; dedicated only to education. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Distributed shared memory&#039;&#039;&#039; computer is tightly coupled server in which the memory is physically distributed, but it is logically unified as a single block. The system resembles SMP, but the number of cpu cores and the amounts of memory possible is far beyond limitations of the SMP.  Because the memory is distributed, the access times across address space are non-uniform. Thus, this architecture is called Non Uniform Memory Access (NUMA) architecture.  Similarly to SMP, the &#039;&#039;&#039;NUMA&#039;&#039;&#039; systems are typically used for applications such as data mining and decision support system in which processing can be parceled out to a number of processors that collectively work on a common data. HPCC operates the &#039;&#039;&#039;NUMA&#039;&#039;&#039; node at Arrow named &#039;&#039;&#039;Appel&#039;&#039;&#039;.  This node does not have GPU. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039; Infrastructure systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
o	Master Head Node (&#039;&#039;&#039;MHN/Arrow)&#039;&#039;&#039; is a redundant login node from which all jobs on all servers start. This server is not directly accessible from outside CSI campus. Note that name of main server and its login nodes are the same Arrow. Thus users can access the Arrow login nodes using name Arrow or MHN.  &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Chizen&#039;&#039;&#039; is a redundant gateway server which provides access to protected HPCC domain.&lt;br /&gt;
&lt;br /&gt;
o	 &#039;&#039;&#039;Cea&#039;&#039;&#039; is a file transfer node allowing transfer of files between users’ computers and the HPCC storage, to/from the /scratch space or to/from /global/u/&amp;lt;userid&amp;gt;. &#039;&#039;&#039;Cea&#039;&#039;&#039; is accessible directly (not only via &#039;&#039;&#039;Chizen&#039;&#039;&#039;), but allows only a limited set of shell commands.  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Table 1&#039;&#039;&#039; below provides a quick summary of the attributes of each of the sub-clusters of the main HPC Center cluster, called Arrow.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Master Head Node&lt;br /&gt;
!Sub System&lt;br /&gt;
!Tier&lt;br /&gt;
!Type&lt;br /&gt;
!Type of Jobs&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores&lt;br /&gt;
!GPUs&lt;br /&gt;
!Mem/node&lt;br /&gt;
!Mem/core&lt;br /&gt;
!Chip Type&lt;br /&gt;
!GPU Type and Interface&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;17&amp;quot; |&#039;&#039;&#039;&amp;lt;big&amp;gt;Arrow&amp;lt;/big&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Penzias&lt;br /&gt;
| rowspan=&amp;quot;10&amp;quot; |Advanced&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Hybrid Cluster&lt;br /&gt;
|Sequential &amp;amp; Parallel jobs w/wo GPU&lt;br /&gt;
|66&lt;br /&gt;
|16 &lt;br /&gt;
|2&lt;br /&gt;
|64 GB&lt;br /&gt;
|4 GB&lt;br /&gt;
|SB, EP 2.20 GHz&lt;br /&gt;
|K20m GPU, PCIe v2&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Sequential &amp;amp; Parallel jobs&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |1&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|1500 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|36&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|32 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Appel&lt;br /&gt;
|NUMA&lt;br /&gt;
|Massive Parallel, sequential, OpenMP&lt;br /&gt;
|1&lt;br /&gt;
|384&lt;br /&gt;
| -&lt;br /&gt;
|11 TB&lt;br /&gt;
|28 GB&lt;br /&gt;
|IB, 3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Cryo&lt;br /&gt;
|SMP&lt;br /&gt;
|Sequential and Parallel jobs, with GPU&lt;br /&gt;
|1&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|1500 GB&lt;br /&gt;
|37 GB&lt;br /&gt;
|SL, 2.40 GHz&lt;br /&gt;
|V100 (32GB) GPU, SXM&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Blue Moon&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Hybrid Cluster&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Sequential and Parallel jobs w/wo GPU&lt;br /&gt;
|24&lt;br /&gt;
|32&lt;br /&gt;
| -&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |192 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |6 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SL, 2.10 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|32 &lt;br /&gt;
|2&lt;br /&gt;
|V100(16GB) GPU, PCIe&lt;br /&gt;
|-&lt;br /&gt;
|Karle&lt;br /&gt;
|SMP&lt;br /&gt;
|Visualization, MATLAB/Mathematica&lt;br /&gt;
|1&lt;br /&gt;
|36*&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Chizen&lt;br /&gt;
|Gateway&lt;br /&gt;
|No jobs allowed&lt;br /&gt;
| colspan=&amp;quot;7&amp;quot; | -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CFD&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot; |Parallel, Seq, OpenMP &lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|768 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 4.8 GHz&lt;br /&gt;
|A40, PCIe, v4 &lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |PHYS&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|640 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4 GHz&lt;br /&gt;
|L40, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CHEM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.0 GHz&lt;br /&gt;
|A100/40, SXM&lt;br /&gt;
|-&lt;br /&gt;
|ASRC&lt;br /&gt;
|Condo&lt;br /&gt;
|SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|}&lt;br /&gt;
Note: SB = Intel(R) Sandy Bridge, HL = Intel (R) Haswell, IB = Intel (R) Ivy Bridge, SL = Intel (R) Xeon(R) Gold, ER  = AMD(R) EPYC ROMA, EM = AMD(R) EPYC MILAN, EG = AMD (R) EPYC GENOA   &lt;br /&gt;
&lt;br /&gt;
= Recovery of  operational costs =&lt;br /&gt;
CUNY-HPCC is a not-for-profit core research facility at CUNY. Our mission is to support all types of research that require advanced computational resources. CUNY-HPCC operations are not for profit and are NOT directly or indirectly sponsored by CUNY or the College of Staten Island (CSI). Consequently, CUNY-HPCC applies a cost recovery model recapturing only &#039;&#039;&#039;&amp;lt;u&amp;gt;operational costs, with no profit for HPCC&amp;lt;/u&amp;gt;&#039;&#039;&#039;. The recovered costs are calculated using actual documented operational expenses and are break-even for all CUNY users. The methodology is approved by CUNY-RF and is the same methodology used in other CUNY research facilities. The costs are reviewed and updated twice a year. The cost recovery charging schema is based on the &#039;&#039;&#039;&amp;lt;u&amp;gt;unit-hour&amp;lt;/u&amp;gt;&#039;&#039;&#039;. A unit can be either a CPU unit or a GPU unit. The definitions are given in the table below:&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Definitions of unit-hour&lt;br /&gt;
!Type of resource&lt;br /&gt;
!Unit-hour &lt;br /&gt;
!For V100, A30, A40 or L40&lt;br /&gt;
!For A100&lt;br /&gt;
|-&lt;br /&gt;
|CPU unit &lt;br /&gt;
|1 cpu core/hour&lt;br /&gt;
| --&lt;br /&gt;
| --&lt;br /&gt;
|-&lt;br /&gt;
|GPU unit &lt;br /&gt;
|(4 cpu cores + 1 GPU thread )/hour&lt;br /&gt;
|4 cpu cores + 1 GPU&lt;br /&gt;
|4 cpu cores and 1/7 A100 &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Compute on public resources ===&lt;br /&gt;
The cost recovery model for public (non-condominium) servers offers the following options:&lt;br /&gt;
&lt;br /&gt;
# Minimal Access Plan (MAP)&lt;br /&gt;
# Compute on Demand (CODP)&lt;br /&gt;
# Lease a node(s)&lt;br /&gt;
&lt;br /&gt;
=== Minimal Access Plan (MAP) ===&lt;br /&gt;
Colleges may participate in any of the Minimal Access Plan tiers. The tiered pricing structure is as follows:&lt;br /&gt;
&lt;br /&gt;
- Tier A: $5,000 per year&lt;br /&gt;
&lt;br /&gt;
- Tier B: $10,000 per year&lt;br /&gt;
&lt;br /&gt;
- Tier C: $25,000 per year&lt;br /&gt;
&lt;br /&gt;
Within each tier, the cost is &#039;&#039;&#039;$0.015 per CPU hour and $0.09 per GPU hour.&#039;&#039;&#039; It is important to note that the Minimum Access Plan (MAP) tiers A, B, and C are &#039;&#039;&#039;not all-inclusive.&#039;&#039;&#039; Therefore, even if a college pays for a higher tier, it does not guarantee unlimited use of all college employees, faculty, and students throughout the year.&lt;br /&gt;
&lt;br /&gt;
# The MAP plan is indirectly linked to the number of hours used. Consequently, the definition of “up to 12 users for B tier” should not be interpreted as “all users up to 12 per college receive unlimited access to HPCC resources.”&lt;br /&gt;
# The number of users per tier is determined by statistical analysis of resource usage and statistics for the average duration of a job across all CUNY institutions. This means that if the number of users from a college exceeds the number of hours encoded in the MAP fee, the additional hours will be charged at the preferred rate of $0.015 per CPU hour and $0.09 per GPU thread hour.&lt;br /&gt;
# Furthermore, the &amp;lt;u&amp;gt;MAP fee may fully cover the expenses for individuals from a given college for a year,&amp;lt;/u&amp;gt; but it may also not. This depends on the actual usage and type of resources, as well as the number of additional individuals from the same college who appear during the year. It is crucial to understand that allocation on HPCC is not restricted; our focus is solely on the completion of tasks. &lt;br /&gt;
# The free 11,520 CPU hours and 1,440 GPU hours are allocated per PI account and project and are available only for MAP-B and MAP-C (see below). These free hours are intended to facilitate project establishment. Consequently, the PI can request that a member or members of a group use the time to explore new research opportunities. For instance, upon creating an account, PI X will receive the free hours; if the PI then hires graduate student(s), they may share these free hours provided they work on the same project.&lt;br /&gt;
# It is important to note that each GPU requires at least four CPU cores to operate. Therefore, if a user requests one GPU thread, the request equates to one GPU thread plus four CPU threads, which is equivalent to $0.15 per unit-hour for that unit (units are explained above). Note that not all GPUs support virtualization, so a unit may include the whole GPU, depending on the GPU type used.  &lt;br /&gt;
&lt;br /&gt;
==== Compute on demand (CODP) ====&lt;br /&gt;
Users from colleges that do not participate in MAP (A,B,C) are charged &#039;&#039;&#039;$0.018 per CPU hour and $0.11 per GPU hour.&#039;&#039;&#039; There is no free time associated with CODP.  &lt;br /&gt;
&lt;br /&gt;
==== Lease a public node(s) ====&lt;br /&gt;
Users may lease node(s) for a project. That ensures 100% access, with no time or job limitations, to the leased resource. The minimum lease time is 30 days (one month). Longer leases (more than 90 days) receive a 10% discount. &#039;&#039;&#039;MAP users&#039;&#039;&#039; are charged between $172 and $950 per month (see below), depending on the type of node. &#039;&#039;&#039;Non-MAP&#039;&#039;&#039; users are charged between $249.82 and $1,399 per month, depending on the type of node. Please see below for details and examples. &lt;br /&gt;
&lt;br /&gt;
=== Condo ===&lt;br /&gt;
Condo users only pay for infrastructure support.  The annual fee depends on the type of node and ranges from $1,540 to $4,520 per year. Please see table below for details. &lt;br /&gt;
&lt;br /&gt;
=== Storage ===&lt;br /&gt;
Storage costs are $60 per TB per year, backup costs are $45 per TB per year, and archive costs are $35 per TB per year. The first 50 GB of scratch storage are free. Prices are calculated at the end of each month.&lt;br /&gt;
&lt;br /&gt;
# Additionally, note that &#039;&#039;&#039;file transfers from/to HPCC are free&#039;&#039;&#039;, but it is important to consider the CUNY network speed, which is significantly slower than modern standards. This will impact the time required to download large data sets. For large data sets, HPCC uses and recommends secure parallel transfer via Globus.&lt;br /&gt;
# All services associated with data and storage provided by HPCC are free for CUNY users.&lt;br /&gt;
&lt;br /&gt;
=== HPCC access plans  ===&lt;br /&gt;
a.     &#039;&#039;&#039;Minimum access (MAP):&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Minimum access is designed to provide broad support for research activities in any college, to promote collaboration between colleges, to help establish a new research project, and/or to serve as a testbed for new studies. MAP accounts operate under a strict fair share policy, so the actual waiting time for a job in a queue depends on the resources used by that account in previous cycles. In addition, all jobs have strict time limits, so long jobs must use checkpoints.&lt;br /&gt;
&lt;br /&gt;
The MAP has 3 tiers:&lt;br /&gt;
&lt;br /&gt;
·     A: Basic tier fee is $5000 per year. It is designed to provide support for users from colleges with low level of research activities. The fee covers infrastructure expenses associated with 1-2 users from these colleges. &lt;br /&gt;
&lt;br /&gt;
·     B:  Medium tier fee is $15,000 per year. The fee covers infrastructure expenses of up to 12 users from these colleges. In addition, every account under medium tier gets free 11520 CPU hours and free 1440 GPU hours upon opening. &lt;br /&gt;
&lt;br /&gt;
·      C: Advanced tier is $25,000 per year. The fee covers infrastructure expenses for all users from these colleges. In addition every new account from this tier gets free 11520 CPU hours and free 1440 GPU hours upon opening.  &lt;br /&gt;
&lt;br /&gt;
The MAP users get charged per CPU/GPU hour at low rate of &#039;&#039;&#039;&amp;lt;u&amp;gt;$0.015 per cpu hour and $0.09 per GPU hour&amp;lt;/u&amp;gt;.&#039;&#039;&#039;   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for MAP users example&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.015/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|4&lt;br /&gt;
|0&lt;br /&gt;
|$0.24/hour&lt;br /&gt;
|-&lt;br /&gt;
| 4 cores + 1 GPU &lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.15/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.33/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2  &lt;br /&gt;
|$0.42/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2  &lt;br /&gt;
|$0.66/hour&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$1.32/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
b.     &#039;&#039;&#039;Computing on demand (CODP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Computing on demand plan (CODP) is open to all users from CUNY colleges that do not participate in the MAP plan but want to use the HPCC resources. CODP accounts operate under a strict fair share policy, so the actual waiting time for a job in a queue depends on the resources previously used. In addition, all jobs have time limits, so long jobs must use checkpoints. Users in CODP are charged for the time (CPU and GPU) per hour. The current rates are &#039;&#039;&#039;$0.018 per cpu hour and $0.11 per GPU hour.&#039;&#039;&#039; Unlike MAP, new CODP accounts do not come with free time. Invoices are generated and sent to users (PIs only) at the end of each month. The examples in the following table explain the fee structure:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for CODP plan&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.018/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.288/hour&lt;br /&gt;
|-&lt;br /&gt;
|4 cores + 1 GPU&lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.293/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.334/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 1 GPU&lt;br /&gt;
|32&lt;br /&gt;
|1  &lt;br /&gt;
|$0.666/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU &lt;br /&gt;
|32&lt;br /&gt;
|2&lt;br /&gt;
|$0.756/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
c.  &#039;&#039;&#039;Leasing public node(s) (LNP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Leasing node plan allows the users to lease the node(s) for the duration of the project. The minimum lease time is 30 days (one month), but leases of any length are possible. Discounts of 10% are given to users whose lease is longer than 90 days. Discounts cannot be combined.  In difference to MAP and CODP the LNP users do not compete for resources and have full access to rented resources 24/7.   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for lease node(s) for MAP users&lt;br /&gt;
|Job (MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/30 days&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$172.80&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$264.96&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2&lt;br /&gt;
|$302.40&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$475.20&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$760.00&lt;br /&gt;
|-&lt;br /&gt;
|64 cores + 8 GPU&lt;br /&gt;
|64&lt;br /&gt;
|8&lt;br /&gt;
|$950.40&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for lease a node(s) for &amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; non &amp;lt;/span&amp;gt;-MAP users&lt;br /&gt;
|Job (non-MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/month&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$249.82&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$497.64&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1&lt;br /&gt;
|$443.23&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$886.64&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40 &lt;br /&gt;
|8&lt;br /&gt;
|$1399.68&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
d.     &#039;&#039;&#039;Condo Ownership (COP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Condo describes a model when user(s) own a node/server managed by HPCC. Only full time faculty can own condo node. Condo nodes are fully integrated into HPCC infrastructure. The owners pay only HPCC’s infrastructure support operational fee which includes only proportional part of licenses and materials need for day-to-day operations. The fees are reviewed twice a year and currently are $0.003 per CPU hour and $0.02 per GPU hour. Condo owners can “borrow”  (upon agreement) free of charge any node(s) from condo stack and can also lease (for higher fee – see below) their own nodes to non-condo users. The minimum let time is 30 days. The fees collected from non-condo users offset payments of the owner.   &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Condo owners costs per year&lt;br /&gt;
|Type of condo node&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/year&lt;br /&gt;
|-&lt;br /&gt;
|Large hybrid SXM&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|$4518.92 &lt;br /&gt;
|-&lt;br /&gt;
|Small hybrid&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|$1540.54&lt;br /&gt;
|-&lt;br /&gt;
|Medium compute&lt;br /&gt;
|96&lt;br /&gt;
|0&lt;br /&gt;
|$2464.86&lt;br /&gt;
|-&lt;br /&gt;
|Large compute&lt;br /&gt;
|128&lt;br /&gt;
|0&lt;br /&gt;
|$3286.49&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Condo owners can contract their node(s) to other non-condo users. Renting period is unlimited with min. length of 30 days. The table below shows the payments the non-condo users recompense the condo owners. These fees are accumulated in owners account(s) and do offset the owner’s duties. Discount of 10% is applied for leases longer than 90 days.    &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Type of nodes and monthly lease fees for condo nodes&lt;br /&gt;
!Type of node&lt;br /&gt;
!Renters cost/month&lt;br /&gt;
!Long term (90+ days) rent cost/month&lt;br /&gt;
!CPU/node&lt;br /&gt;
!CPU type&lt;br /&gt;
!GPU/node &lt;br /&gt;
!GPU type&lt;br /&gt;
!GPU interface&lt;br /&gt;
|-&lt;br /&gt;
|Large Hybrid&lt;br /&gt;
|$602.52&lt;br /&gt;
|$564.86&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.2 GHz&lt;br /&gt;
|8&lt;br /&gt;
|A100/80&lt;br /&gt;
|SXM&lt;br /&gt;
|-&lt;br /&gt;
|Small Hybrid&lt;br /&gt;
|$205.41&lt;br /&gt;
|$192.57&lt;br /&gt;
|48&lt;br /&gt;
|EPYC, 2.8 GHz&lt;br /&gt;
|2&lt;br /&gt;
|A40, A30, L40&lt;br /&gt;
|PCIe v4&lt;br /&gt;
|-&lt;br /&gt;
|Medium Non GPU&lt;br /&gt;
|$328.65&lt;br /&gt;
|$308.11&lt;br /&gt;
|96&lt;br /&gt;
|EPYC, 4.11GHz&lt;br /&gt;
|0&lt;br /&gt;
|None&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|Large Non GPU&lt;br /&gt;
|$438.20&lt;br /&gt;
|$410.81&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.0 GHz&lt;br /&gt;
|0&lt;br /&gt;
|None &lt;br /&gt;
|NA&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Free time ===&lt;br /&gt;
Any new project &#039;&#039;&#039;from CUNY colleges/centers that participate in MAP-B or  MAP-C&#039;&#039;&#039; is entitled to get free &#039;&#039;&#039;11520 CPU hours and 1440 GPU hours.&#039;&#039;&#039;  Users under &#039;&#039;&#039;MAP-A&#039;&#039;&#039; are not entitled for free time. The free compute hours are intended to help to establish a project and thus are shared for all members of the project. Thus compute free hours can be used either by PI  or by any number of project&#039;s members. It is important to note that &amp;lt;u&amp;gt;free time is per project not per user account, so any project can have free time only once. External collaborators to CUNY are not normally eligible for free time.&amp;lt;/u&amp;gt; Additional hours beyond free time are charged with MAP plan rates. &#039;&#039;&#039;&amp;lt;u&amp;gt;Please contact CUNY-HPCC director for  further details.&amp;lt;/u&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Support for research grants =&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;u&amp;gt;All proposals dated on Jan 1st 2026 (&amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; 01/01/26 &amp;lt;/span&amp;gt;) and later&amp;lt;/u&amp;gt;&#039;&#039;&#039; that require computational resources &#039;&#039;&#039;&amp;lt;u&amp;gt;must include budget for cost recovery fees at CUNY-HPCC.&amp;lt;/u&amp;gt;&#039;&#039;&#039;  For a project the PI can choose between: &lt;br /&gt;
&lt;br /&gt;
* lease the node(s), That is useful option for well defined projects and those with high computational component requiring 100% availability of the computational resource. &lt;br /&gt;
* use &amp;quot;on-demand&amp;quot; resources. That is flexible option good for experimental projects or exploring new areas of study. The downgrade is that resources are shared among all users under fair share policy. Thus immediate access to resource cannot be guaranteed. &lt;br /&gt;
* participate in CONDO  tier. That is most beneficial option in terms of availability of resources and level of support. It fits best the focused research of group(s) (e.g. materials science). &lt;br /&gt;
&lt;br /&gt;
In all cases the PI can use the appropriate rates listed above to establish correct budget for the proposal.  PI should  &#039;&#039;&#039;&amp;lt;u&amp;gt;contact the Director of CUNY-HPCC Dr. Alexander Tzanov&amp;lt;/u&amp;gt;&#039;&#039;&#039;  (alexander.tzanov@csi.cuny.edu) and discuss  the project&#039;s computational  requirements  including optimal and most economical computational workflows, suitable hardware, shared or own resources, CUNY-HPCC support options and any other matter concerning  correct and optimal computational budget for the proposal.    &lt;br /&gt;
&lt;br /&gt;
= Partitions and jobs =&lt;br /&gt;
The only way to submit job(s) to HPCC servers is through SLURM batch system.  Any  job despite of its type (interactive, batch, serial, parallel etc.) must be submitted via SLURM. The latter allocates the requested resources on proper server and starts the job(s) according to predefined strict fair share policy. Computational resources (cpu-cores, memory, GPU) are organized in &#039;&#039;&#039;partitions&#039;&#039;&#039;. The table below describes the partitions and their limitations. The users are granted permissions house one or other partition and corresponding QOS key.   The table below shows the limitations of the partitions (in progress).&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Partition&lt;br /&gt;
!Max cores/job&lt;br /&gt;
!Max jobs/user&lt;br /&gt;
!Total cores/group&lt;br /&gt;
!Time limits&lt;br /&gt;
!Tier&lt;br /&gt;
!&lt;br /&gt;
!GPU types&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|partnsf&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|K20m, V100/16, A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partchem&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A100/80, A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partcfd&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partsym&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partasrc&lt;br /&gt;
|48&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabD&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|V100/16,A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabN&lt;br /&gt;
|384&lt;br /&gt;
|50&lt;br /&gt;
|384&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|None&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partphys&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|L40&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;partnsf&#039;&#039;&#039; is the main partition with assigned resources across all sub-servers. Users may submit sequential, thread parallel or distributed parallel jobs with or without GPU.&lt;br /&gt;
* &#039;&#039;&#039;partchem&#039;&#039;&#039;  is CONDO partition.  &lt;br /&gt;
* &#039;&#039;&#039;partphys&#039;&#039;&#039;   is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partsym&#039;&#039;&#039;    is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partasrc&#039;&#039;&#039;   is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partmatlabD&#039;&#039;&#039; partition allows to run MATLAB&#039;s Distributes Parallel  Server across main cluster. &lt;br /&gt;
* &#039;&#039;&#039;partmatlabN&#039;&#039;&#039; partition to access large matlab node with 384 cores and 11 TB of shared memory. It is useful to run parallel Matlab jobs with Parallel ToolBox&lt;br /&gt;
* &#039;&#039;&#039;partdev&#039;&#039;&#039; is dedicated to development. All HPCC users have access to this partition with assigned resources of one computational node with 16 cores, 64 GB of memory and 2 GPU (K20m). This partition has time limit of 4 hours.&lt;br /&gt;
&lt;br /&gt;
= Hours of Operation =&lt;br /&gt;
In order to maximize the use of resources HPCC applies “rolling” maintenance scheme across all systems. When downtime is needed, HPCC will notify all users a week or more in advance (unless emergency situation occur).  Typically, the fourth Tuesday mornings in the month from 8:00AM to 12PM is normally reserved (but not always used) for scheduled maintenance.  Please plan accordingly.  Unplanned maintenance to remedy system related problems may be scheduled as needed out of above mentioned days. Reasonable attempts will be made to inform users running on those systems when these needs arise. Note that users are strongly encouraged to use checkpoints in their jobs.&lt;br /&gt;
&lt;br /&gt;
= User Support =&lt;br /&gt;
Users are strongly encouraged to read this Wiki carefully before submitting ticket(s) for help. In particular, the sections on compiling and running parallel programs, and the section on the SLURM batch queueing system will give you the essential knowledge needed to use the CUNY HPCC systems.  We have strived to maintain the most uniform user applications environment possible across the Center&#039;s systems to ease the transfer of applications&lt;br /&gt;
and run scripts among them.  &lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY&lt;br /&gt;
community in parallel programming techniques, HPC computing architecture, and the essentials of using our&lt;br /&gt;
systems. Please follow our mailings on the subject and feel free to inquire about such courses.  We regularly schedule training visits and classes at the various CUNY campuses.  Please let us know if such a training visit is of interest.  In the past, topics have include an overview of parallel programming, GPU programming and architecture, using the evolutionary biology software at the HPC Center,  the SLURM queueing system at the CUNY HPC Center, Mixed GPU-MPI and OpenMP programming, etc.  Staff has also presented guest lectures at formal classes throughout the CUNY campuses.    &lt;br /&gt;
&lt;br /&gt;
If you have problems accessing your account and cannot login to the ticketing service, please send an email to:&lt;br /&gt;
&lt;br /&gt;
  [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu] &lt;br /&gt;
&lt;br /&gt;
= Warnings and modes of operation =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. hpchelp@csi.cuny.edu is for questions and accounts help communication &#039;&#039;&#039;only&#039;&#039;&#039; and does not accept tickets unless ticketing system is not operational. For tickets please use  the ticketing system mentioned above. This ensures that the person on staff with the most appropriate skill set and job related responsibility will respond to your questions. During the business week you should expect a 48h response, quite  often even same day response. During the weekend you may not get any response. &lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;&#039;E-mails to hpchelp@csi.cuny.edu must have a valid CUNY e-mail as reply address.&#039;&#039;&#039; Messages originated from public mailers (google, hotmail, etc) are filtered out.&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;&#039;Do not send questions to individual CUNY HPC Center staff members directly.&#039;&#039;&#039;  These will be returned to the sender with a polite request to submit a ticket or email the Helpline.  This applies to replies to initial questions as well.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff members are focused on providing high quality support to its user community, but compared&lt;br /&gt;
to other HPC Centers of similar size &#039;&#039;&#039;our staff is extremely lean&#039;&#039;&#039;.  Please make full use of the tools that we have provided (especially&lt;br /&gt;
the Wiki), and feel free to offer suggestions for improved service.  We hope and expect your experience in using&lt;br /&gt;
our systems will be predictably good and productive.&lt;br /&gt;
&lt;br /&gt;
= User Manual =&lt;br /&gt;
The old version of the user manual provides PBS not SLURM batch scripts as examples. Currently CUNY-HPCC uses SLURM scheduler so users must check and use only the updated brief SLURM manual distributed with new accounts or ask CUNY-HPCC for a copy of the latter.&lt;/div&gt;</summary>
		<author><name>Alex</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=987</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=987"/>
		<updated>2026-03-14T15:25:33Z</updated>

		<summary type="html">&lt;p&gt;Alex: /* Minimal Access Plan (MAP) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
[[Image:hpcc-panorama3.png]]&lt;br /&gt;
&lt;br /&gt;
== Mission of CUNY-HPCC ==&lt;br /&gt;
The City University of New York (CUNY) High Performance Computing Center (HPCC) is a core research and education facility for the University. It is located on the campus of the College of Staten Island, 2800 Victory Boulevard, Staten Island, New York 10314. The core mission of CUNY-HPCC is to advance education and boost scientific research and discovery at the University by managing state-of-the-art computing infrastructure and by providing comprehensive research support services, including domain-specific expertise in various aspects of computationally intensive research. CUNY is a member of the Empire AI (EAI) consortium, so CUNY-HPCC is a stepping stone for CUNY researchers toward EAI advanced facilities.  &lt;br /&gt;
&lt;br /&gt;
== Empire AI and CUNY-HPCC ==&lt;br /&gt;
The Empire AI consortium includes the &#039;&#039;&#039;CUNY Graduate Center&#039;&#039;&#039;, Columbia University, Cornell University, Icahn School of Medicine, New York University, Rochester Institute of Technology, Rensselaer Polytechnic Institute, the State University of New York, University at Buffalo, and University of Rochester.  &#039;&#039;&#039;CUNY-HPCC provides supports and maintains tickets for all CUNY users with allocation on EAI.  In addition CUNY-HPCC is a stepping stone for CUNY researchers since it operates (on smaller scale) architectures (Including nodes with Hopper) similar to EAI, including extended &amp;quot;Alpha&amp;quot; server as well new &amp;quot;Beta&amp;quot; computer.&#039;&#039;&#039; The latter will consist of 288 B200 GPU and recently added RTX 6000 pro nodes. The expected cost for &#039;&#039;&#039;EAI is $0.50/unit (SU),&#039;&#039;&#039; which will provide CUNY PIs with a rate that is well below a typical AWS rate.  One SU corresponds to one hour of H100 compute, and one hour of B200 compute corresponds to two SU.  In comparison the CUNY-HPCC recovery costs for public servers are &#039;&#039;&#039;$0.015 per CPU hour (1 unit) and $0.09 per GPU hour (6 units).&#039;&#039;&#039;  See section HPCC access plans for further details.    &lt;br /&gt;
&lt;br /&gt;
== CUNY-HPCC services ==&lt;br /&gt;
CUNY-HPCC provides a professionally maintained, modern computational environment and architectures, along with advanced storage and fast interconnects. CUNY-HPCC:     &lt;br /&gt;
&lt;br /&gt;
:*Supports research computing at CUNY for faculty, their collaborators at other universities, their public and private sector partners, and CUNY students and research staff.&lt;br /&gt;
:*Provides state-of-the-art computing resources and comprehensive research support services including expertise and full support for users with allocation on EMPIRE-AI. &lt;br /&gt;
:*Creates opportunities for the CUNY research community to develop new partnerships with the government and private sectors.&lt;br /&gt;
:*Leverages HPCC capabilities to acquire additional research resources for CUNY faculty and graduate students in existing and major new programs.&lt;br /&gt;
:*Maintains tickets for all CUNY users with allocation on EAI.&lt;br /&gt;
&lt;br /&gt;
== Organization of systems and data storage (architecture) ==&lt;br /&gt;
All user and project data are kept on the Parallel File System Storage (PFSS), which is mounted only on the login node(s) of all servers. It holds both the user directories and a specific partition called &#039;&#039;&#039;/scratch&#039;&#039;&#039; (see below). The main features of the &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition are: &#039;&#039;&#039;1.&#039;&#039;&#039; it is mounted on all computational nodes and on all login nodes; &#039;&#039;&#039;2.&#039;&#039;&#039; it is fast; &#039;&#039;&#039;3.&#039;&#039;&#039; it is temporary space and is &#039;&#039;&#039;not a home directory&#039;&#039;&#039; for accounts, nor can it be used for long-term data preservation. Users must use the &amp;quot;staging&amp;quot; procedure described below to ensure preservation of their data, codes and parameter files. The figure below is a schematic of the environment.   &lt;br /&gt;
&lt;br /&gt;
Upon registering with HPCC, every user gets two directories:&lt;br /&gt;
&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – this is temporary workspace on the HPC systems. Currently scratch resides on the same file system as global/u.&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – space for “home directory”, i.e., storage space on the DSMS for program, scripts, and data&lt;br /&gt;
&lt;br /&gt;
:•	In some instances a user will also have use of disk space on the DSMS in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;projectid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; (IRods). &lt;br /&gt;
[[File:HPCC_structure.png|center|frameless|900x900px]]&lt;br /&gt;
The &#039;&#039;&#039;/global/u/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory has a quota (see below for details), while &#039;&#039;&#039;/scratch/&amp;lt;userid&amp;gt;&#039;&#039;&#039; does not. However, the &#039;&#039;&#039;/scratch&#039;&#039;&#039; space is cleaned up following the rules described below. There are no guarantees of any kind that files in &#039;&#039;&#039;/scratch&#039;&#039;&#039; will be preserved during hardware crashes or clean-ups.  Access to all HPCC resources is provided by the bastion host called &#039;&#039;&#039;chizen&#039;&#039;&#039;.  The Data Transfer Node called &#039;&#039;&#039;Cea&#039;&#039;&#039; allows file transfer from/to remote sites directly to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; or &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;.&lt;br /&gt;
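&lt;br /&gt;
Because &#039;&#039;&#039;/scratch&#039;&#039;&#039; is temporary, a typical workflow stages input files from the home directory to &#039;&#039;&#039;/scratch&#039;&#039;&#039; before a run and copies the results back afterwards. The commands below are only a minimal sketch of that staging procedure, run on a login node; they assume the directory layout shown above (scratch and home directories named after the login id) and an illustrative job directory name.&lt;br /&gt;
&lt;br /&gt;
 # stage inputs into scratch before submitting a job (run on a login node)&lt;br /&gt;
 mkdir -p /scratch/$USER/myjob&lt;br /&gt;
 rsync -av /global/u/$USER/myjob/inputs/ /scratch/$USER/myjob/&lt;br /&gt;
 # ... the job runs in /scratch/$USER/myjob ...&lt;br /&gt;
 # stage results back to the home directory after the job finishes&lt;br /&gt;
 rsync -av /scratch/$USER/myjob/results/ /global/u/$USER/myjob/results/&lt;br /&gt;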
&lt;br /&gt;
==Computing architectures at HPCC==&lt;br /&gt;
&lt;br /&gt;
The HPC Center operates a variety of architectures in order to support complex and demanding workflows.  All computational resources of different types are united into a single hybrid cluster called Arrow. The latter deploys symmetric multiprocessor (SMP) nodes with and without GPUs, a distributed shared memory (NUMA) node, fat (large-memory) nodes and advanced SMP nodes with multiple GPUs. The number of GPUs per node varies between 2 and 8, as do the GPU interface and GPU family. Thus the basic GPU nodes hold two Tesla K20m cards (attached through a PCIe interface), while the most advanced ones support eight Ampere A100 GPUs connected via the SXM interface.   &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Overview of Computational architectures&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SMP&#039;&#039;&#039; servers have several processors (working under a single operating system) which &amp;quot;share everything&amp;quot;: all cpu-cores access a common memory block via a shared bus or data path. SMP servers support all combinations of memory vs. cpu (up to the limits of the particular computer). SMP servers are commonly used to run sequential or thread-parallel (e.g. OpenMP) jobs, and they may or may not have GPUs.  &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;cluster&#039;&#039;&#039; is defined as a single system comprising a set of servers interconnected with a high-performance network. Specific software coordinates programs on and/or across those servers in order to perform computationally intensive tasks. The most common cluster type consists of several identical SMP servers connected via a fast interconnect.  Each SMP member of the cluster is called a &#039;&#039;&#039;node&#039;&#039;&#039;. All nodes run independent copies of the same operating system (OS). Some or all of the nodes may incorporate GPUs.  &lt;br /&gt;
&lt;br /&gt;
Hybrid clusters combine nodes of different architectures. For instance, the main CUNY-HPCC machine is a hybrid cluster called &#039;&#039;&#039;Arrow&#039;&#039;&#039;.  Sixty-two (62) of its nodes are identical GPU-enabled SMP servers, each with 2 K20m GPUs; 3 are SMP servers with extended memory (fat nodes); one is a distributed shared memory node (NUMA, see below); and 2 are fat SMP servers specially designed to support 8 NVIDIA GPUs per node, connected via the SXM interface. In addition, HPCC operates the cluster &#039;&#039;&#039;Herbert&#039;&#039;&#039;, which is dedicated only to education. &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;distributed shared memory&#039;&#039;&#039; computer is a tightly coupled server in which the memory is physically distributed but logically unified as a single block. The system resembles an SMP, but the possible number of cpu cores and amount of memory are far beyond the limitations of an SMP.  Because the memory is distributed, access times across the address space are non-uniform; thus this architecture is called Non-Uniform Memory Access (NUMA) architecture.  Similarly to SMP, &#039;&#039;&#039;NUMA&#039;&#039;&#039; systems are typically used for applications such as data mining and decision support systems, in which processing can be parceled out to a number of processors that collectively work on common data. HPCC operates the &#039;&#039;&#039;NUMA&#039;&#039;&#039; node of Arrow, named &#039;&#039;&#039;Appel&#039;&#039;&#039;.  This node does not have GPUs. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039; Infrastructure systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
o	The Master Head Node (&#039;&#039;&#039;MHN/Arrow&#039;&#039;&#039;) is a redundant login node from which all jobs on all servers start. This server is not directly accessible from outside the CSI campus. Note that the main server and its login nodes share the same name, Arrow; thus users can access the Arrow login nodes using either the name Arrow or MHN.  &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Chizen&#039;&#039;&#039; is a redundant gateway server which provides access to the protected HPCC domain.&lt;br /&gt;
&lt;br /&gt;
o	 &#039;&#039;&#039;Cea&#039;&#039;&#039; is a file transfer node allowing transfer of files between users’ computers and the /scratch space or /global/u/&amp;lt;userid&amp;gt;. &#039;&#039;&#039;Cea&#039;&#039;&#039; is accessible directly (not only via &#039;&#039;&#039;Chizen&#039;&#039;&#039;), but allows only a limited set of shell commands.  &lt;br /&gt;
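&lt;br /&gt;
For illustration, a transfer from a user&#039;s workstation through the Data Transfer Node might look like the sketch below. The host name &#039;&#039;cea&#039;&#039; is used here only as a placeholder for the actual DTN address provided with the account, and the paths follow the layout described above.&lt;br /&gt;
&lt;br /&gt;
 # copy an input archive from a local workstation to scratch via the DTN (placeholder host name)&lt;br /&gt;
 rsync -av project_inputs.tar.gz &amp;lt;userid&amp;gt;@cea:/scratch/&amp;lt;userid&amp;gt;/&lt;br /&gt;
 # pull results back from the home directory to the workstation&lt;br /&gt;
 rsync -av &amp;lt;userid&amp;gt;@cea:/global/u/&amp;lt;userid&amp;gt;/myjob/results/ ./results/&lt;br /&gt;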
&lt;br /&gt;
&#039;&#039;&#039;Table 1&#039;&#039;&#039; below provides a quick summary of the attributes of each of the sub-clusters of the main HPC Center system, called Arrow.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Master Head Node&lt;br /&gt;
!Sub System&lt;br /&gt;
!Tier&lt;br /&gt;
!Type&lt;br /&gt;
!Type of Jobs&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores&lt;br /&gt;
!GPUs&lt;br /&gt;
!Mem/node&lt;br /&gt;
!Mem/core&lt;br /&gt;
!Chip Type&lt;br /&gt;
!GPU Type and Interface&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;17&amp;quot; |&#039;&#039;&#039;&amp;lt;big&amp;gt;Arrow&amp;lt;/big&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Penzias&lt;br /&gt;
| rowspan=&amp;quot;10&amp;quot; |Advanced&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Hybrid Cluster&lt;br /&gt;
|Sequential &amp;amp; Parallel jobs w/wo GPU&lt;br /&gt;
|66&lt;br /&gt;
|16 &lt;br /&gt;
|2&lt;br /&gt;
|64 GB&lt;br /&gt;
|4 GB&lt;br /&gt;
|SB, EP 2.20 GHz&lt;br /&gt;
|K20m GPU, PCIe v2&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Sequential &amp;amp; Parallel jobs&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |1&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|1500 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|36&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|32 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Appel&lt;br /&gt;
|NUMA&lt;br /&gt;
|Massive Parallel, sequential, OpenMP&lt;br /&gt;
|1&lt;br /&gt;
|384&lt;br /&gt;
| -&lt;br /&gt;
|11 TB&lt;br /&gt;
|28 GB&lt;br /&gt;
|IB, 3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Cryo&lt;br /&gt;
|SMP&lt;br /&gt;
|Sequential and Parallel jobs, with GPU&lt;br /&gt;
|1&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|1500 GB&lt;br /&gt;
|37 GB&lt;br /&gt;
|SL, 2.40 GHz&lt;br /&gt;
|V100 (32GB) GPU, SXM&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Blue Moon&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Hybrid Cluster&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Sequential and Parallel jobs w/wo GPU&lt;br /&gt;
|24&lt;br /&gt;
|32&lt;br /&gt;
| -&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |192 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |6 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SL, 2.10 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|32 &lt;br /&gt;
|2&lt;br /&gt;
|V100(16GB) GPU, PCIe&lt;br /&gt;
|-&lt;br /&gt;
|Karle&lt;br /&gt;
|SMP&lt;br /&gt;
|Visualization, MATLAB/Mathematica&lt;br /&gt;
|1&lt;br /&gt;
|36*&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Chizen&lt;br /&gt;
|Gateway&lt;br /&gt;
|No jobs allowed&lt;br /&gt;
| colspan=&amp;quot;7&amp;quot; | -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CFD&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot; |Parallel, Seq, OpenMP &lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|768 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 4.8 GHz&lt;br /&gt;
|A40, PCIe, v4 &lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |PHYS&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|640 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4 GHz&lt;br /&gt;
|L40, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CHEM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.0 GHz&lt;br /&gt;
|A100/40, SXM&lt;br /&gt;
|-&lt;br /&gt;
|ASRC&lt;br /&gt;
|Condo&lt;br /&gt;
|SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|}&lt;br /&gt;
Note: SB = Intel(R) Sandy Bridge, HL = Intel(R) Haswell, IB = Intel(R) Ivy Bridge, SL = Intel(R) Xeon(R) Gold, ER = AMD(R) EPYC Rome, EM = AMD(R) EPYC Milan, EG = AMD(R) EPYC Genoa   &lt;br /&gt;
&lt;br /&gt;
= Recovery of  operational costs =&lt;br /&gt;
CUNY-HPCC is a not-for-profit core research facility at CUNY. Our mission is to support all types of research that require advanced computational resources. CUNY-HPCC operations are not for profit and are NOT directly or indirectly sponsored by CUNY or the College of Staten Island (CSI). Consequently, CUNY-HPCC applies a cost recovery model recapturing only &#039;&#039;&#039;&amp;lt;u&amp;gt;operational costs, with no profit for HPCC&amp;lt;/u&amp;gt;&#039;&#039;&#039;. The recovered costs are calculated from actual documented operational expenses and are break-even for all CUNY users. The methodology is approved by CUNY-RF and is the same methodology used in other CUNY research facilities. The costs are reviewed and, if needed, updated twice a year. The cost recovery charging schema is based on the &#039;&#039;&#039;&amp;lt;u&amp;gt;unit-hour&amp;lt;/u&amp;gt;&#039;&#039;&#039;. A unit can be either a CPU unit or a GPU unit. The definitions are given in the table below:&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Definitions of unit-hour&lt;br /&gt;
!Type of resource&lt;br /&gt;
!Unit-hour &lt;br /&gt;
!For V100, A30, A40 or L40&lt;br /&gt;
!For A100&lt;br /&gt;
|-&lt;br /&gt;
|CPU unit &lt;br /&gt;
|1 cpu core/hour&lt;br /&gt;
| --&lt;br /&gt;
| --&lt;br /&gt;
|-&lt;br /&gt;
|GPU unit &lt;br /&gt;
|(4 cpu cores + 1 GPU thread )/hour&lt;br /&gt;
|4 cpu cores + 1 GPU&lt;br /&gt;
|4 cpu cores and 1/7 A100 &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Compute on public resources ===&lt;br /&gt;
The cost recovery model for public (non-condominium) servers offers the following options:&lt;br /&gt;
&lt;br /&gt;
# Minimal Access Plan (MAP)&lt;br /&gt;
# Compute on Demand (CODP)&lt;br /&gt;
# Lease a node(s)&lt;br /&gt;
&lt;br /&gt;
=== Minimal Access Plan (MAP) ===&lt;br /&gt;
&lt;br /&gt;
Colleges may participate in any of the Minimal Access Plan tiers. The tiered pricing structure is as follows:&lt;br /&gt;
- Tier A: $5,000 per year&lt;br /&gt;
&lt;br /&gt;
- Tier B: $10,000 per year&lt;br /&gt;
&lt;br /&gt;
- Tier C: $25,000 per year&lt;br /&gt;
&lt;br /&gt;
Within each tier, the cost is &#039;&#039;&#039;$0.015 per CPU hour and $0.09 per GPU hour.&#039;&#039;&#039; It is important to note that the Minimum Access Plan (MAP) tiers A, B, and C are &#039;&#039;&#039;not all-inclusive.&#039;&#039;&#039; Therefore, even if a college pays for a higher tier, it does not guarantee unlimited use by all college employees, faculty, and students throughout the year.&lt;br /&gt;
&lt;br /&gt;
# The MAP plan is indirectly linked to the number of hours used. Consequently, the definition of “up to 12 users for the B tier” should not be interpreted as “all users, up to 12 per college, receive unlimited access to HPCC resources.”&lt;br /&gt;
# The number of users per tier is determined by statistical analysis of resource usage and of the average duration of a job across all CUNY institutions. This means that if usage by a college&#039;s users exceeds the number of hours encoded in the MAP fee, the additional hours will be charged at the preferred rate of $0.015 per CPU hour and $0.09 per GPU hour.&lt;br /&gt;
# Furthermore, the &amp;lt;u&amp;gt;MAP fee may fully cover the expenses for individuals from a given college for a year,&amp;lt;/u&amp;gt; but it may also not. This depends on the actual usage and type of resources, as well as on the number of additional individuals from the same college who appear during the year. It is crucial to understand that allocation on HPCC is not restricted; our focus is solely on the completion of tasks. &lt;br /&gt;
# The free 11,520 CPU hours and 1,440 GPU hours are allocated per PI account and project and are available only for MAP-B and MAP-C (see below). These free hours are intended to facilitate project establishment, so the PI can request that one or more members of the group use the time to explore new research opportunities. For instance, upon creating an account, PI X will receive free hours. If the PI hires graduate student(s), they may share these free hours if they work on the same project.&lt;br /&gt;
# It is important to note that each GPU requires at least four CPU cores to operate. Therefore, if a user requests one GPU thread, it equates to one GPU thread and four CPU threads, which is equivalent to $0.15 per unit-hour for that unit (units are explained above). Note that not all GPUs support virtualization, so a unit may include the whole GPU depending on the GPU type.  &lt;br /&gt;
&lt;br /&gt;
==== Compute on demand (CODP) ====&lt;br /&gt;
Users from colleges that do not participate in MAP (A,B,C) are charged &#039;&#039;&#039;$0.018 per CPU hour and $0.11 per GPU hour.&#039;&#039;&#039; There is no free time associated with CODP.  &lt;br /&gt;
&lt;br /&gt;
==== Lease a public node(s) ====&lt;br /&gt;
Users may lease node(s) for a project. That ensures them 100% access, with no time or job limitations on the leased resource. The minimum lease time is 30 days (one month). Longer leases (more than 90 days) receive a 10% discount. &#039;&#039;&#039;MAP users&#039;&#039;&#039; are charged between $172 and $950 per month (see below), depending on the type of node. &#039;&#039;&#039;Non-MAP&#039;&#039;&#039; users are charged between $249.82 and $1,399 per month, depending on the type of node. Please see below for details and examples. &lt;br /&gt;
&lt;br /&gt;
=== Condo ===&lt;br /&gt;
Condo users only pay for infrastructure support.  The annual fee depends on the type of node and ranges from $1,540 to $4,520 per year. Please see table below for details. &lt;br /&gt;
&lt;br /&gt;
=== Storage ===&lt;br /&gt;
Storage costs are $60 per TB per year, backup costs are $45 per TB per year, and archive costs are $35 per TB per year. The first 50 GB of scratch storage are free. Prices are calculated at the end of each month.&lt;br /&gt;
&lt;br /&gt;
# Additionally, note that &#039;&#039;&#039;file transfers from/to HPCC are free&#039;&#039;&#039;, but it is important to consider the CUNY network speed, which is significantly slower than modern standards.  This will impact the time required to download large data sets. For large data sets HPCC utilizes, and recommends using, secure parallel transfer via Globus (see the example after this list).&lt;br /&gt;
# All services associated with data and storage provided by HPCC are free for CUNY users.&lt;br /&gt;
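&lt;br /&gt;
For large transfers, a Globus transfer can be initiated from the command line as in the sketch below; the endpoint UUIDs and paths are placeholders that must be replaced with the actual HPCC and personal endpoint identifiers.&lt;br /&gt;
&lt;br /&gt;
 # sketch of a Globus CLI transfer; the endpoint UUIDs are placeholders&lt;br /&gt;
 globus login&lt;br /&gt;
 globus transfer LAPTOP_ENDPOINT_UUID:/data/run42.tar.gz HPCC_ENDPOINT_UUID:/scratch/&amp;lt;userid&amp;gt;/run42.tar.gz --label run42-upload&lt;br /&gt;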
&lt;br /&gt;
=== HPCC access plans  ===&lt;br /&gt;
a.     &#039;&#039;&#039;Minimum access (MAP):&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Minimum access is designed to provide wide support for research activities in any college, to promote collaboration between colleges, to help establish new research projects, and/or to serve as a testbed for new studies. MAP accounts operate under a strict fair-share policy, so the actual waiting time for a job in a queue depends on the resources used by that account in previous cycles. In addition, all jobs have strict time limitations; therefore long jobs must use checkpoints.&lt;br /&gt;
&lt;br /&gt;
The MAP has 3 tiers:&lt;br /&gt;
&lt;br /&gt;
·     A: The basic tier fee is $5,000 per year. It is designed to provide support for users from colleges with a low level of research activity. The fee covers infrastructure expenses associated with 1-2 users from these colleges. &lt;br /&gt;
&lt;br /&gt;
·     B: The medium tier fee is $15,000 per year. The fee covers infrastructure expenses for up to 12 users from these colleges. In addition, every account under the medium tier gets 11,520 free CPU hours and 1,440 free GPU hours upon opening. &lt;br /&gt;
&lt;br /&gt;
·      C: The advanced tier fee is $25,000 per year. The fee covers infrastructure expenses for all users from these colleges. In addition, every new account under this tier gets 11,520 free CPU hours and 1,440 free GPU hours upon opening.  &lt;br /&gt;
&lt;br /&gt;
MAP users are charged per CPU/GPU hour at the low rate of &#039;&#039;&#039;&amp;lt;u&amp;gt;$0.015 per CPU hour and $0.09 per GPU hour&amp;lt;/u&amp;gt;.&#039;&#039;&#039; An example of estimating the charge for a finished job is given after the table below.   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for MAP users example&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.015/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.24/hour&lt;br /&gt;
|-&lt;br /&gt;
| 4 cores + 1 GPU &lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.15/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.33/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2  &lt;br /&gt;
|$0.42/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2  &lt;br /&gt;
|$0.66/hour&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$1.32/hour&lt;br /&gt;
|}&lt;br /&gt;
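&lt;br /&gt;
The charge for a completed job can be estimated from SLURM accounting data. The sketch below is one possible way to do this for a MAP account; the job id is a placeholder and the arithmetic simply applies the MAP rates above to the job&#039;s elapsed time and allocated resources.&lt;br /&gt;
&lt;br /&gt;
 # look up elapsed time and allocated resources for a finished job (job id is a placeholder)&lt;br /&gt;
 sacct -j 123456 --format=JobID,Elapsed,AllocCPUS,AllocTRES%40&lt;br /&gt;
 # example: a 10-hour job on 16 cores + 1 GPU under MAP&lt;br /&gt;
 #   CPU part: 16 cores x 10 h x $0.015 = $2.40&lt;br /&gt;
 #   GPU part:  1 GPU   x 10 h x $0.09  = $0.90&lt;br /&gt;
 #   total:    $3.30  (matching the $0.33/hour row in the table above)&lt;br /&gt;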
&lt;br /&gt;
&lt;br /&gt;
b.     &#039;&#039;&#039;Computing on demand (CODP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The computing on demand plan (CODP) is open to all users from CUNY colleges that do not participate in the MAP plan but want to use HPCC resources. CODP accounts operate under a strict fair-share policy, so the actual waiting time for a job in a queue depends on the resources previously used. In addition, all jobs have time limitations, so long jobs must use checkpoints. Users in CODP are charged for the time (CPU and GPU) per hour. The current rates are &#039;&#039;&#039;$0.018 per CPU hour and $0.11 per GPU hour.&#039;&#039;&#039;  Unlike MAP, new CODP accounts do not come with free time. Invoices are generated and sent to users (PI only) at the end of each month.  The examples in the following table explain the fee structure:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for CODP plan&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.018/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.288/hour&lt;br /&gt;
|-&lt;br /&gt;
|4 cores + 1 GPU&lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.293/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.334/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 1 GPU&lt;br /&gt;
|32&lt;br /&gt;
|1  &lt;br /&gt;
|$0.666/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU &lt;br /&gt;
|32&lt;br /&gt;
|2&lt;br /&gt;
|$0.756/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
c.  &#039;&#039;&#039;Leasing public node(s) (LNP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The leasing node plan allows users to lease node(s) for the duration of a project. The minimum lease time is 30 days (one month), but leases of any length are possible. A 10% discount is given to users whose lease is longer than 90 days. Discounts cannot be combined.  Unlike MAP and CODP, LNP users do not compete for resources and have full access to the leased resources 24/7.   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for lease node(s) for MAP users&lt;br /&gt;
|Job (MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/30 days&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$172.80&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$264.96&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2&lt;br /&gt;
|$302.40&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$475.20&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$760.0&lt;br /&gt;
|-&lt;br /&gt;
|64 cores + 8 GPU&lt;br /&gt;
|64&lt;br /&gt;
|8&lt;br /&gt;
|$950.40&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for lease a node(s) for &amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; non &amp;lt;/span&amp;gt;-MAP users&lt;br /&gt;
|Job (non-MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/month&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$249.82&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$497.64&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1&lt;br /&gt;
|$443.23&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$886.64&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40 &lt;br /&gt;
|8&lt;br /&gt;
|$1399.68&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
d.     &#039;&#039;&#039;Condo Ownership (COP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Condo describes a model in which user(s) own a node/server managed by HPCC. Only full-time faculty can own a condo node. Condo nodes are fully integrated into the HPCC infrastructure. The owners pay only HPCC’s infrastructure support operational fee, which includes only a proportional part of the licenses and materials needed for day-to-day operations. The fees are reviewed twice a year and currently are $0.003 per CPU hour and $0.02 per GPU hour. Condo owners can “borrow” (upon agreement) free of charge any node(s) from the condo stack, and can also lease (for a higher fee, see below) their own nodes to non-condo users. The minimum lease time is 30 days. The fees collected from non-condo users offset the owner’s payments.   &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Condo owners costs per year&lt;br /&gt;
|Type of condo node&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/year&lt;br /&gt;
|-&lt;br /&gt;
|Large hybrid SXM&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|$4518.92 &lt;br /&gt;
|-&lt;br /&gt;
|Small hybrid&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|$1540.54&lt;br /&gt;
|-&lt;br /&gt;
|Medium compute&lt;br /&gt;
|96&lt;br /&gt;
|0&lt;br /&gt;
|$2464.86&lt;br /&gt;
|-&lt;br /&gt;
|Large compute&lt;br /&gt;
|128&lt;br /&gt;
|0&lt;br /&gt;
|$3286.49&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Condo owners can rent their node(s) to other non-condo users. The rental period is unlimited, with a minimum length of 30 days. The table below shows the payments that non-condo users make to the condo owners. These fees are accumulated in the owner’s account(s) and offset the owner’s obligations. A discount of 10% is applied to leases longer than 90 days.    &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Type of nodes and monthly lease fees for condo nodes&lt;br /&gt;
!Type of node&lt;br /&gt;
!Renters cost/month&lt;br /&gt;
!Long term (90+ days) rent cost/month&lt;br /&gt;
!CPU/node&lt;br /&gt;
!CPU type&lt;br /&gt;
!GPU/node &lt;br /&gt;
!GPU type&lt;br /&gt;
!GPU interface&lt;br /&gt;
|-&lt;br /&gt;
|Large Hybrid&lt;br /&gt;
|$602.52&lt;br /&gt;
|$564.86&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.2 GHz&lt;br /&gt;
|8&lt;br /&gt;
|A100/80&lt;br /&gt;
|SXM&lt;br /&gt;
|-&lt;br /&gt;
|Small Hybrid&lt;br /&gt;
|$205.41&lt;br /&gt;
|$192.57&lt;br /&gt;
|48&lt;br /&gt;
|EPYC, 2.8 GHz&lt;br /&gt;
|2&lt;br /&gt;
|A40, A30, L40&lt;br /&gt;
|PCIe v4&lt;br /&gt;
|-&lt;br /&gt;
|Medium Non GPU&lt;br /&gt;
|$328.65&lt;br /&gt;
|$308.11&lt;br /&gt;
|96&lt;br /&gt;
|EPYC, 4.11GHz&lt;br /&gt;
|0&lt;br /&gt;
|None&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|Large Non GPU&lt;br /&gt;
|$438.20&lt;br /&gt;
|$410.81&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.0 GHz&lt;br /&gt;
|0&lt;br /&gt;
|None &lt;br /&gt;
|NA&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Free time ===&lt;br /&gt;
Any new project &#039;&#039;&#039;from CUNY colleges/centers that participate in MAP-B or MAP-C&#039;&#039;&#039; is entitled to &#039;&#039;&#039;11,520 free CPU hours and 1,440 free GPU hours.&#039;&#039;&#039;  Users under &#039;&#039;&#039;MAP-A&#039;&#039;&#039; are not entitled to free time. The free compute hours are intended to help establish a project and are therefore shared by all members of the project; they can be used either by the PI or by any number of project members. It is important to note that &amp;lt;u&amp;gt;free time is per project, not per user account, so any project can have free time only once. Collaborators external to CUNY are not normally eligible for free time.&amp;lt;/u&amp;gt; Additional hours beyond free time are charged at MAP plan rates. &#039;&#039;&#039;&amp;lt;u&amp;gt;Please contact the CUNY-HPCC director for further details.&amp;lt;/u&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Support for research grants =&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;u&amp;gt;All proposals dated January 1st, 2026 (&amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; 01/01/26 &amp;lt;/span&amp;gt;) or later&amp;lt;/u&amp;gt;&#039;&#039;&#039; that require computational resources &#039;&#039;&#039;&amp;lt;u&amp;gt;must include a budget for cost recovery fees at CUNY-HPCC.&amp;lt;/u&amp;gt;&#039;&#039;&#039;  For a project, the PI can choose between: &lt;br /&gt;
&lt;br /&gt;
* leasing node(s). This is a useful option for well-defined projects and for those with a high computational component requiring 100% availability of the computational resource. &lt;br /&gt;
* using &amp;quot;on-demand&amp;quot; resources. This is a flexible option, good for experimental projects or for exploring new areas of study. The downside is that resources are shared among all users under the fair-share policy, so immediate access to a resource cannot be guaranteed. &lt;br /&gt;
* participating in the CONDO tier. This is the most beneficial option in terms of availability of resources and level of support. It best fits the focused research of a group or groups (e.g. materials science). &lt;br /&gt;
&lt;br /&gt;
In all cases the PI can use the appropriate rates listed above to establish a correct budget for the proposal.  The PI should &#039;&#039;&#039;&amp;lt;u&amp;gt;contact the Director of CUNY-HPCC, Dr. Alexander Tzanov&amp;lt;/u&amp;gt;&#039;&#039;&#039; (alexander.tzanov@csi.cuny.edu) to discuss the project&#039;s computational requirements, including optimal and most economical computational workflows, suitable hardware, shared or dedicated resources, CUNY-HPCC support options, and any other matter concerning a correct and optimal computational budget for the proposal.    &lt;br /&gt;
&lt;br /&gt;
= Partitions and jobs =&lt;br /&gt;
The only way to submit job(s) to the HPCC servers is through the SLURM batch system.  Any job, regardless of its type (interactive, batch, serial, parallel, etc.), must be submitted via SLURM. The latter allocates the requested resources on the proper server and starts the job(s) according to a predefined, strict fair-share policy. Computational resources (cpu-cores, memory, GPUs) are organized in &#039;&#039;&#039;partitions&#039;&#039;&#039;. Users are granted permission to use one or another partition and the corresponding QOS key. The table below shows the limitations of the partitions (in progress); a minimal example batch script is given after the partition descriptions below.&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Partition&lt;br /&gt;
!Max cores/job&lt;br /&gt;
!Max jobs/user&lt;br /&gt;
!Total cores/group&lt;br /&gt;
!Time limits&lt;br /&gt;
!Tier&lt;br /&gt;
!&lt;br /&gt;
!GPU types&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|partnsf&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|K20m, V100/16, A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partchem&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A100/80, A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partcfd&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partsym&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partasrc&lt;br /&gt;
|48&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabD&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|V100/16,A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabN&lt;br /&gt;
|384&lt;br /&gt;
|50&lt;br /&gt;
|384&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|None&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partphys&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|L40&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;partnsf&#039;&#039;&#039; is the main partition, with assigned resources across all sub-servers. Users may submit sequential, thread-parallel or distributed-parallel jobs with or without GPUs.&lt;br /&gt;
* &#039;&#039;&#039;partchem&#039;&#039;&#039;  is CONDO partition.  &lt;br /&gt;
* &#039;&#039;&#039;partphys&#039;&#039;&#039;   is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partsym&#039;&#039;&#039;    is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partasrc&#039;&#039;&#039;   is CONDO partition&lt;br /&gt;
* The &#039;&#039;&#039;partmatlabD&#039;&#039;&#039; partition allows running MATLAB&#039;s Distributed Parallel Server across the main cluster. &lt;br /&gt;
* The &#039;&#039;&#039;partmatlabN&#039;&#039;&#039; partition provides access to the large MATLAB node with 384 cores and 11 TB of shared memory. It is useful for running parallel MATLAB jobs with the Parallel Computing Toolbox.&lt;br /&gt;
* &#039;&#039;&#039;partdev&#039;&#039;&#039; is dedicated to development. All HPCC users have access to this partition, with assigned resources of one computational node with 16 cores, 64 GB of memory and 2 GPUs (K20m). This partition has a time limit of 4 hours.&lt;br /&gt;
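&lt;br /&gt;
As referenced above, a minimal SLURM batch script might look like the following sketch. The partition, QOS key and resource amounts are placeholders and must be replaced with the values assigned to your account.&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --job-name=example&lt;br /&gt;
 #SBATCH --partition=partnsf        # replace with a partition you are authorized to use&lt;br /&gt;
 #SBATCH --qos=yourqos              # placeholder QOS key&lt;br /&gt;
 #SBATCH --nodes=1&lt;br /&gt;
 #SBATCH --ntasks=16                # 16 CPU cores&lt;br /&gt;
 #SBATCH --gres=gpu:1               # request one GPU (omit for CPU-only jobs)&lt;br /&gt;
 #SBATCH --time=24:00:00            # well under the 240-hour limit of partnsf&lt;br /&gt;
 cd /scratch/$USER/myjob            # run from scratch, then stage results back&lt;br /&gt;
 srun ./my_application input.dat&lt;br /&gt;
&lt;br /&gt;
The script is submitted with &#039;&#039;sbatch script.sh&#039;&#039; and can be monitored with &#039;&#039;squeue -u $USER&#039;&#039;.&lt;br /&gt;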
&lt;br /&gt;
= Hours of Operation =&lt;br /&gt;
In order to maximize the use of resources, HPCC applies a “rolling” maintenance scheme across all systems. When downtime is needed, HPCC will notify all users a week or more in advance (unless an emergency situation occurs).  Typically, the fourth Tuesday morning of each month, from 8:00 AM to 12 PM, is reserved (but not always used) for scheduled maintenance.  Please plan accordingly.  Unplanned maintenance to remedy system-related problems may be scheduled as needed outside the above-mentioned days. Reasonable attempts will be made to inform users running on those systems when these needs arise. Note that users are strongly encouraged to use checkpoints in their jobs.&lt;br /&gt;
&lt;br /&gt;
= User Support =&lt;br /&gt;
Users are strongly encouraged to read this Wiki carefully before submitting ticket(s) for help. In particular, the sections on compiling and running parallel programs, and the section on the SLURM batch queueing system will give you the essential knowledge needed to use the CUNY HPCC systems.  We have strived to maintain the most uniform user applications environment possible across the Center&#039;s systems to ease the transfer of applications&lt;br /&gt;
and run scripts among them.  &lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY&lt;br /&gt;
community in parallel programming techniques, HPC computing architecture, and the essentials of using our&lt;br /&gt;
systems. Please follow our mailings on the subject and feel free to inquire about such courses.  We regularly schedule training visits and classes at the various CUNY campuses.  Please let us know if such a training visit is of interest.  In the past, topics have included an overview of parallel programming, GPU programming and architecture, using the evolutionary biology software at the HPC Center, the SLURM queueing system at the CUNY HPC Center, mixed GPU-MPI and OpenMP programming, etc.  Staff have also presented guest lectures in formal classes throughout the CUNY campuses.    &lt;br /&gt;
&lt;br /&gt;
If you have problems accessing your account and cannot login to the ticketing service, please send an email to:&lt;br /&gt;
&lt;br /&gt;
  [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu] &lt;br /&gt;
&lt;br /&gt;
= Warnings and modes of operation =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. hpchelp@csi.cuny.edu is for questions and account-help communication &#039;&#039;&#039;only&#039;&#039;&#039; and does not accept tickets unless the ticketing system is not operational. For tickets please use the ticketing system mentioned above. This ensures that the staff member with the most appropriate skill set and job-related responsibility will respond to your questions. During the business week you should expect a response within 48 hours, quite often even a same-day response. During the weekend you may not get any response. &lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;&#039;E-mails to hpchelp@csi.cuny.edu must have a valid CUNY e-mail as the reply address.&#039;&#039;&#039; Messages originating from public mailers (Google, Hotmail, etc.) are filtered out.&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;&#039;Do not send questions to individual CUNY HPC Center staff members directly.&#039;&#039;&#039;  These will be returned to the sender with a polite request to submit a ticket or email the Helpline.  This applies to replies to initial questions as well.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff are focused on providing high-quality support to our user community, but compared&lt;br /&gt;
to other HPC Centers of similar size &#039;&#039;&#039;our staff is extremely lean&#039;&#039;&#039;.  Please make full use of the tools that we have provided (especially&lt;br /&gt;
the Wiki), and feel free to offer suggestions for improved service.  We hope and expect your experience in using&lt;br /&gt;
our systems will be predictably good and productive.&lt;br /&gt;
&lt;br /&gt;
= User Manual =&lt;br /&gt;
The old version of the user manual provides PBS rather than SLURM batch scripts as examples. CUNY-HPCC currently uses the SLURM scheduler, so users should rely only on the updated brief SLURM manual distributed with new accounts, or ask CUNY-HPCC for a copy of it.&lt;/div&gt;</summary>
		<author><name>Alex</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=986</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=986"/>
		<updated>2026-03-14T15:24:50Z</updated>

		<summary type="html">&lt;p&gt;Alex: /* Compute on public resources */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
[[Image:hpcc-panorama3.png]]&lt;br /&gt;
&lt;br /&gt;
== Mission of CUNY-HPCC ==&lt;br /&gt;
The City University of New York (CUNY) High Performance Computing Center (HPCC) is a core research and education facility for the University. It is located on the campus of the College of Staten Island, 2800 Victory Boulevard, Staten Island, New York 10314. The core mission of CUNY-HPCC is to advance education and boost scientific research and discovery at the University by managing state-of-the-art computing infrastructure and by providing comprehensive research support services, including domain-specific expertise in various aspects of computationally intensive research. CUNY is a member of the Empire AI (EAI) consortium, so CUNY-HPCC is a stepping stone for CUNY researchers toward EAI&#039;s advanced facilities.  &lt;br /&gt;
&lt;br /&gt;
== Empire AI and CUNY-HPCC ==&lt;br /&gt;
The Empire AI consortium includes the &#039;&#039;&#039;CUNY Graduate Center&#039;&#039;&#039;, Columbia University, Cornell University, Icahn School of Medicine, New York University, Rochester Institute of Technology, Rensselaer Polytechnic Institute, the State University of New York, University at Buffalo, and University of Rochester.  &#039;&#039;&#039;CUNY-HPCC provides support and maintains tickets for all CUNY users with an allocation on EAI.  In addition, CUNY-HPCC is a stepping stone for CUNY researchers since it operates (on a smaller scale) architectures similar to EAI (including nodes with Hopper GPUs), such as the extended &amp;quot;Alpha&amp;quot; server as well as the new &amp;quot;Beta&amp;quot; computer.&#039;&#039;&#039; The latter will consist of 288 B200 GPUs and recently added RTX 6000 Pro nodes. The expected cost for &#039;&#039;&#039;EAI is $0.50/unit (SU),&#039;&#039;&#039; which will provide CUNY PIs with a rate that is well below a typical AWS rate.  One SU corresponds to one hour of H100 compute, and one hour of B200 compute corresponds to two SU.  In comparison, the CUNY-HPCC recovery costs for public servers are &#039;&#039;&#039;$0.015 per CPU hour (1 unit) and $0.09 per GPU hour (6 units).&#039;&#039;&#039;  See the section on HPCC access plans for further details.    &lt;br /&gt;
&lt;br /&gt;
== CUNY-HPCC services ==&lt;br /&gt;
CUNY-HPCC provides a professionally maintained, modern computational environment and architectures, along with advanced storage and fast interconnects. CUNY-HPCC:     &lt;br /&gt;
&lt;br /&gt;
:*Supports research computing at CUNY for faculty, their collaborators at other universities, their public and private sector partners, and CUNY students and research staff.&lt;br /&gt;
:*Provides state-of-the-art computing resources and comprehensive research support services including expertise and full support for users with allocation on EMPIRE-AI. &lt;br /&gt;
:*Creates opportunities for the CUNY research community to develop new partnerships with the government and private sectors.&lt;br /&gt;
:*Leverages HPCC capabilities to acquire additional research resources for CUNY faculty and graduate students in existing and major new programs.&lt;br /&gt;
:*Maintains tickets for all CUNY users with allocation on EAI.&lt;br /&gt;
&lt;br /&gt;
== Organization of systems and data storage (architecture) ==&lt;br /&gt;
All user and project data are kept on the Parallel File System Storage (PFSS), which is mounted only on the login node(s) of all servers. It holds both the user directories and a specific partition called &#039;&#039;&#039;/scratch&#039;&#039;&#039; (see below). The main features of the &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition are: &#039;&#039;&#039;1.&#039;&#039;&#039; it is mounted on all computational nodes and on all login nodes; &#039;&#039;&#039;2.&#039;&#039;&#039; it is fast; &#039;&#039;&#039;3.&#039;&#039;&#039; it is temporary space and is &#039;&#039;&#039;not a home directory&#039;&#039;&#039; for accounts, nor can it be used for long-term data preservation. Users must use the &amp;quot;staging&amp;quot; procedure described below to ensure preservation of their data, codes and parameter files. The figure below is a schematic of the environment.   &lt;br /&gt;
&lt;br /&gt;
Upon registering with HPCC, every user gets two directories:&lt;br /&gt;
&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – this is temporary workspace on the HPC systems. Currently scratch resides on the same file system as global/u.&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – space for “home directory”, i.e., storage space on the DSMS for program, scripts, and data&lt;br /&gt;
&lt;br /&gt;
:•	In some instances a user will also have use of disk space on the DSMS in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;projectid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; (IRods). &lt;br /&gt;
[[File:HPCC_structure.png|center|frameless|900x900px]]&lt;br /&gt;
The &#039;&#039;&#039;/global/u/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory has a quota (see below for details), while &#039;&#039;&#039;/scratch/&amp;lt;userid&amp;gt;&#039;&#039;&#039; does not. However, the &#039;&#039;&#039;/scratch&#039;&#039;&#039; space is cleaned up following the rules described below. There are no guarantees of any kind that files in &#039;&#039;&#039;/scratch&#039;&#039;&#039; will be preserved during hardware crashes or clean-ups.  Access to all HPCC resources is provided by the bastion host called &#039;&#039;&#039;chizen&#039;&#039;&#039;.  The Data Transfer Node called &#039;&#039;&#039;Cea&#039;&#039;&#039; allows file transfer from/to remote sites directly to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; or &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
==Computing architectures at HPCC==&lt;br /&gt;
&lt;br /&gt;
The HPC Center operates a variety of architectures in order to support complex and demanding workflows.  All computational resources of different types are united into a single hybrid cluster called Arrow. The latter deploys symmetric multiprocessor (SMP) nodes with and without GPUs, a distributed shared memory (NUMA) node, fat (large-memory) nodes and advanced SMP nodes with multiple GPUs. The number of GPUs per node varies between 2 and 8, as do the GPU interface and GPU family. Thus the basic GPU nodes hold two Tesla K20m cards (attached through a PCIe interface), while the most advanced ones support eight Ampere A100 GPUs connected via the SXM interface.   &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Overview of Computational architectures&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SMP&#039;&#039;&#039; servers have several processors (working under a single operating system) which &amp;quot;share everything&amp;quot;: all cpu-cores access a common memory block via a shared bus or data path. SMP servers support all combinations of memory vs. cpu (up to the limits of the particular computer). SMP servers are commonly used to run sequential or thread-parallel (e.g. OpenMP) jobs, and they may or may not have GPUs.  &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;cluster&#039;&#039;&#039; is defined as a single system comprising a set of servers interconnected with a high-performance network. Specific software coordinates programs on and/or across those servers in order to perform computationally intensive tasks. The most common cluster type consists of several identical SMP servers connected via a fast interconnect.  Each SMP member of the cluster is called a &#039;&#039;&#039;node&#039;&#039;&#039;. All nodes run independent copies of the same operating system (OS). Some or all of the nodes may incorporate GPUs.  &lt;br /&gt;
&lt;br /&gt;
Hybrid clusters combine nodes of different architectures. For instance, the main CUNY-HPCC machine is a hybrid cluster called &#039;&#039;&#039;Arrow&#039;&#039;&#039;.  Sixty-two (62) of its nodes are identical GPU-enabled SMP servers, each with 2 K20m GPUs; 3 are SMP servers with extended memory (fat nodes); one is a distributed shared memory node (NUMA, see below); and 2 are fat SMP servers specially designed to support 8 NVIDIA GPUs per node, connected via the SXM interface. In addition, HPCC operates the cluster &#039;&#039;&#039;Herbert&#039;&#039;&#039;, which is dedicated only to education. &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;distributed shared memory&#039;&#039;&#039; computer is a tightly coupled server in which the memory is physically distributed but logically unified as a single block. The system resembles an SMP, but the possible number of cpu cores and amount of memory are far beyond the limitations of an SMP.  Because the memory is distributed, access times across the address space are non-uniform; thus this architecture is called Non-Uniform Memory Access (NUMA) architecture.  Similarly to SMP, &#039;&#039;&#039;NUMA&#039;&#039;&#039; systems are typically used for applications such as data mining and decision support systems, in which processing can be parceled out to a number of processors that collectively work on common data. HPCC operates the &#039;&#039;&#039;NUMA&#039;&#039;&#039; node of Arrow, named &#039;&#039;&#039;Appel&#039;&#039;&#039;.  This node does not have GPUs. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039; Infrastructure systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
o	The Master Head Node (&#039;&#039;&#039;MHN/Arrow&#039;&#039;&#039;) is a redundant login node from which all jobs on all servers start. This server is not directly accessible from outside the CSI campus. Note that the main server and its login nodes share the same name, Arrow; thus users can access the Arrow login nodes using either the name Arrow or MHN.  &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Chizen&#039;&#039;&#039; is a redundant gateway server which provides access to the protected HPCC domain.&lt;br /&gt;
&lt;br /&gt;
o	 &#039;&#039;&#039;Cea&#039;&#039;&#039; is a file transfer node allowing transfer of files between users’ computers and the /scratch space or /global/u/&amp;lt;userid&amp;gt;. &#039;&#039;&#039;Cea&#039;&#039;&#039; is accessible directly (not only via &#039;&#039;&#039;Chizen&#039;&#039;&#039;), but allows only a limited set of shell commands.  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Table 1&#039;&#039;&#039; below provides a quick summary of the attributes of each of the sub-clusters of the main HPC Center system, called Arrow.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Master Head Node&lt;br /&gt;
!Sub System&lt;br /&gt;
!Tier&lt;br /&gt;
!Type&lt;br /&gt;
!Type of Jobs&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores&lt;br /&gt;
!GPUs&lt;br /&gt;
!Mem/node&lt;br /&gt;
!Mem/core&lt;br /&gt;
!Chip Type&lt;br /&gt;
!GPU Type and Interface&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;17&amp;quot; |&#039;&#039;&#039;&amp;lt;big&amp;gt;Arrow&amp;lt;/big&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Penzias&lt;br /&gt;
| rowspan=&amp;quot;10&amp;quot; |Advanced&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Hybrid Cluster&lt;br /&gt;
|Sequential &amp;amp; Parallel jobs w/wo GPU&lt;br /&gt;
|66&lt;br /&gt;
|16 &lt;br /&gt;
|2&lt;br /&gt;
|64 GB&lt;br /&gt;
|4 GB&lt;br /&gt;
|SB, EP 2.20 GHz&lt;br /&gt;
|K20m GPU, PCIe v2&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Sequential &amp;amp; Parallel jobs&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |1&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|1500 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|36&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|32 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Appel&lt;br /&gt;
|NUMA&lt;br /&gt;
|Massive Parallel, sequential, OpenMP&lt;br /&gt;
|1&lt;br /&gt;
|384&lt;br /&gt;
| -&lt;br /&gt;
|11 TB&lt;br /&gt;
|28 GB&lt;br /&gt;
|IB, 3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Cryo&lt;br /&gt;
|SMP&lt;br /&gt;
|Sequential and Parallel jobs, with GPU&lt;br /&gt;
|1&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|1500 GB&lt;br /&gt;
|37 GB&lt;br /&gt;
|SL, 2.40 GHz&lt;br /&gt;
|V100 (32GB) GPU, SXM&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Blue Moon&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Hybrid Cluster&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Sequential and Parallel jobs w/wo GPU&lt;br /&gt;
|24&lt;br /&gt;
|32&lt;br /&gt;
| -&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |192 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |6 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SL, 2.10 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|32 &lt;br /&gt;
|2&lt;br /&gt;
|V100(16GB) GPU, PCIe&lt;br /&gt;
|-&lt;br /&gt;
|Karle&lt;br /&gt;
|SMP&lt;br /&gt;
|Visualization, MATLAB/Mathematica&lt;br /&gt;
|1&lt;br /&gt;
|36*&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Chizen&lt;br /&gt;
|Gateway&lt;br /&gt;
|No jobs allowed&lt;br /&gt;
| colspan=&amp;quot;7&amp;quot; | -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CFD&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot; |Parallel, Seq, OpenMP &lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|768 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 4.8 GHz&lt;br /&gt;
|A40, PCIe, v4 &lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |PHYS&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|640 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4 GHz&lt;br /&gt;
|L40, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CHEM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.0 GHz&lt;br /&gt;
|A100/40, SXM&lt;br /&gt;
|-&lt;br /&gt;
|ASRC&lt;br /&gt;
|Condo&lt;br /&gt;
|SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|}&lt;br /&gt;
Note: SB = Intel(R) Sandy Bridge, HL = Intel(R) Haswell, IB = Intel(R) Ivy Bridge, SL = Intel(R) Xeon(R) Gold, ER = AMD(R) EPYC Rome, EM = AMD(R) EPYC Milan, EG = AMD(R) EPYC Genoa   &lt;br /&gt;
&lt;br /&gt;
= Recovery of  operational costs =&lt;br /&gt;
CUNY-HPCC is a not-for-profit core research facility at CUNY. Our mission is to support all types of research that require advanced computational resources. CUNY-HPCC operations are not for profit and are NOT directly or indirectly sponsored by CUNY or the College of Staten Island (CSI). Consequently, CUNY-HPCC applies a cost recovery model recapturing only &#039;&#039;&#039;&amp;lt;u&amp;gt;operational costs, with no profit for HPCC&amp;lt;/u&amp;gt;&#039;&#039;&#039;. The recovered costs are calculated from actual documented operational expenses and are break-even for all CUNY users. The methodology is approved by CUNY-RF and is the same methodology used in other CUNY research facilities. The costs are reviewed and, if needed, updated twice a year. The cost recovery charging schema is based on the &#039;&#039;&#039;&amp;lt;u&amp;gt;unit-hour&amp;lt;/u&amp;gt;&#039;&#039;&#039;. A unit can be either a CPU unit or a GPU unit. The definitions are given in the table below:&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Definitions of unit-hour&lt;br /&gt;
!Type of resource&lt;br /&gt;
!Unit-hour &lt;br /&gt;
!For V100, A30, A40 or L40&lt;br /&gt;
!For A100&lt;br /&gt;
|-&lt;br /&gt;
|CPU unit &lt;br /&gt;
|1 cpu core/hour&lt;br /&gt;
| --&lt;br /&gt;
| --&lt;br /&gt;
|-&lt;br /&gt;
|GPU unit &lt;br /&gt;
|(4 cpu cores + 1 GPU thread )/hour&lt;br /&gt;
|4 cpu cores + 1 GPU&lt;br /&gt;
|4 cpu cores and 1/7 A100 &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Compute on public resources ===&lt;br /&gt;
The cost recovery model for public (non-condominium) servers offers the following options:&lt;br /&gt;
&lt;br /&gt;
# Minimal Access Plan (MAP)&lt;br /&gt;
# Compute on Demand (CODP)&lt;br /&gt;
# Lease a node(s)&lt;br /&gt;
&lt;br /&gt;
=== Minimal Access Plan (MAP) ===&lt;br /&gt;
Colleges may participate in any of the Minimal Access Plan tiers. The tiered pricing structure is as follows:&lt;br /&gt;
&lt;br /&gt;
- Tier A: $5,000 per year&lt;br /&gt;
&lt;br /&gt;
- Tier B: $10,000 per year&lt;br /&gt;
&lt;br /&gt;
- Tier C: $25,000 per year&lt;br /&gt;
&lt;br /&gt;
Within each tier, the cost is &#039;&#039;&#039;$0.015 per CPU hour and $0.09 per GPU hour.&#039;&#039;&#039; It is important to note that the Minimum Access Plan (MAP) tiers A, B, and C are &#039;&#039;&#039;not all-inclusive.&#039;&#039;&#039; Therefore, even if a college pays for a higher tier, it does not guarantee unlimited use by all of that college&#039;s employees, faculty, and students throughout the year.&lt;br /&gt;
&lt;br /&gt;
# The MAP plan is indirectly linked to the number of hours used. Consequently, the definition of “up to 12 users for B tier” should not be interpreted as “all users up to 12 per college receive unlimited access to HPCC resources.”&lt;br /&gt;
# The number of users per tier is determined by statistical analysis of resource usage and of the average duration of a job across all CUNY institutions. This means that if the usage by a college exceeds the number of hours covered by the MAP fee, the additional hours will be charged at the preferred rate of $0.015 per CPU hour and $0.09 per GPU thread hour.&lt;br /&gt;
# Furthermore, the &amp;lt;u&amp;gt;MAP fee may fully cover the expenses of individuals from a given college for a year,&amp;lt;/u&amp;gt; but it may not. This depends on the actual usage and type of resources, as well as on the number of additional individuals from the same college who appear during the year. It is crucial to understand that allocation on HPCC is not restricted; our focus is solely on the completion of tasks.&lt;br /&gt;
# The free 11,520 CPU hours and 1,440 GPU hours are allocated per PI account and project and are available only for MAP-B and MAP-C (see below). These free hours are intended to facilitate project establishment; the PI can ask a member or members of the group to use the time to explore new research opportunities. For instance, upon creating an account, a PI will receive the free hours. If the PI then hires graduate students, they may share these free hours provided they work on the same project.&lt;br /&gt;
# It is important to note that each GPU requires at least four CPU cores to operate. Therefore, if a user requests one GPU thread, it equates to one GPU thread plus four CPU threads, which corresponds to $0.15 per unit-hour for that unit (units are explained above). Note that not all GPUs support virtualization, so a GPU unit may include the whole GPU, depending on the GPU type.&lt;br /&gt;
&lt;br /&gt;
=== Compute on demand (CODP) ===&lt;br /&gt;
Users from colleges that do not participate in MAP (A,B,C) are charged &#039;&#039;&#039;$0.018 per CPU hour and $0.11 per GPU hour.&#039;&#039;&#039; There is no free time associated with CODP.  &lt;br /&gt;
&lt;br /&gt;
=== Lease a public node(s) ===&lt;br /&gt;
Users may lease node(s) for a project. This ensures 100% access with no time or job limitations on the leased resource. The minimum lease term is 30 days (one month). Longer leases (more than 90 days) receive a 10% discount. &#039;&#039;&#039;MAP users&#039;&#039;&#039; are charged between $172 and $950 per month (see below), depending on the type of node. &#039;&#039;&#039;Non-MAP&#039;&#039;&#039; users are charged between $249.58 and $1,399 per month, depending on the type of node. Please see below for details and examples.&lt;br /&gt;
&lt;br /&gt;
=== Condo ===&lt;br /&gt;
Condo users only pay for infrastructure support.  The annual fee depends on the type of node and ranges from $1,540 to $4,520 per year. Please see table below for details. &lt;br /&gt;
&lt;br /&gt;
=== Storage ===&lt;br /&gt;
Storage costs are $60 per TB per year, backup costs are $45 per TB per year, and archive costs are $35 per TB per year. The first 50 GB of scratch storage are free. Prices are calculated at the end of each month.&lt;br /&gt;
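&lt;br /&gt;
As a quick worked example of the rates above (illustrative only; the 5 TB volume is hypothetical), a project keeping 5 TB on storage with backup would pay 5 x $60 plus 5 x $45 per year:&lt;br /&gt;
&lt;br /&gt;
  # Illustrative arithmetic only, using the published rates ($60/TB storage, $45/TB backup)&lt;br /&gt;
  echo &amp;quot;5*60 + 5*45&amp;quot; | bc    # prints 525, i.e. $525 per year for 5 TB stored and backed up&lt;br /&gt;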
&lt;br /&gt;
# Additionally, note that &#039;&#039;&#039;file transfers from/to HPCC are free&#039;&#039;&#039;, but it is important to consider the CUNY network speed, which is significantly slower than modern standards. This will affect the time required to download large data sets. For large data sets, HPCC recommends secure parallel transfer via Globus.&lt;br /&gt;
# All services associated with data and storage provided by HPCC are free for CUNY users.&lt;br /&gt;
&lt;br /&gt;
=== HPCC access plans  ===&lt;br /&gt;
a.     &#039;&#039;&#039;Minimum access (MAP):&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Minimum access is designed to provide wide support for research activities in any college, to promote collaboration between colleges, to help establish new research projects, and/or to serve as a testbed for new studies. MAP accounts operate under a strict fair-share policy, so the actual waiting time for a job in a queue depends on the resources used by that account in previous cycles. In addition, all jobs have strict time limits, so long jobs must use checkpoints.&lt;br /&gt;
&lt;br /&gt;
The MAP has 3 tiers:&lt;br /&gt;
&lt;br /&gt;
·     A: Basic tier fee is $5,000 per year. It is designed to support users from colleges with a low level of research activity. The fee covers infrastructure expenses associated with 1-2 users from these colleges.&lt;br /&gt;
&lt;br /&gt;
·     B: Medium tier fee is $15,000 per year. The fee covers infrastructure expenses of up to 12 users from these colleges. In addition, every account under the medium tier receives 11,520 free CPU hours and 1,440 free GPU hours upon opening.&lt;br /&gt;
&lt;br /&gt;
·      C: Advanced tier fee is $25,000 per year. The fee covers infrastructure expenses for all users from these colleges. In addition, every new account from this tier receives 11,520 free CPU hours and 1,440 free GPU hours upon opening.&lt;br /&gt;
&lt;br /&gt;
The MAP users get charged per CPU/GPU hour at low rate of &#039;&#039;&#039;&amp;lt;u&amp;gt;$0.015 per cpu hour and $0.09 per GPU hour&amp;lt;/u&amp;gt;.&#039;&#039;&#039;   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for MAP users example&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.015/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.24/hour&lt;br /&gt;
|-&lt;br /&gt;
| 4 cores + 1 GPU &lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.15/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.33/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2  &lt;br /&gt;
|$0.42/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2  &lt;br /&gt;
|$0.66/hour&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$1.32/hour&lt;br /&gt;
|}&lt;br /&gt;
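&lt;br /&gt;
For illustration only, the arithmetic behind the MAP examples above can be sketched as a small shell helper. It assumes, per the unit definitions above, that one GPU unit bundles one GPU with 4 CPU cores at $0.15 per hour in total, and that any remaining cores are billed at the CPU rate of $0.015 per hour. The helper name is hypothetical and is not an HPCC tool.&lt;br /&gt;
&lt;br /&gt;
  # Sketch of the MAP unit-hour arithmetic (illustration only, not an official HPCC tool).&lt;br /&gt;
  # Usage: map_cost_per_hour CORES GPUS   (assumes CORES is at least 4*GPUS)&lt;br /&gt;
  map_cost_per_hour() {&lt;br /&gt;
      local cores=$1 gpus=$2&lt;br /&gt;
      local extra=$(( cores - 4*gpus ))    # cores beyond those bundled into GPU units&lt;br /&gt;
      echo &amp;quot;$gpus*0.15 + $extra*0.015&amp;quot; | bc&lt;br /&gt;
  }&lt;br /&gt;
  map_cost_per_hour 32 2    # prints .660, i.e. $0.66/hour -- matches the 32 cores + 2 GPU row&lt;br /&gt;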
&lt;br /&gt;
&lt;br /&gt;
b.     &#039;&#039;&#039;Computing on demand (CODP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The Computing on Demand plan (CODP) is open to users from all CUNY colleges that do not participate in the MAP plan but want to use HPCC resources. CODP accounts operate under a strict fair-share policy, so the actual waiting time for a job in a queue depends on the resources used previously. In addition, all jobs have time limits, so long jobs must use checkpoints. CODP users are charged for the time (CPU and GPU) per hour. The current rates are &#039;&#039;&#039;$0.018 per cpu hour and $0.11 per GPU hour.&#039;&#039;&#039; Unlike MAP, new CODP accounts do not come with free time. Invoices are generated and sent to users (PIs only) at the end of each month. The examples in the following table explain the fee structure:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for CODP plan&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.018/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.288/hour&lt;br /&gt;
|-&lt;br /&gt;
|4 cores + 1 GPU&lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.293/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.334/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 1 GPU&lt;br /&gt;
|32&lt;br /&gt;
|1  &lt;br /&gt;
|$0.666/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU &lt;br /&gt;
|32&lt;br /&gt;
|2&lt;br /&gt;
|$0.756/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
c.  &#039;&#039;&#039;Leasing public node(s) (LNP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The node leasing plan (LNP) allows users to lease node(s) for the duration of a project. The minimum lease term is 30 days (one month), but leases of any length are possible. A 10% discount is given to users whose lease is longer than 90 days. Discounts cannot be combined. Unlike MAP and CODP, LNP users do not compete for resources and have full access to the leased resources 24/7.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for lease node(s) for MAP users&lt;br /&gt;
|Job (MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/30 days&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$172.80&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$264.96&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2&lt;br /&gt;
|$302.40&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$475.20&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$760.0&lt;br /&gt;
|-&lt;br /&gt;
|64 cores + 8 GPU&lt;br /&gt;
|64&lt;br /&gt;
|8&lt;br /&gt;
|$950.40&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for lease a node(s) for &amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; non &amp;lt;/span&amp;gt;-MAP users&lt;br /&gt;
|Job (non-MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/month&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$249.82&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$497.64&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2 &lt;br /&gt;
|$443.23&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$886.64&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40 &lt;br /&gt;
|8&lt;br /&gt;
|$1399.68&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
d.     &#039;&#039;&#039;Condo Ownership (COP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Condo describes a model in which user(s) own a node/server that is managed by HPCC. Only full-time faculty can own a condo node. Condo nodes are fully integrated into the HPCC infrastructure. The owners pay only HPCC’s infrastructure support operational fee, which covers only a proportional part of the licenses and materials needed for day-to-day operations. The fees are reviewed twice a year and are currently $0.003 per CPU hour and $0.02 per GPU hour. Condo owners can “borrow” (upon agreement) free of charge any node(s) from the condo stack and can also lease (for a higher fee; see below) their own nodes to non-condo users. The minimum lease term is 30 days. The fees collected from non-condo users offset the owner’s payments.&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Condo owners costs per year&lt;br /&gt;
|Type of condo node&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/year&lt;br /&gt;
|-&lt;br /&gt;
|Large hybrid SXM&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|$4518.92 &lt;br /&gt;
|-&lt;br /&gt;
|Small hybrid&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|$1540.54&lt;br /&gt;
|-&lt;br /&gt;
|Medium compute&lt;br /&gt;
|96&lt;br /&gt;
|0&lt;br /&gt;
|$2464.86&lt;br /&gt;
|-&lt;br /&gt;
|Large compute&lt;br /&gt;
|128&lt;br /&gt;
|0&lt;br /&gt;
|$3286.49&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Condo owners can lease their node(s) to non-condo users. The rental period is unlimited, with a minimum length of 30 days. The table below shows the fees that non-condo users pay to the condo owners. These fees accumulate in the owner’s account(s) and offset the owner’s payments. A 10% discount is applied to leases longer than 90 days.&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Type of nodes and monthly lease fees for condo nodes&lt;br /&gt;
!Type of node&lt;br /&gt;
!Renters cost/month&lt;br /&gt;
!Long term (90+ days) rent cost/month&lt;br /&gt;
!CPU/node&lt;br /&gt;
!CPU type&lt;br /&gt;
!GPU/node &lt;br /&gt;
!GPU type&lt;br /&gt;
!GPU interface&lt;br /&gt;
|-&lt;br /&gt;
|Large Hybrid&lt;br /&gt;
|$602.52&lt;br /&gt;
|$564.86&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.2 GHz&lt;br /&gt;
|8&lt;br /&gt;
|A100/80&lt;br /&gt;
|SXM&lt;br /&gt;
|-&lt;br /&gt;
|Small Hybrid&lt;br /&gt;
|$205.41&lt;br /&gt;
|$192.57&lt;br /&gt;
|48&lt;br /&gt;
|EPYC, 2.8 GHz&lt;br /&gt;
|2&lt;br /&gt;
|A40, A30, L40&lt;br /&gt;
|PCIe v4&lt;br /&gt;
|-&lt;br /&gt;
|Medium Non GPU&lt;br /&gt;
|$328.65&lt;br /&gt;
|$308.11&lt;br /&gt;
|96&lt;br /&gt;
|EPYC, 4.11 GHz&lt;br /&gt;
|0&lt;br /&gt;
|None&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|Large Non GPU&lt;br /&gt;
|$438.20&lt;br /&gt;
|$410.81&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.0 GHz&lt;br /&gt;
|0&lt;br /&gt;
|None &lt;br /&gt;
|NA&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Free time ===&lt;br /&gt;
Any new project &#039;&#039;&#039;from CUNY colleges/centers that participate in MAP-B or MAP-C&#039;&#039;&#039; is entitled to &#039;&#039;&#039;11,520 free CPU hours and 1,440 free GPU hours.&#039;&#039;&#039; Users under &#039;&#039;&#039;MAP-A&#039;&#039;&#039; are not entitled to free time. The free compute hours are intended to help establish a project and are therefore shared by all members of the project; they can be used by the PI or by any number of project members. It is important to note that &amp;lt;u&amp;gt;free time is per project, not per user account, so any project can receive free time only once. External collaborators are not normally eligible for free time.&amp;lt;/u&amp;gt; Additional hours beyond the free time are charged at MAP plan rates. &#039;&#039;&#039;&amp;lt;u&amp;gt;Please contact the CUNY-HPCC director for further details.&amp;lt;/u&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Support for research grants =&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;u&amp;gt;All proposals dated January 1st, 2026 (&amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; 01/01/26 &amp;lt;/span&amp;gt;) or later&amp;lt;/u&amp;gt;&#039;&#039;&#039; that require computational resources &#039;&#039;&#039;&amp;lt;u&amp;gt;must include a budget for cost recovery fees at CUNY-HPCC.&amp;lt;/u&amp;gt;&#039;&#039;&#039; For a project, the PI can choose among:&lt;br /&gt;
&lt;br /&gt;
* lease node(s). This is a useful option for well-defined projects and for those with a large computational component requiring 100% availability of the computational resource.&lt;br /&gt;
* use &amp;quot;on-demand&amp;quot; resources. This is a flexible option, well suited to experimental projects or to exploring new areas of study. The drawback is that resources are shared among all users under the fair-share policy, so immediate access to a resource cannot be guaranteed.&lt;br /&gt;
* participate in the CONDO tier. This is the most beneficial option in terms of availability of resources and level of support. It best fits the focused research of a group or groups (e.g. materials science).&lt;br /&gt;
&lt;br /&gt;
In all cases, the PI can use the appropriate rates listed above to establish a correct budget for the proposal. The PI should &#039;&#039;&#039;&amp;lt;u&amp;gt;contact the Director of CUNY-HPCC, Dr. Alexander Tzanov&amp;lt;/u&amp;gt;&#039;&#039;&#039; (alexander.tzanov@csi.cuny.edu) to discuss the project&#039;s computational requirements, including optimal and most economical computational workflows, suitable hardware, shared or owned resources, CUNY-HPCC support options, and any other matter concerning a correct and optimal computational budget for the proposal.&lt;br /&gt;
&lt;br /&gt;
= Partitions and jobs =&lt;br /&gt;
The only way to submit job(s) to HPCC servers is through the SLURM batch system. Any job, regardless of its type (interactive, batch, serial, parallel, etc.), must be submitted via SLURM. SLURM allocates the requested resources on the proper server and starts the job(s) according to a predefined, strict fair-share policy. Computational resources (CPU cores, memory, GPU) are organized into &#039;&#039;&#039;partitions&#039;&#039;&#039;. Users are granted permission to use one or another partition together with the corresponding QOS key. The table below describes the partitions and their limitations (in progress).&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Partition&lt;br /&gt;
!Max cores/job&lt;br /&gt;
!Max jobs/user&lt;br /&gt;
!Total cores/group&lt;br /&gt;
!Time limits&lt;br /&gt;
!Tier&lt;br /&gt;
!&lt;br /&gt;
!GPU types&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|partnsf&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|K20m, V100/16, A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partchem&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A100/80, A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partcfd&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partsym&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partasrc&lt;br /&gt;
|48&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabD&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|V100/16,A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabN&lt;br /&gt;
|384&lt;br /&gt;
|50&lt;br /&gt;
|384&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|None&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partphys&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|L40&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;partnsf&#039;&#039;&#039; is the main partition with assigned resources across all sub-servers. Users may submit sequential, thread parallel or distributed parallel jobs with or without GPU.&lt;br /&gt;
* &#039;&#039;&#039;partchem&#039;&#039;&#039;  is CONDO partition.  &lt;br /&gt;
* &#039;&#039;&#039;partphys&#039;&#039;&#039;   is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partsym&#039;&#039;&#039;    is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partasrc&#039;&#039;&#039;   is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partmatlabD&#039;&#039;&#039; allows running MATLAB&#039;s Distributed Parallel Server across the main cluster.&lt;br /&gt;
* &#039;&#039;&#039;partmatlabN&#039;&#039;&#039; provides access to the large MATLAB node with 384 cores and 11 TB of shared memory. It is useful for running parallel MATLAB jobs with the Parallel Computing Toolbox.&lt;br /&gt;
* &#039;&#039;&#039;partdev&#039;&#039;&#039; is dedicated to development. All HPCC users have access to this partition, which is assigned the resources of one computational node with 16 cores, 64 GB of memory and 2 GPUs (K20m). This partition has a time limit of 4 hours.&lt;br /&gt;
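&lt;br /&gt;
The following is a minimal, illustrative SLURM batch script only; the job name, resource requests and program name are placeholders, and the QOS key and account to use depend on the user&#039;s allocation (partdev is the development partition described above).&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash&lt;br /&gt;
  #SBATCH --job-name=example_job      # placeholder job name&lt;br /&gt;
  #SBATCH --partition=partdev         # development partition (4-hour limit, see above)&lt;br /&gt;
  #SBATCH --ntasks=1&lt;br /&gt;
  #SBATCH --cpus-per-task=4&lt;br /&gt;
  #SBATCH --gres=gpu:1                # request one GPU; omit this line for CPU-only jobs&lt;br /&gt;
  #SBATCH --time=01:00:00             # wall-clock limit for the job&lt;br /&gt;
  &lt;br /&gt;
  srun ./my_program                   # placeholder: replace with the actual executable&lt;br /&gt;
&lt;br /&gt;
The script is submitted with &amp;quot;sbatch myscript.sh&amp;quot; and its state can be checked with &amp;quot;squeue -u $USER&amp;quot;.&lt;br /&gt;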
&lt;br /&gt;
= Hours of Operation =&lt;br /&gt;
In order to maximize the use of resources, HPCC applies a “rolling” maintenance scheme across all systems. When downtime is needed, HPCC will notify all users a week or more in advance (unless an emergency occurs). The morning of the fourth Tuesday of each month, from 8:00 AM to 12:00 PM, is normally reserved (but not always used) for scheduled maintenance. Please plan accordingly. Unplanned maintenance to remedy system-related problems may be scheduled as needed outside the above-mentioned window. Reasonable attempts will be made to inform users running on the affected systems when such needs arise. Note that users are strongly encouraged to use checkpoints in their jobs.&lt;br /&gt;
&lt;br /&gt;
= User Support =&lt;br /&gt;
Users are strongly encouraged to read this Wiki carefully before submitting ticket(s) for help. In particular, the sections on compiling and running parallel programs, and the section on the SLURM batch queueing system, will give you the essential knowledge needed to use the CUNY HPCC systems. We have strived to maintain the most uniform user application environment possible across the Center&#039;s systems to ease the transfer of applications and run scripts among them.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY community on parallel programming techniques, HPC computing architecture, and the essentials of using our systems. Please follow our mailings on the subject and feel free to inquire about such courses. We regularly schedule training visits and classes at the various CUNY campuses; please let us know if such a training visit is of interest. In the past, topics have included an overview of parallel programming, GPU programming and architecture, using the evolutionary biology software at the HPC Center, the SLURM queueing system at the CUNY HPC Center, mixed GPU-MPI and OpenMP programming, and more. Staff have also presented guest lectures in formal classes throughout the CUNY campuses.&lt;br /&gt;
&lt;br /&gt;
If you have problems accessing your account and cannot login to the ticketing service, please send an email to:&lt;br /&gt;
&lt;br /&gt;
  [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu] &lt;br /&gt;
&lt;br /&gt;
= Warnings and modes of operation =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. hpchelp@csi.cuny.edu is for questions and account-help communication &#039;&#039;&#039;only&#039;&#039;&#039; and does not accept tickets unless the ticketing system is not operational. For tickets, please use the ticketing system mentioned above. This ensures that the staff member with the most appropriate skill set and job-related responsibility will respond to your questions. During the business week you should expect a response within 48 hours, often even the same day. During the weekend you may not get any response.&lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;&#039;E-mails to hpchelp@csi.cuny.edu must have a valid CUNY e-mail as the reply address.&#039;&#039;&#039; Messages originating from public mail providers (Gmail, Hotmail, etc.) are filtered out.&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;&#039;Do not send questions to individual CUNY HPC Center staff members directly.&#039;&#039;&#039;  These will be returned to the sender with a polite request to submit a ticket or email the Helpline.  This applies to replies to initial questions as well.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff members are focused on providing high-quality support to the user community, but compared to other HPC Centers of similar size &#039;&#039;&#039;our staff is extremely lean&#039;&#039;&#039;. Please make full use of the tools that we have provided (especially the Wiki), and feel free to offer suggestions for improved service. We hope and expect that your experience in using our systems will be predictably good and productive.&lt;br /&gt;
&lt;br /&gt;
= User Manual =&lt;br /&gt;
The old version of the user manual provides PBS, not SLURM, batch scripts as examples. CUNY-HPCC currently uses the SLURM scheduler, so users must consult and use only the updated brief SLURM manual distributed with new accounts, or ask CUNY-HPCC for a copy.&lt;/div&gt;</summary>
		<author><name>Alex</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=985</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=985"/>
		<updated>2026-03-14T15:20:09Z</updated>

		<summary type="html">&lt;p&gt;Alex: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
[[Image:hpcc-panorama3.png]]&lt;br /&gt;
&lt;br /&gt;
== Mission of CUNY-HPCC ==&lt;br /&gt;
The City University of New York (CUNY) High Performance Computing Center (HPCC) is a core research and education facility for the University. It is located on the campus of the College of Staten Island, 2800 Victory Boulevard, Staten Island, New York 10314. The core mission of CUNY-HPCC is to advance education and boost scientific research and discovery at the University by managing state-of-the-art computing infrastructure and by providing comprehensive research support services, including domain-specific expertise in various aspects of computationally intensive research. CUNY is a member of the Empire AI (EAI) consortium, so CUNY-HPCC is a stepping stone for CUNY researchers toward EAI advanced facilities.&lt;br /&gt;
&lt;br /&gt;
== Empire AI and CUNY-HPCC ==&lt;br /&gt;
The Empire AI consortium includes the &#039;&#039;&#039;CUNY Graduate Center&#039;&#039;&#039;, Columbia University, Cornell University, Icahn School of Medicine, New York University, Rochester Institute of Technology, Rensselaer Polytechnic Institute, the State University of New York, University at Buffalo, and University of Rochester.  &#039;&#039;&#039;CUNY-HPCC provides supports and maintains tickets for all CUNY users with allocation on EAI.  In addition CUNY-HPCC is a stepping stone for CUNY researchers since it operates (on smaller scale) architectures (Including nodes with Hopper) similar to EAI, including extended &amp;quot;Alpha&amp;quot; server as well new &amp;quot;Beta&amp;quot; computer.&#039;&#039;&#039; The latter will consist of 288 B200 GPU and recently added RTX 6000 pro nodes. The expected cost for &#039;&#039;&#039;EAI is $0.50/unit (SU),&#039;&#039;&#039; which will provide CUNY PIs with a rate that is well below a typical AWS rate.  One SU corresponds to one hour of H100 compute, and one hour of B200 compute corresponds to two SU.  In comparison the CUNY-HPCC recovery costs for public servers are &#039;&#039;&#039;$0.015 per CPU hour (1 unit) and $0.09 per GPU hour (6 units).&#039;&#039;&#039;  See section HPCC access plans for further details.    &lt;br /&gt;
&lt;br /&gt;
== CUNY-HPCC services ==&lt;br /&gt;
CUNY-HPCC provides professionally maintained, modern computational environment and architectures, advanced storage and fast interconnects. CUNY-HPCC:     &lt;br /&gt;
&lt;br /&gt;
:*Supports research computing at CUNY - for faculty, their collaborators at other universities, and their public and private sector partners, and CUNY students and research staff.&lt;br /&gt;
:*Provides state-of-the-art computing resources and comprehensive research support services including expertise and full support for users with allocation on EMPIRE-AI. &lt;br /&gt;
:*Creates opportunities for the CUNY research community to develop new partnerships with the government and private sectors.&lt;br /&gt;
:*Leverage the HPCC capabilities to acquire additional research resources for its faculty and graduate students in existing and major new programs.&lt;br /&gt;
:*Maintains tickets for all CUNY users with allocation on EAI.&lt;br /&gt;
&lt;br /&gt;
== Organization of systems and data storage (architecture) ==&lt;br /&gt;
All user data and project data are kept on  Parallel File System Storage (PFSS) which is mounted only on login node(s) of all servers. It holds both user directories and specific partition  called  &#039;&#039;&#039;/scratch (see below).&#039;&#039;&#039; The main features of &#039;&#039;&#039;/scratch&#039;&#039;&#039;  partition are&#039;&#039;&#039;: 1.&#039;&#039;&#039; It is mounted on all computational nodes and on all login nodes &#039;&#039;&#039;2.&#039;&#039;&#039; It is fast &#039;&#039;&#039;3.&#039;&#039;&#039; &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition is temporary space, but is &#039;&#039;&#039;not  a home directory&#039;&#039;&#039;  for accounts nor can be used for long term data preservation.  Users must use &amp;quot;staging&amp;quot; procedure described below to ensure preservation of their data, codes and parameters files. The figure below is a schematic of the environment.   &lt;br /&gt;
&lt;br /&gt;
Upon  registering with HPCC every user will get 2 directories:&lt;br /&gt;
&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – this is temporary workspace on the HPC systems. Currently scratch resides on the same file system as global/u.&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – space for “home directory”, i.e., storage space on the DSMS for program, scripts, and data&lt;br /&gt;
&lt;br /&gt;
:•	In some instances a user will also have use of disk space on the DSMS in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;projectid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; (IRods). &lt;br /&gt;
[[File:HPCC_structure.png|center|frameless|900x900px]]&lt;br /&gt;
The &#039;&#039;&#039;/global/u/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory has quota (see below for details) while  the &#039;&#039;&#039;/scratch/&amp;lt;userid&amp;gt;&#039;&#039;&#039; do not have. However the &#039;&#039;&#039;/scratch&#039;&#039;&#039; space is cleaned up  following the rules described below. There are no guarantees of any kind that files in &#039;&#039;&#039;/scratch&#039;&#039;&#039;  will be preserved during the hardware crashes or cleaning up.  Access to all HPCC resources is provided by bastion host called &#039;&amp;lt;nowiki/&amp;gt;&#039;&#039;&#039;&#039;&#039;chizen&#039;&#039;&#039;&#039;&#039;.  The Data Transfer Node called &#039;&#039;&#039;Cea&#039;&#039;&#039; allows file transfer from/to remote sites directly to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; or to/from   &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;         &lt;br /&gt;
&lt;br /&gt;
==Computing architectures at HPCC==&lt;br /&gt;
&lt;br /&gt;
The HPC Center operates variety of architectures in order to support complex and demanding workflows.  All computational resources of different types are united into single hybrid cluster called Arrow. The latter deploys symmetric multiprocessor (also referred as SMP) nodes with and without GPU, distributed shared memory (NUMA) node, fat (large memory) nodes and advanced SMP nodes with multiple GPU. The number of GPU per node varies between 2 and 8 as well as employed GPU interface and GPU family. Thus the basic GPU nodes hold  two Tesla K20m (plugged through PCIe interface) while the most advanced ones  support eight Ampere A100 GPU connected via SXM interface.   &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Overview of Computational architectures&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SMP&#039;&#039;&#039; servers have several processors (working under a single operating system) which &amp;quot;share everything&amp;quot;.  Thus  all cpu-cores allocate a common memory block via shared bus or data path. SMP servers support all combinations of memory VS cpu (up to the limits of the particular computer). The SMP servers are commonly used to run sequential or thread parallel (e.g. OpenMP) jobs and they may have or may not have GPU.  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Cluster&#039;&#039;&#039; is defined as a single system comprising set of servers interconnected with high performance network. Specific software coordinates  programs on and/or across those in order to  perform computationally intensive tasks. The most common cluster type is the one that consists of several identical SMP servers connected via fast interconnect.  Each SMP member of the cluster is called a &#039;&#039;&#039;node&#039;&#039;&#039;. All nodes run independent copies of the same operating system (OS). Some or all of the nodes may incorporate GPU.  &lt;br /&gt;
&lt;br /&gt;
Hybrid clusters combine nodes of different architectures. For instance the main CUNY-HPCC machine is a hybrid cluster called &#039;&#039;&#039;Arrow&#039;&#039;&#039;.  Sixty two (62) of its nodes are identical GPU enabled SMP servers each with 2 x GPU K20m, 3 are SMP but with extended memory (fat nodes), one node is distributed shared memory  node (NUMA, see below) and 2 are fat SMP servers especially designed to support 8 NVIDIA GPU per node. The latter are connected via SXM interface. In addition HPCC operates the cluster &#039;&#039;&#039;Herbert&#039;&#039;&#039; dedicated only to education. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Distributed shared memory&#039;&#039;&#039; computer is tightly coupled server in which the memory is physically distributed, but it is logically unified as a single block. The system resembles SMP, but the number of cpu cores and the amounts of memory possible is far beyond limitations of the SMP.  Because the memory is distributed, the access times across address space are non-uniform. Thus, this architecture is called Non Uniform Memory Access (NUMA) architecture.  Similarly to SMP, the &#039;&#039;&#039;NUMA&#039;&#039;&#039; systems are typically used for applications such as data mining and decision support system in which processing can be parceled out to a number of processors that collectively work on a common data. HPCC operates the &#039;&#039;&#039;NUMA&#039;&#039;&#039; node at Arrow named &#039;&#039;&#039;Appel&#039;&#039;&#039;.  This node does not have GPU. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039; Infrastructure systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
o	Master Head Node (&#039;&#039;&#039;MHN/Arrow)&#039;&#039;&#039; is a redundant login node from which all jobs on all servers start. This server is not directly accessible from outside CSI campus. Note that name of main server and its login nodes are the same Arrow. Thus users can access the Arrow login nodes using name Arrow or MHN.  &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Chizen&#039;&#039;&#039; is a redundant gateway server which provides access to protected HPCC domain.&lt;br /&gt;
&lt;br /&gt;
o	 &#039;&#039;&#039;Cea&#039;&#039;&#039; is a file transfer node allowing transfer of files between users’ computers and the /scratch space or /global/u/&amp;lt;userid&amp;gt;. &#039;&#039;&#039;Cea&#039;&#039;&#039; is accessible directly (not only via &#039;&#039;&#039;Chizen&#039;&#039;&#039;), but allows only a limited set of shell commands.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Table 1&#039;&#039;&#039; below provides a quick summary of the attributes of each of the sub-clusters of the main HPC Center cluster, called Arrow.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Master Head Node&lt;br /&gt;
!Sub System&lt;br /&gt;
!Tier&lt;br /&gt;
!Type&lt;br /&gt;
!Type of Jobs&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores&lt;br /&gt;
!GPUs&lt;br /&gt;
!Mem/node&lt;br /&gt;
!Mem/core&lt;br /&gt;
!Chip Type&lt;br /&gt;
!GPU Type and Interface&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;17&amp;quot; |&#039;&#039;&#039;&amp;lt;big&amp;gt;Arrow&amp;lt;/big&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Penzias&lt;br /&gt;
| rowspan=&amp;quot;10&amp;quot; |Advanced&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Hybrid Cluster&lt;br /&gt;
|Sequential &amp;amp; Parallel jobs w/wo GPU&lt;br /&gt;
|66&lt;br /&gt;
|16 &lt;br /&gt;
|2&lt;br /&gt;
|64 GB&lt;br /&gt;
|4 GB&lt;br /&gt;
|SB, EP 2.20 GHz&lt;br /&gt;
|K20m GPU, PCIe v2&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Sequential &amp;amp; Parallel jobs&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |1&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|1500 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|36&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|32 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Appel&lt;br /&gt;
|NUMA&lt;br /&gt;
|Massive Parallel, sequential, OpenMP&lt;br /&gt;
|1&lt;br /&gt;
|384&lt;br /&gt;
| -&lt;br /&gt;
|11 TB&lt;br /&gt;
|28 GB&lt;br /&gt;
|IB, 3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Cryo&lt;br /&gt;
|SMP&lt;br /&gt;
|Sequential and Parallel jobs, with GPU&lt;br /&gt;
|1&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|1500 GB&lt;br /&gt;
|37 GB&lt;br /&gt;
|SL, 2.40 GHz&lt;br /&gt;
|V100 (32GB) GPU, SXM&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Blue Moon&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Hybrid Cluster&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Sequential and Parallel jobs w/wo GPU&lt;br /&gt;
|24&lt;br /&gt;
|32&lt;br /&gt;
| -&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |192 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |6 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SL, 2.10 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|32 &lt;br /&gt;
|2&lt;br /&gt;
|V100(16GB) GPU, PCIe&lt;br /&gt;
|-&lt;br /&gt;
|Karle&lt;br /&gt;
|SMP&lt;br /&gt;
|Visualization, MATLAB/Mathematica&lt;br /&gt;
|1&lt;br /&gt;
|36*&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Chizen&lt;br /&gt;
|Gateway&lt;br /&gt;
|No jobs allowed&lt;br /&gt;
| colspan=&amp;quot;7&amp;quot; | -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CFD&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot; |Parallel, Seq, OpenMP &lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|768 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 4.8 GHz&lt;br /&gt;
|A40, PCIe, v4 &lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |PHYS&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|640 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4 GHz&lt;br /&gt;
|L40, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CHEM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.0 GHz&lt;br /&gt;
|A100/40, SXM&lt;br /&gt;
|-&lt;br /&gt;
|ASRC&lt;br /&gt;
|Condo&lt;br /&gt;
|SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|}&lt;br /&gt;
Note: SB = Intel(R) Sandy Bridge, HL = Intel (R) Haswell, IB = Intel (R) Ivy Bridge, SL = Intel (R) Xeon(R) Gold, ER  = AMD(R) EPYC ROMA, EM = AMD(R) EPYC MILAN, EG = AMD (R) EPYC GENOA   &lt;br /&gt;
&lt;br /&gt;
= Recovery of  operational costs =&lt;br /&gt;
&lt;br /&gt;
=== Compute on public resources ===&lt;br /&gt;
CUNY-HPCC is not for profit core research facility at CUNY. Our mission is to support all types of research that require advanced computational resources. CUNY-HPCC operations are not for profit and are NOT directly or indirectly sponsored by CUNY or  College of Staten Island (CSI). Consequently CUNY-HPCC applies cost recovery model recapturing only &#039;&#039;&#039;&amp;lt;u&amp;gt;operational costs with no profit for HPCC&amp;lt;/u&amp;gt;&#039;&#039;&#039;. The recovered costs are calculated using actual documented operational expenses and are break even for all CUNY users. The used methodology is approved by CUNY-RF methodology used in other CUNY research facilities. The costs are reviewed and consequently updated twice a year. The cost recovery charging schema is based on &#039;&#039;&#039;&amp;lt;u&amp;gt;unit-hour&amp;lt;/u&amp;gt;&#039;&#039;&#039;. The unit can be either CPU  unit or GPU unit. The definitions of these is given in a table below:&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Definitions of unit-hour&lt;br /&gt;
!Type of resource&lt;br /&gt;
!Unit-hour &lt;br /&gt;
!For V100, A30, A40 or L40&lt;br /&gt;
!For A100&lt;br /&gt;
|-&lt;br /&gt;
|CPU unit &lt;br /&gt;
|1 cpu core/hour&lt;br /&gt;
| --&lt;br /&gt;
| --&lt;br /&gt;
|-&lt;br /&gt;
|GPU unit &lt;br /&gt;
|(4 cpu cores + 1 GPU thread )/hour&lt;br /&gt;
|4 cpu cores + 1 GPU&lt;br /&gt;
|4 cpu cores and 1/7 A100 &lt;br /&gt;
|}&lt;br /&gt;
The cost recovery model for public (non-condominium) servers offers the following options:&lt;br /&gt;
&lt;br /&gt;
# Minimal Access Plan (MAP)&lt;br /&gt;
# Compute on Demand (CODP)&lt;br /&gt;
# Lease a node(s)&lt;br /&gt;
&lt;br /&gt;
Colleges may participate in any of the Minimal Access Plan tiers. The tiered pricing structure is as follows:&lt;br /&gt;
&lt;br /&gt;
- Tier A: $5,000 per year&lt;br /&gt;
&lt;br /&gt;
- Tier B: $10,000 per year&lt;br /&gt;
&lt;br /&gt;
- Tier C: $25,000 per year&lt;br /&gt;
&lt;br /&gt;
Within each tier, the cost is &#039;&#039;&#039;$0.015 per CPU hour and $0.09 per GPU hour.&#039;&#039;&#039; It is important to note that the Minimum Access Plan (MAP) tiers A, B, and C are &#039;&#039;&#039;not all-inclusive.&#039;&#039;&#039; Therefore, even if a college pays for a higher tier, it does not guarantee unlimited use of all college employees, faculty, and students throughout the year.&lt;br /&gt;
&lt;br /&gt;
# The MAP plan is indirectly linked to the number of hours used. Consequently, the definition of “up to 12 users for B tier” should not be interpreted as “all users up to 12 per college receive unlimited access to HPCC resources.”&lt;br /&gt;
# The number of users per tier is determined by statistical analysis of resource usage and statistics for the average duration of a job across all CUNY institutions. This means that if the number of users from a college exceeds the number of hours encoded in the MAP fee, the additional hours will be charged at the preferred rate of $0.015 per CPU hour and $0.09 per GPU thread hour.&lt;br /&gt;
# Furthermore, the &amp;lt;u&amp;gt;MAP fee may fully cover the expenses for individuals from a given college for a year,&amp;lt;/u&amp;gt; but it may also not. This depends on the actual usage and type of resources, as well as the number of additional individuals from the same college who appear during the year. It is crucial to understand that allocation on HPCC is not restricted; our focus is solely on the completion of tasks. &lt;br /&gt;
# The free 11,540 CPU hours and 1,440 GPU hours are allocated per PI account and project and are available only for MAP-B and MAP-C (see below). These free hours are intended to facilitate project establishment. Consequently, the PI can request that a member or members of a group utilize time and explore new research opportunities. For instance, upon creating his account the PI X will receive free hours. If he/she/whatever hires a graduate student(s), they may share these free hours if they work on the same project.&lt;br /&gt;
# It is important to note that each GPU requires at least four CPU cores to operate. Therefore, if a user requests one GPU thread, it equates to one GPU thread and four CPU threads, which is equivalent to $0.15 per unit hour for that unit (units are explained above). Note that not all GPU support virtualization, so unit may include the whole GPU depend on used GPU type.  &lt;br /&gt;
&lt;br /&gt;
=== Compute on demand (CODP) ===&lt;br /&gt;
Users from colleges that do not participate in MAP (A,B,C) are charged &#039;&#039;&#039;$0.018 per CPU hour and $0.11 per GPU hour.&#039;&#039;&#039; There is no free time associated with CODP.  &lt;br /&gt;
&lt;br /&gt;
=== Lease a public node(s) ===&lt;br /&gt;
Users may lease node(s) for a project. This ensures 100% access with no time or job limitations on the leased resource. The minimum lease term is 30 days (one month). Longer leases (more than 90 days) receive a 10% discount. &#039;&#039;&#039;MAP users&#039;&#039;&#039; are charged between $172 and $950 per month (see below), depending on the type of node. &#039;&#039;&#039;Non-MAP&#039;&#039;&#039; users are charged between $249.58 and $1,399 per month, depending on the type of node. Please see below for details and examples.&lt;br /&gt;
&lt;br /&gt;
=== Condo resources ===&lt;br /&gt;
Condo users only pay for infrastructure support.  The annual fee depends on the type of node and ranges from $1,540 to $4,520 per year. Please see table below for details. &lt;br /&gt;
&lt;br /&gt;
=== Storage ===&lt;br /&gt;
Storage costs are $60 per TB per year, backup costs are $45 per TB per year, and archive costs are $35 per TB per year. The first 50 GB of scratch storage are free. Prices are calculated at the end of each month.&lt;br /&gt;
&lt;br /&gt;
# Additionally, note that &#039;&#039;&#039;file transfers from/to HPCC are free&#039;&#039;&#039;, but it is important to consider the CUNY network speed, which is significantly slower than modern standards.  This will impact the time required to download large data sets. For large data HPCC utilizes and recommend to use secure parallel download via Globus.&lt;br /&gt;
# All services associated with data and storage provided by HPCC are free for CUNY users.&lt;br /&gt;
&lt;br /&gt;
=== HPCC access plans  ===&lt;br /&gt;
a.     &#039;&#039;&#039;Minimum access (MAP):&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Minimum access is designed to provide wide support for research activities in any college, to promote collaboration between colleges, to help establish a new research project, and/or to be testbed for new studies. MAP accounts operate under strict fair share policy so actual waiting time for a job in a que depends on resources used by that account in previous cycles. In addition all jobs have strict time limitations. Therefore long jobs must use check-points.&lt;br /&gt;
&lt;br /&gt;
The MAP has 3 tiers:&lt;br /&gt;
&lt;br /&gt;
·     A: Basic tier fee is $5000 per year. It is designed to provide support for users from colleges with low level of research activities. The fee covers infrastructure expenses associated with 1-2 users from these colleges. &lt;br /&gt;
&lt;br /&gt;
·     B:  Medium tier fee is $15,000 per year. The fee covers infrastructure expenses of up to 12 users from these colleges. In addition, every account under medium tier gets free 11520 CPU hours and free 1440 GPU hours upon opening. &lt;br /&gt;
&lt;br /&gt;
·      C: Advanced tier is $25,000 per year. The fee covers infrastructure expenses for all users from these colleges. In addition every new account from this tier gets free 11520 CPU hours and free 1440 GPU hours upon opening.  &lt;br /&gt;
&lt;br /&gt;
The MAP users get charged per CPU/GPU hour at low rate of &#039;&#039;&#039;&amp;lt;u&amp;gt;$0.015 per cpu hour and $0.09 per GPU hour&amp;lt;/u&amp;gt;.&#039;&#039;&#039;   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for MAP users example&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.015/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.24/hour&lt;br /&gt;
|-&lt;br /&gt;
| 4 cores + 1 GPU &lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.15/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.33/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2  &lt;br /&gt;
|$0.42/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2  &lt;br /&gt;
|$0.66/hour&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$1.32/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
b.     &#039;&#039;&#039;Computing on demand (CODP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Computing on demand plan (CODP) is open for all users from all CUNY colleges that do not participate in MAP plan, but want to use the HPCC resources. CODP accounts operate under strict fair share policy, so actual waiting time for a job in a que depends on resources previously used. In addition, all jobs have time limitations, so long jobs must use check-points. The users in CODP are charged for the time (CPU and GPU) per hour. The current rates are &#039;&#039;&#039;$0.018 per cpu hour and $0.11 per GPU hour.&#039;&#039;&#039;  In difference to MAP, the new CODP accounts does not come with free time. The invoices are generated and send to users (PI only) at the end of each month.  The examples in following table explain the fees structure:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for CODP plan&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.018/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.288/hour&lt;br /&gt;
|-&lt;br /&gt;
|4 cores + 1 GPU&lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.293/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.334/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 1 GPU&lt;br /&gt;
|32&lt;br /&gt;
|1  &lt;br /&gt;
|$0.666/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU &lt;br /&gt;
|32&lt;br /&gt;
|2&lt;br /&gt;
|$0.756/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
c.  &#039;&#039;&#039;Leasing public node(s) (LNP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Leasing node plan allows the users to lease the node(s) for the duration of the project. The minimum lease time is 30 days (one month), but leases of any length are possible. Discounts of 10% are given to users whose lease is longer than 90 days. Discounts cannot be combined.  In difference to MAP and CODP the LNP users do not compete for resources and have full access to rented resources 24/7.   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for lease node(s) for MAP users&lt;br /&gt;
|Job (MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/30 days&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$172.80&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$264.96&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2&lt;br /&gt;
|$302.40&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$475.20&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$760.0&lt;br /&gt;
|-&lt;br /&gt;
|64 cores + 8 GPU&lt;br /&gt;
|64&lt;br /&gt;
|8&lt;br /&gt;
|$950.40&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for lease a node(s) for &amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; non &amp;lt;/span&amp;gt;-MAP users&lt;br /&gt;
|Job (non-MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/month&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$249.82&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$497.64&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2 &lt;br /&gt;
|$443.23&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$886.64&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40 &lt;br /&gt;
|8&lt;br /&gt;
|$1399.68&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
d.     &#039;&#039;&#039;Condo Ownership (COP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Condo describes a model when user(s) own a node/server managed by HPCC. Only full time faculty can own condo node. Condo nodes are fully integrated into HPCC infrastructure. The owners pay only HPCC’s infrastructure support operational fee which includes only proportional part of licenses and materials need for day-to-day operations. The fees are reviewed twice a year and currently are $0.003 per CPU hour and $0.02 per GPU hour. Condo owners can “borrow”  (upon agreement) free of charge any node(s) from condo stack and can also lease (for higher fee – see below) their own nodes to non-condo users. The minimum let time is 30 days. The fees collected from non-condo users offset payments of the owner.   &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Condo owners costs per year&lt;br /&gt;
|Type of condo node&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/year&lt;br /&gt;
|-&lt;br /&gt;
|Large hybrid SXM&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|$4518.92 &lt;br /&gt;
|-&lt;br /&gt;
|Small hybrid&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|$1540.54&lt;br /&gt;
|-&lt;br /&gt;
|Medium compute&lt;br /&gt;
|96&lt;br /&gt;
|0&lt;br /&gt;
|$2464.86&lt;br /&gt;
|-&lt;br /&gt;
|Large compute&lt;br /&gt;
|128&lt;br /&gt;
|0&lt;br /&gt;
|$3286.49&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Condo owners can contract their node(s) to other non-condo users. Renting period is unlimited with min. length of 30 days. The table below shows the payments the non-condo users recompense the condo owners. These fees are accumulated in owners account(s) and do offset the owner’s duties. Discount of 10% is applied for leases longer than 90 days.    &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Type of nodes and monthly lease fees for condo nodes&lt;br /&gt;
!Type of node&lt;br /&gt;
!Renters cost/month&lt;br /&gt;
!Long term (90+ days) rent cost/month&lt;br /&gt;
!CPU/node&lt;br /&gt;
!CPU type&lt;br /&gt;
!GPU/node &lt;br /&gt;
!GPU type&lt;br /&gt;
!GPU interface&lt;br /&gt;
|-&lt;br /&gt;
|Large Hybrid&lt;br /&gt;
|$602.52&lt;br /&gt;
|$564.86&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.2 GHz&lt;br /&gt;
|8&lt;br /&gt;
|A100/80&lt;br /&gt;
|SXM&lt;br /&gt;
|-&lt;br /&gt;
|Small Hybrid&lt;br /&gt;
|$205.41&lt;br /&gt;
|$192.57&lt;br /&gt;
|48&lt;br /&gt;
|EPYC, 2.8 GHz&lt;br /&gt;
|2&lt;br /&gt;
|A40, A30, L40&lt;br /&gt;
|PCIe v4&lt;br /&gt;
|-&lt;br /&gt;
|Medium Non GPU&lt;br /&gt;
|$328.65&lt;br /&gt;
|$308.11&lt;br /&gt;
|96&lt;br /&gt;
|EPYC, 4.11 GHz&lt;br /&gt;
|0&lt;br /&gt;
|None&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|Large Non GPU&lt;br /&gt;
|$438.20&lt;br /&gt;
|$410.81&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.0 GHz&lt;br /&gt;
|0&lt;br /&gt;
|None &lt;br /&gt;
|NA&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Free time ===&lt;br /&gt;
Any new project &#039;&#039;&#039;from CUNY colleges/centers that participate in MAP-B or MAP-C&#039;&#039;&#039; is entitled to &#039;&#039;&#039;11520 free CPU hours and 1440 free GPU hours.&#039;&#039;&#039; Users under &#039;&#039;&#039;MAP-A&#039;&#039;&#039; are not entitled to free time. The free compute hours are intended to help establish a project and are therefore shared among all members of the project: they can be used either by the PI or by any number of the project&#039;s members. It is important to note that &amp;lt;u&amp;gt;free time is per project, not per user account, so any project can receive free time only once. External collaborators to CUNY are not normally eligible for free time.&amp;lt;/u&amp;gt; Additional hours beyond the free time are charged at MAP plan rates. &#039;&#039;&#039;&amp;lt;u&amp;gt;Please contact the CUNY-HPCC director for further details.&amp;lt;/u&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Support for research grants =&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;u&amp;gt;All proposals dated Jan 1st, 2026 (&amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; 01/01/26 &amp;lt;/span&amp;gt;) and later&amp;lt;/u&amp;gt;&#039;&#039;&#039; that require computational resources &#039;&#039;&#039;&amp;lt;u&amp;gt;must include a budget for cost recovery fees at CUNY-HPCC.&amp;lt;/u&amp;gt;&#039;&#039;&#039; For a project, the PI can choose to:&lt;br /&gt;
&lt;br /&gt;
* lease node(s). This is a useful option for well-defined projects and those with a high computational component requiring 100% availability of the computational resource.&lt;br /&gt;
* use &amp;quot;on-demand&amp;quot; resources. This is a flexible option well suited to experimental projects or exploring new areas of study. The drawback is that resources are shared among all users under the fair share policy, so immediate access to a resource cannot be guaranteed.&lt;br /&gt;
* participate in the CONDO tier. This is the most beneficial option in terms of availability of resources and level of support. It best fits the focused research of a group or groups (e.g. materials science).&lt;br /&gt;
&lt;br /&gt;
In all cases the PI can use the appropriate rates listed above to establish a correct budget for the proposal. The PI should &#039;&#039;&#039;&amp;lt;u&amp;gt;contact the Director of CUNY-HPCC, Dr. Alexander Tzanov&amp;lt;/u&amp;gt;&#039;&#039;&#039; (alexander.tzanov@csi.cuny.edu) to discuss the project&#039;s computational requirements, including optimal and most economical computational workflows, suitable hardware, shared or owned resources, CUNY-HPCC support options, and any other matter concerning a correct and optimal computational budget for the proposal.&lt;br /&gt;
&lt;br /&gt;
= Partitions and jobs =&lt;br /&gt;
The only way to submit job(s) to HPCC servers is through the SLURM batch system. Any job, regardless of its type (interactive, batch, serial, parallel, etc.), must be submitted via SLURM, which allocates the requested resources on the proper server and starts the job(s) according to a predefined, strict fair share policy. Computational resources (CPU cores, memory, GPU) are organized in &#039;&#039;&#039;partitions&#039;&#039;&#039;. Users are granted permission to use one or another partition and the corresponding QOS key. The table below shows the partitions and their limitations (in progress); a minimal example batch script is given after the partition descriptions below.&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Partition&lt;br /&gt;
!Max cores/job&lt;br /&gt;
!Max jobs/user&lt;br /&gt;
!Total cores/group&lt;br /&gt;
!Time limits&lt;br /&gt;
!Tier&lt;br /&gt;
!&lt;br /&gt;
!GPU types&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|partnsf&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|K20m, V100/16, A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partchem&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A100/80, A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partcfd&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partsym&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partasrc&lt;br /&gt;
|48&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabD&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|V100/16,A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabN&lt;br /&gt;
|384&lt;br /&gt;
|50&lt;br /&gt;
|384&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|None&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partphys&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|L40&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;partnsf&#039;&#039;&#039; is the main partition with assigned resources across all sub-servers. Users may submit sequential, thread parallel or distributed parallel jobs with or without GPU.&lt;br /&gt;
* &#039;&#039;&#039;partchem&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partphys&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partsym&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partasrc&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partmatlabD&#039;&#039;&#039; allows running MATLAB&#039;s Distributed Parallel Server across the main cluster.&lt;br /&gt;
* &#039;&#039;&#039;partmatlabN&#039;&#039;&#039; provides access to the large MATLAB node with 384 cores and 11 TB of shared memory. It is useful for running parallel MATLAB jobs with the Parallel Computing Toolbox.&lt;br /&gt;
* &#039;&#039;&#039;partdev&#039;&#039;&#039; is dedicated to development. All HPCC users have access to this partition, which has the assigned resources of one computational node with 16 cores, 64 GB of memory and 2 GPUs (K20m). This partition has a time limit of 4 hours.&lt;br /&gt;
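&lt;br /&gt;
The sketch below is a minimal, illustrative SLURM batch script, not an official template: the partition, QOS key, resource numbers and application name are placeholders and must be replaced with the values granted to your account.&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --job-name=example&lt;br /&gt;
 #SBATCH --partition=partnsf        # partition assigned to your account&lt;br /&gt;
 #SBATCH --qos=myqos                # placeholder QOS key granted with the account&lt;br /&gt;
 #SBATCH --nodes=1&lt;br /&gt;
 #SBATCH --ntasks=16                # 16 CPU cores&lt;br /&gt;
 #SBATCH --gres=gpu:1               # request one GPU; omit for CPU-only jobs&lt;br /&gt;
 #SBATCH --time=24:00:00            # must stay within the partition time limit&lt;br /&gt;
 #SBATCH --output=job_%j.out&lt;br /&gt;
 &lt;br /&gt;
 cd /scratch/$USER/myrun            # run from scratch space, then stage results back&lt;br /&gt;
 srun ./my_application              # placeholder executable&lt;br /&gt;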
&lt;br /&gt;
= Hours of Operation =&lt;br /&gt;
In order to maximize the use of resources, HPCC applies a “rolling” maintenance scheme across all systems. When downtime is needed, HPCC will notify all users a week or more in advance (unless an emergency situation occurs). The fourth Tuesday morning of the month, from 8:00 AM to 12 PM, is normally reserved (but not always used) for scheduled maintenance. Please plan accordingly. Unplanned maintenance to remedy system-related problems may be scheduled as needed outside the above-mentioned window. Reasonable attempts will be made to inform users running on the affected systems when such needs arise. Note that users are strongly encouraged to use checkpoints in their jobs.&lt;br /&gt;
&lt;br /&gt;
= User Support =&lt;br /&gt;
Users are strongly encouraged to read this Wiki carefully before submitting ticket(s) for help. In particular, the sections on compiling and running parallel programs, and the section on the SLURM batch queueing system will give you the essential knowledge needed to use the CUNY HPCC systems.  We have strived to maintain the most uniform user applications environment possible across the Center&#039;s systems to ease the transfer of applications&lt;br /&gt;
and run scripts among them.  &lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY&lt;br /&gt;
community in parallel programming techniques, HPC computing architecture, and the essentials of using our&lt;br /&gt;
systems. Please follow our mailings on the subject and feel free to inquire about such courses. We regularly schedule training visits and classes at the various CUNY campuses. Please let us know if such a training visit is of interest. In the past, topics have included an overview of parallel programming, GPU programming and architecture, using the evolutionary biology software at the HPC Center, the SLURM queueing system at the CUNY HPC Center, mixed GPU-MPI and OpenMP programming, etc. Staff have also presented guest lectures at formal classes throughout the CUNY campuses.&lt;br /&gt;
&lt;br /&gt;
If you have problems accessing your account and cannot login to the ticketing service, please send an email to:&lt;br /&gt;
&lt;br /&gt;
  [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu] &lt;br /&gt;
&lt;br /&gt;
= Warnings and modes of operation =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. hpchelp@csi.cuny.edu is for questions and account-help communication &#039;&#039;&#039;only&#039;&#039;&#039; and does not accept tickets unless the ticketing system is not operational. For tickets please use the ticketing system mentioned above. This ensures that the person on staff with the most appropriate skill set and job-related responsibility will respond to your questions. During the business week you should expect a response within 48 hours, quite often even a same-day response. During the weekend you may not get any response.&lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;&#039;E-mails to hpchelp@csi.cuny.edu must have a valid CUNY e-mail as the reply address.&#039;&#039;&#039; Messages originating from public mailers (Google, Hotmail, etc.) are filtered out.&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;&#039;Do not send questions to individual CUNY HPC Center staff members directly.&#039;&#039;&#039;  These will be returned to the sender with a polite request to submit a ticket or email the Helpline.  This applies to replies to initial questions as well.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff members are focused on providing high-quality support to the user community, but compared&lt;br /&gt;
to other HPC Centers of similar size &#039;&#039;&#039;our staff is extremely lean&#039;&#039;&#039;.  Please make full use of the tools that we have provided (especially&lt;br /&gt;
the Wiki), and feel free to offer suggestions for improved service.  We hope and expect your experience in using&lt;br /&gt;
our systems will be predictably good and productive.&lt;br /&gt;
&lt;br /&gt;
= User Manual =&lt;br /&gt;
The old version of the user manual provides PBS rather than SLURM batch scripts as examples. CUNY-HPCC currently uses the SLURM scheduler, so users must rely only on the updated brief SLURM manual distributed with new accounts, or ask CUNY-HPCC for a copy.&lt;/div&gt;</summary>
		<author><name>Alex</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=984</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=984"/>
		<updated>2026-03-14T15:16:57Z</updated>

		<summary type="html">&lt;p&gt;Alex: /* Hours of Operation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
[[Image:hpcc-panorama3.png]]&lt;br /&gt;
&lt;br /&gt;
== Mission of CUNY-HPCC ==&lt;br /&gt;
The City University of New York (CUNY) High Performance Computing Center (HPCC) is a core research and education facility for the University. It is located on the campus of the College of Staten Island, 2800 Victory Boulevard, Staten Island, New York 10314. The core mission of CUNY-HPCC is to advance education and boost scientific research and discovery at the University by managing state-of-the-art computing infrastructure and by providing comprehensive research support services, including domain-specific expertise in various aspects of computationally intensive research. CUNY is a member of the Empire AI (EAI) consortium, so CUNY-HPCC is a stepping stone for CUNY researchers toward EAI advanced facilities.&lt;br /&gt;
&lt;br /&gt;
== Empire AI and CUNY-HPCC ==&lt;br /&gt;
The Empire AI consortium includes the &#039;&#039;&#039;CUNY Graduate Center&#039;&#039;&#039;, Columbia University, Cornell University, Icahn School of Medicine, New York University, Rochester Institute of Technology, Rensselaer Polytechnic Institute, the State University of New York, University at Buffalo, and University of Rochester. &#039;&#039;&#039;CUNY-HPCC provides support and maintains tickets for all CUNY users with allocation on EAI. In addition, CUNY-HPCC is a stepping stone for CUNY researchers since it operates (on a smaller scale) architectures (including nodes with Hopper) similar to EAI, including extended &amp;quot;Alpha&amp;quot; servers as well as the new &amp;quot;Beta&amp;quot; computer.&#039;&#039;&#039; The latter will consist of 288 B200 GPUs and recently added RTX 6000 Pro nodes. The expected cost for &#039;&#039;&#039;EAI is $0.50/unit (SU),&#039;&#039;&#039; which will provide CUNY PIs with a rate that is well below a typical AWS rate. One SU corresponds to one hour of H100 compute, and one hour of B200 compute corresponds to two SU. In comparison, the CUNY-HPCC recovery costs for public servers are &#039;&#039;&#039;$0.015 per CPU hour (1 unit) and $0.09 per GPU hour (6 units).&#039;&#039;&#039; See the section on HPCC access plans for further details.&lt;br /&gt;
&lt;br /&gt;
== CUNY-HPCC services ==&lt;br /&gt;
CUNY-HPCC provides a professionally maintained, modern computational environment and architectures, advanced storage and fast interconnects. CUNY-HPCC:&lt;br /&gt;
&lt;br /&gt;
:*Supports research computing at CUNY - for faculty, their collaborators at other universities, and their public and private sector partners, and CUNY students and research staff.&lt;br /&gt;
:*Provides state-of-the-art computing resources and comprehensive research support services including expertise and full support for users with allocation on EMPIRE-AI. &lt;br /&gt;
:*Creates opportunities for the CUNY research community to develop new partnerships with the government and private sectors.&lt;br /&gt;
:*Leverages HPCC capabilities to acquire additional research resources for its faculty and graduate students in existing and major new programs.&lt;br /&gt;
:*Maintains tickets for all CUNY users with allocation on EAI.&lt;br /&gt;
&lt;br /&gt;
== Organization of systems and data storage (architecture) ==&lt;br /&gt;
All user data and project data are kept on the Parallel File System Storage (PFSS), which is mounted only on the login node(s) of all servers. It holds both user directories and a specific partition called &#039;&#039;&#039;/scratch (see below).&#039;&#039;&#039; The main features of the &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition are&#039;&#039;&#039;: 1.&#039;&#039;&#039; It is mounted on all computational nodes and on all login nodes. &#039;&#039;&#039;2.&#039;&#039;&#039; It is fast. &#039;&#039;&#039;3.&#039;&#039;&#039; The &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition is temporary space; it is &#039;&#039;&#039;not a home directory&#039;&#039;&#039; for accounts, nor can it be used for long-term data preservation. Users must use the &amp;quot;staging&amp;quot; procedure described below to ensure preservation of their data, codes and parameter files. The figure below is a schematic of the environment.&lt;br /&gt;
&lt;br /&gt;
Upon  registering with HPCC every user will get 2 directories:&lt;br /&gt;
&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – this is temporary workspace on the HPC systems. Currently scratch resides on the same file system as global/u.&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – space for “home directory”, i.e., storage space on the DSMS for program, scripts, and data&lt;br /&gt;
&lt;br /&gt;
:•	In some instances a user will also have use of disk space on the DSMS in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;projectid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; (IRods). &lt;br /&gt;
[[File:HPCC_structure.png|center|frameless|900x900px]]&lt;br /&gt;
The &#039;&#039;&#039;/global/u/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory has a quota (see below for details) while &#039;&#039;&#039;/scratch/&amp;lt;userid&amp;gt;&#039;&#039;&#039; does not. However, the &#039;&#039;&#039;/scratch&#039;&#039;&#039; space is cleaned up following the rules described below. There are no guarantees of any kind that files in &#039;&#039;&#039;/scratch&#039;&#039;&#039; will be preserved during hardware crashes or clean-up. Access to all HPCC resources is provided by a bastion host called &#039;&amp;lt;nowiki/&amp;gt;&#039;&#039;&#039;&#039;&#039;chizen&#039;&#039;&#039;&#039;&#039;. The Data Transfer Node, called &#039;&#039;&#039;Cea&#039;&#039;&#039;, allows file transfer from/to remote sites directly to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; or to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
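&lt;br /&gt;
The lines below are a minimal sketch of the staging idea referred to above (copy inputs to /scratch before a run and copy results back afterwards). The account name myuserid and the directory myproject are placeholders, and the authoritative staging procedure is the one described later on this page.&lt;br /&gt;
 # stage inputs from the home area to the fast /scratch space before a run&lt;br /&gt;
 cp -r /global/u/myuserid/myproject /scratch/myuserid/&lt;br /&gt;
 # ... run the job from /scratch/myuserid/myproject ...&lt;br /&gt;
 # copy results back, since /scratch is cleaned up and its files are not guaranteed to be preserved&lt;br /&gt;
 cp -r /scratch/myuserid/myproject/results /global/u/myuserid/myproject/&lt;br /&gt;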
&lt;br /&gt;
==HPC systems==&lt;br /&gt;
&lt;br /&gt;
The HPC Center operates a variety of architectures in order to support complex and demanding workflows. All computational resources of different types are united into a single hybrid cluster called Arrow. The latter deploys symmetric multiprocessor (also referred to as SMP) nodes with and without GPUs, a distributed shared memory (NUMA) node, fat (large memory) nodes and advanced SMP nodes with multiple GPUs. The number of GPUs per node varies between 2 and 8, as do the GPU interface and GPU family employed. Thus the basic GPU nodes hold two Tesla K20m GPUs (attached through the PCIe interface) while the most advanced ones support eight Ampere A100 GPUs connected via the SXM interface.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Overview of Computational architectures&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SMP&#039;&#039;&#039; servers have several processors (working under a single operating system) which &amp;quot;share everything&amp;quot;: all CPU cores access a common memory block via a shared bus or data path. SMP servers support all combinations of memory vs. CPU (up to the limits of the particular computer). SMP servers are commonly used to run sequential or thread-parallel (e.g. OpenMP) jobs, and they may or may not have GPUs.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Cluster&#039;&#039;&#039; is defined as a single system comprising a set of servers interconnected with a high-performance network. Specific software coordinates programs on and/or across those servers in order to perform computationally intensive tasks. The most common cluster type consists of several identical SMP servers connected via a fast interconnect. Each SMP member of the cluster is called a &#039;&#039;&#039;node&#039;&#039;&#039;. All nodes run independent copies of the same operating system (OS). Some or all of the nodes may incorporate GPUs.&lt;br /&gt;
&lt;br /&gt;
Hybrid clusters combine nodes of different architectures. For instance, the main CUNY-HPCC machine is a hybrid cluster called &#039;&#039;&#039;Arrow&#039;&#039;&#039;. Sixty-two (62) of its nodes are identical GPU-enabled SMP servers, each with 2 K20m GPUs; 3 are SMP nodes with extended memory (fat nodes); one node is a distributed shared memory node (NUMA, see below); and 2 are fat SMP servers specially designed to support 8 NVIDIA GPUs per node, connected via the SXM interface. In addition, HPCC operates the cluster &#039;&#039;&#039;Herbert&#039;&#039;&#039;, dedicated only to education.&lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;distributed shared memory&#039;&#039;&#039; computer is a tightly coupled server in which the memory is physically distributed but logically unified as a single block. The system resembles an SMP, but the possible number of CPU cores and amount of memory go far beyond the limitations of SMP. Because the memory is distributed, access times across the address space are non-uniform; hence this architecture is called Non-Uniform Memory Access (NUMA). Similarly to SMP, &#039;&#039;&#039;NUMA&#039;&#039;&#039; systems are typically used for applications such as data mining and decision support systems, in which processing can be parceled out to a number of processors that collectively work on common data. HPCC operates the &#039;&#039;&#039;NUMA&#039;&#039;&#039; node of Arrow, named &#039;&#039;&#039;Appel&#039;&#039;&#039;. This node does not have GPUs.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039; Infrastructure systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
o	Master Head Node (&#039;&#039;&#039;MHN/Arrow)&#039;&#039;&#039; is a redundant login node from which all jobs on all servers start. This server is not directly accessible from outside the CSI campus. Note that the main server and its login nodes share the same name, Arrow; thus users can access the Arrow login nodes using either the name Arrow or MHN.&lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Chizen&#039;&#039;&#039; is a redundant gateway server which provides access to protected HPCC domain.&lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Cea&#039;&#039;&#039; is a file transfer node allowing transfer of files between users’ computers and the /scratch space or /global/u/&amp;lt;userid&amp;gt;. &#039;&#039;&#039;Cea&#039;&#039;&#039; is accessible directly (not only via &#039;&#039;&#039;Chizen&#039;&#039;&#039;), but allows only a limited set of shell commands.&lt;br /&gt;
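&lt;br /&gt;
The lines below are a minimal, illustrative sketch of transferring files through &#039;&#039;&#039;Cea&#039;&#039;&#039; with scp; the host name cea.csi.cuny.edu and the user id myuserid are placeholders, not confirmed values, so substitute the address and account name supplied by HPCC.&lt;br /&gt;
 # copy a local input file to your scratch space via the Cea data transfer node&lt;br /&gt;
 # cea.csi.cuny.edu is a placeholder host name - use the one supplied with your account&lt;br /&gt;
 scp input.dat myuserid@cea.csi.cuny.edu:/scratch/myuserid/&lt;br /&gt;
 # pull results back from the home directory on /global/u&lt;br /&gt;
 scp myuserid@cea.csi.cuny.edu:/global/u/myuserid/results.tar.gz .&lt;br /&gt;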
&lt;br /&gt;
&#039;&#039;&#039;Table 1&#039;&#039;&#039; below provides a quick summary of the attributes of each of the sub-clusters of the HPC Center&#039;s main system, called Arrow.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Master Head Node&lt;br /&gt;
!Sub System&lt;br /&gt;
!Tier&lt;br /&gt;
!Type&lt;br /&gt;
!Type of Jobs&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores&lt;br /&gt;
!GPUs&lt;br /&gt;
!Mem/node&lt;br /&gt;
!Mem/core&lt;br /&gt;
!Chip Type&lt;br /&gt;
!GPU Type and Interface&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;17&amp;quot; |&#039;&#039;&#039;&amp;lt;big&amp;gt;Arrow&amp;lt;/big&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Penzias&lt;br /&gt;
| rowspan=&amp;quot;10&amp;quot; |Advanced&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Hybrid Cluster&lt;br /&gt;
|Sequential &amp;amp; Parallel jobs w/wo GPU&lt;br /&gt;
|66&lt;br /&gt;
|16 &lt;br /&gt;
|2&lt;br /&gt;
|64 GB&lt;br /&gt;
|4 GB&lt;br /&gt;
|SB, EP 2.20 GHz&lt;br /&gt;
|K20m GPU, PCIe v2&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Sequential &amp;amp; Parallel jobs&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |1&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|1500 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|36&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|32 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Appel&lt;br /&gt;
|NUMA&lt;br /&gt;
|Massive Parallel, sequential, OpenMP&lt;br /&gt;
|1&lt;br /&gt;
|384&lt;br /&gt;
| -&lt;br /&gt;
|11 TB&lt;br /&gt;
|28 GB&lt;br /&gt;
|IB, 3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Cryo&lt;br /&gt;
|SMP&lt;br /&gt;
|Sequential and Parallel jobs, with GPU&lt;br /&gt;
|1&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|1500 GB&lt;br /&gt;
|37 GB&lt;br /&gt;
|SL, 2.40 GHz&lt;br /&gt;
|V100 (32GB) GPU, SXM&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Blue Moon&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Hybrid Cluster&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Sequential and Parallel jobs w/wo GPU&lt;br /&gt;
|24&lt;br /&gt;
|32&lt;br /&gt;
| -&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |192 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |6 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SL, 2.10 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|32 &lt;br /&gt;
|2&lt;br /&gt;
|V100(16GB) GPU, PCIe&lt;br /&gt;
|-&lt;br /&gt;
|Karle&lt;br /&gt;
|SMP&lt;br /&gt;
|Visualization, MATLAB/Mathematica&lt;br /&gt;
|1&lt;br /&gt;
|36*&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Chizen&lt;br /&gt;
|Gateway&lt;br /&gt;
|No jobs allowed&lt;br /&gt;
| colspan=&amp;quot;7&amp;quot; | -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CFD&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot; |Parallel, Seq, OpenMP &lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|768 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 4.8 GHz&lt;br /&gt;
|A40, PCIe, v4 &lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |PHYS&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|640 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4 GHz&lt;br /&gt;
|L40, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CHEM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.0 GHz&lt;br /&gt;
|A100/40, SXM&lt;br /&gt;
|-&lt;br /&gt;
|ASRC&lt;br /&gt;
|Condo&lt;br /&gt;
|SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|}&lt;br /&gt;
Note: SB = Intel(R) Sandy Bridge, HL = Intel(R) Haswell, IB = Intel(R) Ivy Bridge, SL = Intel(R) Xeon(R) Gold, ER = AMD(R) EPYC ROME, EM = AMD(R) EPYC MILAN, EG = AMD(R) EPYC GENOA&lt;br /&gt;
&lt;br /&gt;
= Recovery of  operational costs =&lt;br /&gt;
&lt;br /&gt;
=== Compute on public resources ===&lt;br /&gt;
CUNY-HPCC is a not-for-profit core research facility at CUNY. Our mission is to support all types of research that require advanced computational resources. CUNY-HPCC operations are not for profit and are NOT directly or indirectly sponsored by CUNY or the College of Staten Island (CSI). Consequently, CUNY-HPCC applies a cost recovery model that recaptures only &#039;&#039;&#039;&amp;lt;u&amp;gt;operational costs with no profit for HPCC&amp;lt;/u&amp;gt;&#039;&#039;&#039;. The recovered costs are calculated using actual documented operational expenses and are break-even for all CUNY users. The methodology follows the CUNY-RF-approved approach used in other CUNY research facilities. The costs are reviewed and, where necessary, updated twice a year. The cost recovery charging scheme is based on the &#039;&#039;&#039;&amp;lt;u&amp;gt;unit-hour&amp;lt;/u&amp;gt;&#039;&#039;&#039;. The unit can be either a CPU unit or a GPU unit. The definitions are given in the table below:&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Definitions of unit-hour&lt;br /&gt;
!Type of resource&lt;br /&gt;
!Unit-hour &lt;br /&gt;
!For V100, A30, A40 or L40&lt;br /&gt;
!For A100&lt;br /&gt;
|-&lt;br /&gt;
|CPU unit &lt;br /&gt;
|1 cpu core/hour&lt;br /&gt;
| --&lt;br /&gt;
| --&lt;br /&gt;
|-&lt;br /&gt;
|GPU unit &lt;br /&gt;
|(4 CPU cores + 1 GPU thread)/hour&lt;br /&gt;
|4 cpu cores + 1 GPU&lt;br /&gt;
|4 cpu cores and 1/7 A100 &lt;br /&gt;
|}&lt;br /&gt;
The cost recovery model for public (non-condominium) servers offers the following options:&lt;br /&gt;
&lt;br /&gt;
# Minimal Access Plan (MAP)&lt;br /&gt;
# Compute on Demand (CODP)&lt;br /&gt;
# Lease a node(s)&lt;br /&gt;
&lt;br /&gt;
Colleges may participate in any of the Minimal Access Plan tiers. The tiered pricing structure is as follows:&lt;br /&gt;
&lt;br /&gt;
- Tier A: $5,000 per year&lt;br /&gt;
&lt;br /&gt;
- Tier B: $10,000 per year&lt;br /&gt;
&lt;br /&gt;
- Tier C: $25,000 per year&lt;br /&gt;
&lt;br /&gt;
Within each tier, the cost is &#039;&#039;&#039;$0.015 per CPU hour and $0.09 per GPU hour.&#039;&#039;&#039; It is important to note that the Minimum Access Plan (MAP) tiers A, B, and C are &#039;&#039;&#039;not all-inclusive.&#039;&#039;&#039; Therefore, even if a college pays for a higher tier, this does not guarantee unlimited use by all college employees, faculty, and students throughout the year.&lt;br /&gt;
&lt;br /&gt;
# The MAP plan is indirectly linked to the number of hours used. Consequently, the definition of “up to 12 users for B tier” should not be interpreted as “all users up to 12 per college receive unlimited access to HPCC resources.”&lt;br /&gt;
# The number of users per tier is determined by statistical analysis of resource usage and statistics for the average duration of a job across all CUNY institutions. This means that if the number of users from a college exceeds the number of hours encoded in the MAP fee, the additional hours will be charged at the preferred rate of $0.015 per CPU hour and $0.09 per GPU thread hour.&lt;br /&gt;
# Furthermore, the &amp;lt;u&amp;gt;MAP fee may fully cover the expenses for individuals from a given college for a year,&amp;lt;/u&amp;gt; but it may also not. This depends on the actual usage and type of resources, as well as the number of additional individuals from the same college who appear during the year. It is crucial to understand that allocation on HPCC is not restricted; our focus is solely on the completion of tasks. &lt;br /&gt;
# The free 11,520 CPU hours and 1,440 GPU hours are allocated per PI account and project and are available only for MAP-B and MAP-C (see below). These free hours are intended to facilitate project establishment. Consequently, the PI can request that a member or members of the group utilize the time and explore new research opportunities. For instance, upon creating an account, PI X will receive the free hours; if the PI hires graduate students, they may share these free hours provided they work on the same project.&lt;br /&gt;
# It is important to note that each GPU requires at least four CPU cores to operate. Therefore, if a user requests one GPU thread, it equates to one GPU thread and four CPU threads, which is equivalent to $0.15 per unit hour for that unit (units are explained above); a worked example is given after this list. Note that not all GPUs support virtualization, so a unit may include the whole GPU depending on the GPU type used.&lt;br /&gt;
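&lt;br /&gt;
The worked example below only illustrates how the unit-hour charge composes under the MAP rates quoted above ($0.015 per CPU hour, $0.09 per GPU hour); it introduces no new rates, and the per-hour figures match the MAP examples table further down.&lt;br /&gt;
 1 GPU unit = 4 CPU cores x $0.015 + 1 GPU x $0.09 = $0.06 + $0.09 = $0.15 per hour&lt;br /&gt;
 16 cores + 2 GPU: 16 x $0.015 + 2 x $0.09 = $0.24 + $0.18 = $0.42 per hour&lt;br /&gt;
 24-hour run at that size: 24 x $0.42 = $10.08&lt;br /&gt;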
&lt;br /&gt;
=== Compute on demand (CODP) ===&lt;br /&gt;
Users from colleges that do not participate in MAP (A,B,C) are charged &#039;&#039;&#039;$0.018 per CPU hour and $0.11 per GPU hour.&#039;&#039;&#039; There is no free time associated with CODP.  &lt;br /&gt;
&lt;br /&gt;
=== Lease a public node(s) ===&lt;br /&gt;
Users may lease node(s) for a project. That ensures 100% access and no time or job limitations on the leased resource. The minimum lease time is 30 days (one month). Longer leases (more than 90 days) receive a 10% discount. &#039;&#039;&#039;MAP users&#039;&#039;&#039; are charged between $172 and $950 per month (see below), depending on the type of node. &#039;&#039;&#039;Non-MAP&#039;&#039;&#039; users are charged between $249.82 and $1,399 per month, depending on the type of node. Please see below for details and examples.&lt;br /&gt;
&lt;br /&gt;
=== Condo resources ===&lt;br /&gt;
Condo users only pay for infrastructure support.  The annual fee depends on the type of node and ranges from $1,540 to $4,520 per year. Please see table below for details. &lt;br /&gt;
&lt;br /&gt;
=== Storage ===&lt;br /&gt;
Storage costs are $60 per TB per year, backup costs are $45 per TB per year, and archive costs are $35 per TB per year. The first 50 GB of scratch storage are free. Prices are calculated at the end of each month.&lt;br /&gt;
&lt;br /&gt;
# Additionally, note that &#039;&#039;&#039;file transfers from/to HPCC are free&#039;&#039;&#039;, but it is important to consider the CUNY network speed, which is significantly slower than modern standards. This will impact the time required to download large data sets. For large data sets HPCC uses, and recommends using, secure parallel transfer via Globus.&lt;br /&gt;
# All services associated with data and storage provided by HPCC are free for CUNY users.&lt;br /&gt;
&lt;br /&gt;
=== HPCC access plans  ===&lt;br /&gt;
a.     &#039;&#039;&#039;Minimum access (MAP):&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Minimum access is designed to provide wide support for research activities in any college, to promote collaboration between colleges, to help establish new research projects, and/or to serve as a testbed for new studies. MAP accounts operate under a strict fair share policy, so the actual waiting time of a job in a queue depends on the resources used by that account in previous cycles. In addition, all jobs have strict time limitations; therefore long jobs must use checkpoints.&lt;br /&gt;
&lt;br /&gt;
The MAP has 3 tiers:&lt;br /&gt;
&lt;br /&gt;
·     A: The basic tier fee is $5000 per year. It is designed to provide support for users from colleges with a low level of research activity. The fee covers the infrastructure expenses associated with 1-2 users from these colleges.&lt;br /&gt;
&lt;br /&gt;
·     B: The medium tier fee is $15,000 per year. The fee covers the infrastructure expenses of up to 12 users from these colleges. In addition, every account under the medium tier gets 11520 free CPU hours and 1440 free GPU hours upon opening.&lt;br /&gt;
&lt;br /&gt;
·      C: The advanced tier fee is $25,000 per year. The fee covers the infrastructure expenses for all users from these colleges. In addition, every new account under this tier gets 11520 free CPU hours and 1440 free GPU hours upon opening.&lt;br /&gt;
&lt;br /&gt;
MAP users are charged per CPU/GPU hour at the low rate of &#039;&#039;&#039;&amp;lt;u&amp;gt;$0.015 per CPU hour and $0.09 per GPU hour&amp;lt;/u&amp;gt;.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for MAP users example&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.015/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.24/hour&lt;br /&gt;
|-&lt;br /&gt;
| 4 cores + 1 GPU &lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.15/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.33/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2  &lt;br /&gt;
|$0.42/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2  &lt;br /&gt;
|$0.66/hour&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$1.32/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
b.     &#039;&#039;&#039;Computing on demand (CODP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The computing on demand plan (CODP) is open to users from all CUNY colleges that do not participate in the MAP plan but want to use HPCC resources. CODP accounts operate under a strict fair share policy, so the actual waiting time of a job in a queue depends on the resources previously used. In addition, all jobs have time limitations, so long jobs must use checkpoints. Users in CODP are charged for the time (CPU and GPU) per hour. The current rates are &#039;&#039;&#039;$0.018 per CPU hour and $0.11 per GPU hour.&#039;&#039;&#039; Unlike MAP, new CODP accounts do not come with free time. Invoices are generated and sent to users (PIs only) at the end of each month. The examples in the following table explain the fee structure:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for CODP plan&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.018/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.288/hour&lt;br /&gt;
|-&lt;br /&gt;
|4 cores + 1 GPU&lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.293/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.334/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 1 GPU&lt;br /&gt;
|32&lt;br /&gt;
|1  &lt;br /&gt;
|$0.666/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU &lt;br /&gt;
|32&lt;br /&gt;
|2&lt;br /&gt;
|$0.756/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
c.  &#039;&#039;&#039;Leasing public node(s) (LNP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The leasing node plan allows users to lease node(s) for the duration of a project. The minimum lease time is 30 days (one month), but leases of any length are possible. Discounts of 10% are given to users whose lease is longer than 90 days. Discounts cannot be combined. Unlike MAP and CODP users, LNP users do not compete for resources and have full access to the rented resources 24/7.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for leasing node(s) for MAP users&lt;br /&gt;
|Job (MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/30 days&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$172.80&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$264.96&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2&lt;br /&gt;
|$302.40&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$475.20&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$760.00&lt;br /&gt;
|-&lt;br /&gt;
|64 cores + 8 GPU&lt;br /&gt;
|64&lt;br /&gt;
|8&lt;br /&gt;
|$950.40&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for leasing node(s) for &amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; non &amp;lt;/span&amp;gt;-MAP users&lt;br /&gt;
|Job (non-MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/month&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$249.82&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$497.64&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2 &lt;br /&gt;
|$443.23&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$886.64&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40 &lt;br /&gt;
|8&lt;br /&gt;
|$1399.68&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
d.     &#039;&#039;&#039;Condo Ownership (COP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Condo describes a model in which a user (or group) owns a node/server managed by HPCC. Only full-time faculty can own a condo node. Condo nodes are fully integrated into the HPCC infrastructure. The owners pay only HPCC’s infrastructure support operational fee, which covers a proportional share of the licenses and materials needed for day-to-day operations. The fees are reviewed twice a year and are currently $0.003 per CPU hour and $0.02 per GPU hour. Condo owners can “borrow” (upon agreement) any node(s) from the condo stack free of charge and can also lease their own nodes (for a higher fee – see below) to non-condo users. The minimum lease term is 30 days. The fees collected from non-condo users offset the owner’s payments.&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Condo owners costs per year&lt;br /&gt;
|Type of condo node&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/year&lt;br /&gt;
|-&lt;br /&gt;
|Large hybrid SXM&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|$4518.92 &lt;br /&gt;
|-&lt;br /&gt;
|Small hybrid&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|$1540.54&lt;br /&gt;
|-&lt;br /&gt;
|Medium compute&lt;br /&gt;
|96&lt;br /&gt;
|0&lt;br /&gt;
|$2464.86&lt;br /&gt;
|-&lt;br /&gt;
|Large compute&lt;br /&gt;
|128&lt;br /&gt;
|0&lt;br /&gt;
|$3286.49&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Condo owners can lease their node(s) to other non-condo users. The rental period is unlimited, with a minimum length of 30 days. The table below shows the payments non-condo users make to the condo owners. These fees are accumulated in the owner’s account(s) and offset the owner’s obligations. A discount of 10% is applied to leases longer than 90 days.&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Type of nodes and monthly lease fees for condo nodes&lt;br /&gt;
!Type of node&lt;br /&gt;
!Renters cost/month&lt;br /&gt;
!Long term (90+ days) rent cost/month&lt;br /&gt;
!CPU/node&lt;br /&gt;
!CPU type&lt;br /&gt;
!GPU/node &lt;br /&gt;
!GPU type&lt;br /&gt;
!GPU interface&lt;br /&gt;
|-&lt;br /&gt;
|Large Hybrid&lt;br /&gt;
|$602.52&lt;br /&gt;
|$564.86&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.2 GHz&lt;br /&gt;
|8&lt;br /&gt;
|A100/80&lt;br /&gt;
|SXM&lt;br /&gt;
|-&lt;br /&gt;
|Small Hybrid&lt;br /&gt;
|$205.41&lt;br /&gt;
|$192.57&lt;br /&gt;
|48&lt;br /&gt;
|EPYC, 2.8 GHz&lt;br /&gt;
|2&lt;br /&gt;
|A40, A30, L40&lt;br /&gt;
|PCIe v4&lt;br /&gt;
|-&lt;br /&gt;
|Medium Non GPU&lt;br /&gt;
|$328.65&lt;br /&gt;
|$308.11&lt;br /&gt;
|96&lt;br /&gt;
|EPYC, 4.11 GHz&lt;br /&gt;
|0&lt;br /&gt;
|None&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|Large Non GPU&lt;br /&gt;
|$438.20&lt;br /&gt;
|$410.81&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.0 GHz&lt;br /&gt;
|0&lt;br /&gt;
|None &lt;br /&gt;
|NA&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Free time ===&lt;br /&gt;
Any new project &#039;&#039;&#039;from CUNY colleges/centers that participate in MAP-B or MAP-C&#039;&#039;&#039; is entitled to &#039;&#039;&#039;11520 free CPU hours and 1440 free GPU hours.&#039;&#039;&#039; Users under &#039;&#039;&#039;MAP-A&#039;&#039;&#039; are not entitled to free time. The free compute hours are intended to help establish a project and are therefore shared among all members of the project: they can be used either by the PI or by any number of the project&#039;s members. It is important to note that &amp;lt;u&amp;gt;free time is per project, not per user account, so any project can receive free time only once. External collaborators to CUNY are not normally eligible for free time.&amp;lt;/u&amp;gt; Additional hours beyond the free time are charged at MAP plan rates. &#039;&#039;&#039;&amp;lt;u&amp;gt;Please contact the CUNY-HPCC director for further details.&amp;lt;/u&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Support for research grants =&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;u&amp;gt;All proposals dated Jan 1st, 2026 (&amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; 01/01/26 &amp;lt;/span&amp;gt;) and later&amp;lt;/u&amp;gt;&#039;&#039;&#039; that require computational resources &#039;&#039;&#039;&amp;lt;u&amp;gt;must include a budget for cost recovery fees at CUNY-HPCC.&amp;lt;/u&amp;gt;&#039;&#039;&#039; For a project, the PI can choose to:&lt;br /&gt;
&lt;br /&gt;
* lease node(s). This is a useful option for well-defined projects and those with a high computational component requiring 100% availability of the computational resource.&lt;br /&gt;
* use &amp;quot;on-demand&amp;quot; resources. This is a flexible option well suited to experimental projects or exploring new areas of study. The drawback is that resources are shared among all users under the fair share policy, so immediate access to a resource cannot be guaranteed.&lt;br /&gt;
* participate in the CONDO tier. This is the most beneficial option in terms of availability of resources and level of support. It best fits the focused research of a group or groups (e.g. materials science).&lt;br /&gt;
&lt;br /&gt;
In all cases the PI can use the appropriate rates listed above to establish a correct budget for the proposal. The PI should &#039;&#039;&#039;&amp;lt;u&amp;gt;contact the Director of CUNY-HPCC, Dr. Alexander Tzanov&amp;lt;/u&amp;gt;&#039;&#039;&#039; (alexander.tzanov@csi.cuny.edu) to discuss the project&#039;s computational requirements, including optimal and most economical computational workflows, suitable hardware, shared or owned resources, CUNY-HPCC support options, and any other matter concerning a correct and optimal computational budget for the proposal.&lt;br /&gt;
&lt;br /&gt;
= Partitions and jobs =&lt;br /&gt;
The only way to submit job(s) to HPCC servers is through the SLURM batch system. Any job, regardless of its type (interactive, batch, serial, parallel, etc.), must be submitted via SLURM, which allocates the requested resources on the proper server and starts the job(s) according to a predefined, strict fair share policy. Computational resources (CPU cores, memory, GPU) are organized in &#039;&#039;&#039;partitions&#039;&#039;&#039;. Users are granted permission to use one or another partition and the corresponding QOS key. The table below shows the partitions and their limitations (in progress).&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Partition&lt;br /&gt;
!Max cores/job&lt;br /&gt;
!Max jobs/user&lt;br /&gt;
!Total cores/group&lt;br /&gt;
!Time limits&lt;br /&gt;
!Tier&lt;br /&gt;
!&lt;br /&gt;
!GPU types&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|partnsf&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|K20m, V100/16, A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partchem&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A100/80, A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partcfd&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partsym&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partasrc&lt;br /&gt;
|48&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabD&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|V100/16,A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabN&lt;br /&gt;
|384&lt;br /&gt;
|50&lt;br /&gt;
|384&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|None&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partphys&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|L40&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;partnsf&#039;&#039;&#039; is the main partition with assigned resources across all sub-servers. Users may submit sequential, thread parallel or distributed parallel jobs with or without GPU.&lt;br /&gt;
* &#039;&#039;&#039;partchem&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partphys&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partsym&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partasrc&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partmatlabD&#039;&#039;&#039; allows running MATLAB&#039;s Distributed Parallel Server across the main cluster.&lt;br /&gt;
* &#039;&#039;&#039;partmatlabN&#039;&#039;&#039; provides access to the large MATLAB node with 384 cores and 11 TB of shared memory. It is useful for running parallel MATLAB jobs with the Parallel Computing Toolbox.&lt;br /&gt;
* &#039;&#039;&#039;partdev&#039;&#039;&#039; is dedicated to development. All HPCC users have access to this partition, which has the assigned resources of one computational node with 16 cores, 64 GB of memory and 2 GPUs (K20m). This partition has a time limit of 4 hours.&lt;br /&gt;
&lt;br /&gt;
= Hours of Operation =&lt;br /&gt;
In order to maximize the use of resources, HPCC applies a “rolling” maintenance scheme across all systems. When downtime is needed, HPCC will notify all users a week or more in advance (unless an emergency situation occurs). The fourth Tuesday morning of the month, from 8:00 AM to 12 PM, is normally reserved (but not always used) for scheduled maintenance. Please plan accordingly. Unplanned maintenance to remedy system-related problems may be scheduled as needed outside the above-mentioned window. Reasonable attempts will be made to inform users running on the affected systems when such needs arise. Note that users are strongly encouraged to use checkpoints in their jobs.&lt;br /&gt;
&lt;br /&gt;
= User Support =&lt;br /&gt;
Users are strongly encouraged to read this Wiki carefully before submitting ticket(s) for help. In particular, the sections on compiling and running parallel programs, and the section on the SLURM batch queueing system will give you the essential knowledge needed to use the CUNY HPCC systems.  We have strived to maintain the most uniform user applications environment possible across the Center&#039;s systems to ease the transfer of applications&lt;br /&gt;
and run scripts among them.  &lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY&lt;br /&gt;
community in parallel programming techniques, HPC computing architecture, and the essentials of using our&lt;br /&gt;
systems. Please follow our mailings on the subject and feel free to inquire about such courses. We regularly schedule training visits and classes at the various CUNY campuses. Please let us know if such a training visit is of interest. In the past, topics have included an overview of parallel programming, GPU programming and architecture, using the evolutionary biology software at the HPC Center, the SLURM queueing system at the CUNY HPC Center, mixed GPU-MPI and OpenMP programming, etc. Staff have also presented guest lectures at formal classes throughout the CUNY campuses.&lt;br /&gt;
&lt;br /&gt;
If you have problems accessing your account and cannot login to the ticketing service, please send an email to:&lt;br /&gt;
&lt;br /&gt;
  [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu] &lt;br /&gt;
&lt;br /&gt;
= Warnings and modes of operation =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. hpchelp@csi.cuny.edu is for questions and account-help communication &#039;&#039;&#039;only&#039;&#039;&#039; and does not accept tickets unless the ticketing system is not operational. For tickets please use the ticketing system mentioned above. This ensures that the person on staff with the most appropriate skill set and job-related responsibility will respond to your questions. During the business week you should expect a response within 48 hours, quite often even a same-day response. During the weekend you may not get any response.&lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;&#039;E-mails to hpchelp@csi.cuny.edu must have a valid CUNY e-mail as the reply address.&#039;&#039;&#039; Messages originating from public mailers (Google, Hotmail, etc.) are filtered out.&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;&#039;Do not send questions to individual CUNY HPC Center staff members directly.&#039;&#039;&#039;  These will be returned to the sender with a polite request to submit a ticket or email the Helpline.  This applies to replies to initial questions as well.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff members are focused on providing high-quality support to the user community, but compared&lt;br /&gt;
to other HPC Centers of similar size &#039;&#039;&#039;our staff is extremely lean&#039;&#039;&#039;.  Please make full use of the tools that we have provided (especially&lt;br /&gt;
the Wiki), and feel free to offer suggestions for improved service.  We hope and expect your experience in using&lt;br /&gt;
our systems will be predictably good and productive.&lt;br /&gt;
&lt;br /&gt;
= User Manual =&lt;br /&gt;
The old version of the user manual provides PBS rather than SLURM batch scripts as examples. CUNY-HPCC currently uses the SLURM scheduler, so users must rely only on the updated brief SLURM manual distributed with new accounts, or ask CUNY-HPCC for a copy.&lt;/div&gt;</summary>
		<author><name>Alex</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=983</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=983"/>
		<updated>2026-03-14T15:15:32Z</updated>

		<summary type="html">&lt;p&gt;Alex: /* Partitions and jobs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
[[Image:hpcc-panorama3.png]]&lt;br /&gt;
&lt;br /&gt;
== Mission of CUNY-HPCC ==&lt;br /&gt;
The City University of New York (CUNY) High Performance Computing Center (HPCC) is a core research and education facility for the University. It is located on the campus of the College of Staten Island, 2800 Victory Boulevard, Staten Island, New York 10314. The core mission of CUNY-HPCC is to advance education and boost scientific research and discovery at the University by managing state-of-the-art computing infrastructure and by providing comprehensive research support services, including domain-specific expertise in various aspects of computationally intensive research. CUNY is a member of the Empire AI (EAI) consortium, so CUNY-HPCC is a stepping stone for CUNY researchers toward EAI advanced facilities.&lt;br /&gt;
&lt;br /&gt;
== Empire AI and CUNY-HPCC ==&lt;br /&gt;
The Empire AI consortium includes the &#039;&#039;&#039;CUNY Graduate Center&#039;&#039;&#039;, Columbia University, Cornell University, Icahn School of Medicine, New York University, Rochester Institute of Technology, Rensselaer Polytechnic Institute, the State University of New York, University at Buffalo, and University of Rochester. &#039;&#039;&#039;CUNY-HPCC provides support and maintains tickets for all CUNY users with allocation on EAI. In addition, CUNY-HPCC is a stepping stone for CUNY researchers since it operates (on a smaller scale) architectures (including nodes with Hopper) similar to EAI, including extended &amp;quot;Alpha&amp;quot; servers as well as the new &amp;quot;Beta&amp;quot; computer.&#039;&#039;&#039; The latter will consist of 288 B200 GPUs and recently added RTX 6000 Pro nodes. The expected cost for &#039;&#039;&#039;EAI is $0.50/unit (SU),&#039;&#039;&#039; which will provide CUNY PIs with a rate that is well below a typical AWS rate. One SU corresponds to one hour of H100 compute, and one hour of B200 compute corresponds to two SU. In comparison, the CUNY-HPCC recovery costs for public servers are &#039;&#039;&#039;$0.015 per CPU hour (1 unit) and $0.09 per GPU hour (6 units).&#039;&#039;&#039; See the section on HPCC access plans for further details.&lt;br /&gt;
&lt;br /&gt;
== CUNY-HPCC services ==&lt;br /&gt;
CUNY-HPCC provides a professionally maintained, modern computational environment and architectures, advanced storage, and fast interconnects. CUNY-HPCC:     &lt;br /&gt;
&lt;br /&gt;
:*Supports research computing at CUNY - for faculty, their collaborators at other universities, and their public and private sector partners, and CUNY students and research staff.&lt;br /&gt;
:*Provides state-of-the-art computing resources and comprehensive research support services including expertise and full support for users with allocation on EMPIRE-AI. &lt;br /&gt;
:*Creates opportunities for the CUNY research community to develop new partnerships with the government and private sectors.&lt;br /&gt;
:*Leverages HPCC capabilities to acquire additional research resources for its faculty and graduate students in existing and major new programs.&lt;br /&gt;
:*Maintains tickets for all CUNY users with allocation on EAI.&lt;br /&gt;
&lt;br /&gt;
== Organization of systems and data storage (architecture) ==&lt;br /&gt;
All user data and project data are kept on the Parallel File System Storage (PFSS), which is mounted only on the login node(s) of all servers. It holds both user directories and a specific partition called &#039;&#039;&#039;/scratch (see below).&#039;&#039;&#039; The main features of the &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition are&#039;&#039;&#039;: 1.&#039;&#039;&#039; It is mounted on all computational nodes and on all login nodes. &#039;&#039;&#039;2.&#039;&#039;&#039; It is fast. &#039;&#039;&#039;3.&#039;&#039;&#039; The &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition is temporary space; it is &#039;&#039;&#039;not a home directory&#039;&#039;&#039; for accounts, nor can it be used for long-term data preservation.  Users must use the &amp;quot;staging&amp;quot; procedure described below to ensure preservation of their data, codes, and parameter files. The figure below is a schematic of the environment.   &lt;br /&gt;
&lt;br /&gt;
Upon registering with HPCC, every user gets two directories:&lt;br /&gt;
&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – this is temporary workspace on the HPC systems. Currently scratch resides on the same file system as global/u.&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – space for “home directory”, i.e., storage space on the DSMS for program, scripts, and data&lt;br /&gt;
&lt;br /&gt;
:•	In some instances a user will also have use of disk space on the DSMS in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;projectid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; (IRods). &lt;br /&gt;
[[File:HPCC_structure.png|center|frameless|900x900px]]&lt;br /&gt;
The &#039;&#039;&#039;/global/u/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory has a quota (see below for details), while the &#039;&#039;&#039;/scratch/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory does not. However, the &#039;&#039;&#039;/scratch&#039;&#039;&#039; space is cleaned up following the rules described below. There are no guarantees of any kind that files in &#039;&#039;&#039;/scratch&#039;&#039;&#039; will be preserved during hardware crashes or cleanup.  Access to all HPCC resources is provided by a bastion host called &#039;&#039;&#039;&#039;&#039;chizen&#039;&#039;&#039;&#039;&#039;.  The Data Transfer Node, called &#039;&#039;&#039;Cea&#039;&#039;&#039;, allows file transfer from/to remote sites directly to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; or to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;.&lt;br /&gt;
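&lt;br /&gt;
The &amp;quot;staging&amp;quot; idea above can be illustrated with a minimal, hypothetical shell sketch (the directory names follow the layout above; the actual staging procedure distributed by HPCC should be followed, and &amp;lt;userid&amp;gt; and myproject are placeholders):&lt;br /&gt;
&lt;br /&gt;
  # illustrative sketch only -- replace &amp;lt;userid&amp;gt; and myproject with real names&lt;br /&gt;
  cp -r /global/u/&amp;lt;userid&amp;gt;/myproject /scratch/&amp;lt;userid&amp;gt;/myproject      # stage inputs to the fast scratch space&lt;br /&gt;
  cd /scratch/&amp;lt;userid&amp;gt;/myproject&lt;br /&gt;
  # ... run the job from here via SLURM (see &amp;quot;Partitions and jobs&amp;quot;) ...&lt;br /&gt;
  cp -r /scratch/&amp;lt;userid&amp;gt;/myproject/results /global/u/&amp;lt;userid&amp;gt;/myproject/   # copy results back for preservation&lt;br /&gt;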
&lt;br /&gt;
==HPC systems==&lt;br /&gt;
&lt;br /&gt;
The HPC Center operates a variety of architectures in order to support complex and demanding workflows.  All computational resources of different types are united into a single hybrid cluster called Arrow. The latter deploys symmetric multiprocessor (also referred to as SMP) nodes with and without GPUs, a distributed shared memory (NUMA) node, fat (large memory) nodes, and advanced SMP nodes with multiple GPUs. The number of GPUs per node varies between 2 and 8, as do the GPU interface and GPU family. Thus, the basic GPU nodes hold two Tesla K20m cards (attached through a PCIe interface), while the most advanced ones support eight Ampere A100 GPUs connected via an SXM interface.   &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Overview of Computational architectures&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SMP&#039;&#039;&#039; servers have several processors (working under a single operating system) which &amp;quot;share everything&amp;quot;: all CPU cores access a common memory block via a shared bus or data path. SMP servers support all combinations of memory vs. CPU (up to the limits of the particular computer). SMP servers are commonly used to run sequential or thread-parallel (e.g., OpenMP) jobs, and they may or may not have GPUs.  &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;cluster&#039;&#039;&#039; is defined as a single system comprising a set of servers interconnected with a high-performance network. Specific software coordinates programs on and across those servers in order to perform computationally intensive tasks. The most common cluster type consists of several identical SMP servers connected via a fast interconnect.  Each SMP member of the cluster is called a &#039;&#039;&#039;node&#039;&#039;&#039;. All nodes run independent copies of the same operating system (OS). Some or all of the nodes may incorporate GPUs.  &lt;br /&gt;
&lt;br /&gt;
Hybrid clusters combine nodes of different architectures. For instance, the main CUNY-HPCC machine is a hybrid cluster called &#039;&#039;&#039;Arrow&#039;&#039;&#039;.  Sixty-two (62) of its nodes are identical GPU-enabled SMP servers, each with 2 K20m GPUs; 3 are SMP but with extended memory (fat nodes); one node is a distributed shared memory node (NUMA, see below); and 2 are fat SMP servers especially designed to support 8 NVIDIA GPUs per node, connected via the SXM interface. In addition, HPCC operates the cluster &#039;&#039;&#039;Herbert&#039;&#039;&#039;, dedicated only to education. &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;distributed shared memory&#039;&#039;&#039; computer is a tightly coupled server in which the memory is physically distributed but logically unified as a single block. The system resembles an SMP, but the number of CPU cores and the amount of memory possible are far beyond the limitations of SMP.  Because the memory is distributed, the access times across the address space are non-uniform; thus, this architecture is called Non-Uniform Memory Access (NUMA) architecture.  Similarly to SMP, &#039;&#039;&#039;NUMA&#039;&#039;&#039; systems are typically used for applications such as data mining and decision support systems, in which processing can be parceled out to a number of processors that collectively work on common data. HPCC operates the &#039;&#039;&#039;NUMA&#039;&#039;&#039; node of Arrow, named &#039;&#039;&#039;Appel&#039;&#039;&#039;.  This node does not have GPUs. &lt;br /&gt;
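&lt;br /&gt;
As a quick, informal way to see which of these architectures a job has landed on (assuming the standard Linux tools below are available on the node), the following commands can be run inside a job:&lt;br /&gt;
&lt;br /&gt;
  lscpu                 # sockets, cores, and NUMA node count&lt;br /&gt;
  numactl --hardware    # NUMA memory layout (most relevant on Appel)&lt;br /&gt;
  nvidia-smi            # lists any GPUs present on the node&lt;br /&gt;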
&lt;br /&gt;
&#039;&#039; Infrastructure systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
o	The Master Head Node (&#039;&#039;&#039;MHN/Arrow)&#039;&#039;&#039; is a redundant login node from which all jobs on all servers start. This server is not directly accessible from outside the CSI campus. Note that the name of the main server and of its login nodes is the same, Arrow; thus users can access the Arrow login nodes using the name Arrow or MHN.  &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Chizen&#039;&#039;&#039; is a redundant gateway server which provides access to the protected HPCC domain.&lt;br /&gt;
&lt;br /&gt;
o	 &#039;&#039;&#039;Cea&#039;&#039;&#039; is a file transfer node allowing the transfer of files between users&#039; computers and the /scratch space or /global/u/&amp;lt;userid&amp;gt;. &#039;&#039;&#039;Cea&#039;&#039;&#039; is accessible directly (not only via &#039;&#039;&#039;Chizen&#039;&#039;&#039;), but allows only a limited set of shell commands.  &lt;br /&gt;
&lt;br /&gt;
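For illustration, a transfer from a user&#039;s own computer through &#039;&#039;&#039;Cea&#039;&#039;&#039; might look like the following sketch (the actual Cea host address is provided with the account and appears here only as a placeholder; scp is assumed to be among the permitted commands):&lt;br /&gt;
&lt;br /&gt;
  # run on your own computer; &amp;lt;cea-host&amp;gt; is a placeholder for the Cea address&lt;br /&gt;
  scp mydata.tar.gz &amp;lt;userid&amp;gt;@&amp;lt;cea-host&amp;gt;:/scratch/&amp;lt;userid&amp;gt;/        # upload to scratch&lt;br /&gt;
  scp &amp;lt;userid&amp;gt;@&amp;lt;cea-host&amp;gt;:/global/u/&amp;lt;userid&amp;gt;/results.tar.gz .    # download from the home space&lt;br /&gt;
&lt;br /&gt;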
&#039;&#039;&#039;Table 1&#039;&#039;&#039; below provides a quick summary of the attributes of each of the sub-clusters of the HPC Center&#039;s main system, called Arrow.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Master Head Node&lt;br /&gt;
!Sub System&lt;br /&gt;
!Tier&lt;br /&gt;
!Type&lt;br /&gt;
!Type of Jobs&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores&lt;br /&gt;
!GPUs&lt;br /&gt;
!Mem/node&lt;br /&gt;
!Mem/core&lt;br /&gt;
!Chip Type&lt;br /&gt;
!GPU Type and Interface&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;17&amp;quot; |&#039;&#039;&#039;&amp;lt;big&amp;gt;Arrow&amp;lt;/big&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Penzias&lt;br /&gt;
| rowspan=&amp;quot;10&amp;quot; |Advanced&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Hybrid Cluster&lt;br /&gt;
|Sequential &amp;amp; Parallel jobs w/wo GPU&lt;br /&gt;
|66&lt;br /&gt;
|16 &lt;br /&gt;
|2&lt;br /&gt;
|64 GB&lt;br /&gt;
|4 GB&lt;br /&gt;
|SB, EP 2.20 GHz&lt;br /&gt;
|K20m GPU, PCIe v2&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Sequential &amp;amp; Parallel jobs&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |1&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|1500 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|36&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|32 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Appel&lt;br /&gt;
|NUMA&lt;br /&gt;
|Massive Parallel, sequential, OpenMP&lt;br /&gt;
|1&lt;br /&gt;
|384&lt;br /&gt;
| -&lt;br /&gt;
|11 TB&lt;br /&gt;
|28 GB&lt;br /&gt;
|IB, 3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Cryo&lt;br /&gt;
|SMP&lt;br /&gt;
|Sequential and Parallel jobs, with GPU&lt;br /&gt;
|1&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|1500 GB&lt;br /&gt;
|37 GB&lt;br /&gt;
|SL, 2.40 GHz&lt;br /&gt;
|V100 (32GB) GPU, SXM&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Blue Moon&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Hybrid Cluster&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Sequential and Parallel jobs w/wo GPU&lt;br /&gt;
|24&lt;br /&gt;
|32&lt;br /&gt;
| -&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |192 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |6 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SL, 2.10 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|32 &lt;br /&gt;
|2&lt;br /&gt;
|V100(16GB) GPU, PCIe&lt;br /&gt;
|-&lt;br /&gt;
|Karle&lt;br /&gt;
|SMP&lt;br /&gt;
|Visualization, MATLAB/Mathematica&lt;br /&gt;
|1&lt;br /&gt;
|36*&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Chizen&lt;br /&gt;
|Gateway&lt;br /&gt;
|No jobs allowed&lt;br /&gt;
| colspan=&amp;quot;7&amp;quot; | -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CFD&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot; |Parallel, Seq, OpenMP &lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|768 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 4.8 GHz&lt;br /&gt;
|A40, PCIe, v4 &lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |PHYS&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|640 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4 GHz&lt;br /&gt;
|L40, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CHEM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.0 GHz&lt;br /&gt;
|A100/40, SXM&lt;br /&gt;
|-&lt;br /&gt;
|ASRC&lt;br /&gt;
|Condo&lt;br /&gt;
|SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|}&lt;br /&gt;
Note: SB = Intel(R) Sandy Bridge, HL = Intel(R) Haswell, IB = Intel(R) Ivy Bridge, SL = Intel(R) Xeon(R) Gold, ER = AMD(R) EPYC Rome, EM = AMD(R) EPYC Milan, EG = AMD(R) EPYC Genoa   &lt;br /&gt;
&lt;br /&gt;
= Recovery of  operational costs =&lt;br /&gt;
&lt;br /&gt;
=== Compute on public resources ===&lt;br /&gt;
CUNY-HPCC is a not-for-profit core research facility at CUNY. Our mission is to support all types of research that require advanced computational resources. CUNY-HPCC operations are not for profit and are NOT directly or indirectly sponsored by CUNY or the College of Staten Island (CSI). Consequently, CUNY-HPCC applies a cost recovery model recapturing only &#039;&#039;&#039;&amp;lt;u&amp;gt;operational costs, with no profit for HPCC&amp;lt;/u&amp;gt;&#039;&#039;&#039;. The recovered costs are calculated using actual documented operational expenses and are break-even for all CUNY users. The methodology follows the CUNY-RF-approved methodology used in other CUNY research facilities. The costs are reviewed, and consequently updated, twice a year. The cost recovery charging schema is based on the &#039;&#039;&#039;&amp;lt;u&amp;gt;unit-hour&amp;lt;/u&amp;gt;&#039;&#039;&#039;. A unit can be either a CPU unit or a GPU unit. The definitions of these are given in the table below:&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Definitions of unit-hour&lt;br /&gt;
!Type of resource&lt;br /&gt;
!Unit-hour &lt;br /&gt;
!For V100, A30, A40 or L40&lt;br /&gt;
!For A100&lt;br /&gt;
|-&lt;br /&gt;
|CPU unit &lt;br /&gt;
|1 cpu core/hour&lt;br /&gt;
| --&lt;br /&gt;
| --&lt;br /&gt;
|-&lt;br /&gt;
|GPU unit &lt;br /&gt;
|(4 cpu cores + 1 GPU thread )/hour&lt;br /&gt;
|4 cpu cores + 1 GPU&lt;br /&gt;
|4 cpu cores and 1/7 A100 &lt;br /&gt;
|}&lt;br /&gt;
The cost recovery model for public (non-condominium) servers offers the following options:&lt;br /&gt;
&lt;br /&gt;
# Minimal Access Plan (MAP)&lt;br /&gt;
# Compute on Demand (CODP)&lt;br /&gt;
# Lease a node(s)&lt;br /&gt;
&lt;br /&gt;
Colleges may participate in any of the Minimal Access Plan tiers. The tiered pricing structure is as follows:&lt;br /&gt;
&lt;br /&gt;
- Tier A: $5,000 per year&lt;br /&gt;
&lt;br /&gt;
- Tier B: $10,000 per year&lt;br /&gt;
&lt;br /&gt;
- Tier C: $25,000 per year&lt;br /&gt;
&lt;br /&gt;
Within each tier, the cost is &#039;&#039;&#039;$0.015 per CPU hour and $0.09 per GPU hour.&#039;&#039;&#039; It is important to note that the Minimum Access Plan (MAP) tiers A, B, and C are &#039;&#039;&#039;not all-inclusive.&#039;&#039;&#039; Therefore, even if a college pays for a higher tier, this does not guarantee unlimited use by all college employees, faculty, and students throughout the year.&lt;br /&gt;
&lt;br /&gt;
# The MAP plan is indirectly linked to the number of hours used. Consequently, the definition of “up to 12 users for B tier” should not be interpreted as “all users up to 12 per college receive unlimited access to HPCC resources.”&lt;br /&gt;
# The number of users per tier is determined by statistical analysis of resource usage and statistics for the average duration of a job across all CUNY institutions. This means that if the number of users from a college exceeds the number of hours encoded in the MAP fee, the additional hours will be charged at the preferred rate of $0.015 per CPU hour and $0.09 per GPU thread hour.&lt;br /&gt;
# Furthermore, the &amp;lt;u&amp;gt;MAP fee may fully cover the expenses for individuals from a given college for a year,&amp;lt;/u&amp;gt; but it may also not. This depends on the actual usage and type of resources, as well as the number of additional individuals from the same college who appear during the year. It is crucial to understand that allocation on HPCC is not restricted; our focus is solely on the completion of tasks. &lt;br /&gt;
# The free 11,520 CPU hours and 1,440 GPU hours are allocated per PI account and project and are available only for MAP-B and MAP-C (see below). These free hours are intended to facilitate project establishment. Consequently, the PI can request that a member or members of the group use the time and explore new research opportunities. For instance, upon creating an account, PI X will receive the free hours; if the PI then hires graduate students, they may share these free hours if they work on the same project.&lt;br /&gt;
# It is important to note that each GPU requires at least four CPU cores to operate. Therefore, if a user requests one GPU thread, it equates to one GPU thread plus four CPU cores, which is equivalent to $0.15 per unit-hour for that unit (units are explained above; see the worked example below). Note that not all GPUs support virtualization, so a unit may include the whole GPU, depending on the GPU type used.  &lt;br /&gt;
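&lt;br /&gt;
As a worked example of the GPU unit-hour at the MAP rates quoted above:&lt;br /&gt;
&lt;br /&gt;
  4 CPU cores x $0.015/hour + 1 GPU x $0.09/hour = $0.06 + $0.09 = $0.15 per GPU unit-hour&lt;br /&gt;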
&lt;br /&gt;
=== Compute on demand (CODP) ===&lt;br /&gt;
Users from colleges that do not participate in MAP (A,B,C) are charged &#039;&#039;&#039;$0.018 per CPU hour and $0.11 per GPU hour.&#039;&#039;&#039; There is no free time associated with CODP.  &lt;br /&gt;
&lt;br /&gt;
=== Lease a public node(s) ===&lt;br /&gt;
Users may lease node(s) for a project. That ensures 100% access with no time or job limitations on the leased resources. The minimum lease time is 30 days (one month). Longer leases (more than 90 days) receive a 10% discount. &#039;&#039;&#039;MAP users&#039;&#039;&#039; are charged between $172 and $950 per month (see below), depending on the type of node. &#039;&#039;&#039;Non-MAP&#039;&#039;&#039; users are charged between $249.58 and $1,399 per month, depending on the type of node. Please see below for details and examples. &lt;br /&gt;
&lt;br /&gt;
=== Condo resources ===&lt;br /&gt;
Condo users pay only for infrastructure support.  The annual fee depends on the type of node and ranges from $1,540 to $4,520 per year. Please see the table below for details. &lt;br /&gt;
&lt;br /&gt;
=== Storage ===&lt;br /&gt;
Storage costs are $60 per TB per year, backup costs are $45 per TB per year, and archive costs are $35 per TB per year. The first 50 GB of scratch storage are free. Prices are calculated at the end of each month.&lt;br /&gt;
&lt;br /&gt;
# Additionally, note that &#039;&#039;&#039;file transfers from/to HPCC are free&#039;&#039;&#039;, but it is important to consider the CUNY network speed, which is significantly slower than modern standards.  This will impact the time required to download large data sets. For large data sets, HPCC uses and recommends secure parallel transfer via Globus.&lt;br /&gt;
# All services associated with data and storage provided by HPCC are free for CUNY users.&lt;br /&gt;
&lt;br /&gt;
=== HPCC access plans  ===&lt;br /&gt;
a.     &#039;&#039;&#039;Minimum access (MAP):&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Minimum access is designed to provide broad support for research activities in any college, to promote collaboration between colleges, to help establish a new research project, and/or to serve as a testbed for new studies. MAP accounts operate under a strict fair-share policy, so the actual waiting time for a job in a queue depends on the resources used by that account in previous cycles. In addition, all jobs have strict time limitations; therefore, long jobs must use checkpoints.&lt;br /&gt;
&lt;br /&gt;
The MAP has 3 tiers:&lt;br /&gt;
&lt;br /&gt;
·     A: The basic tier fee is $5,000 per year. It is designed to provide support for users from colleges with a low level of research activity. The fee covers infrastructure expenses associated with 1-2 users from these colleges. &lt;br /&gt;
&lt;br /&gt;
·     B: The medium tier fee is $15,000 per year. The fee covers infrastructure expenses of up to 12 users from these colleges. In addition, every account under the medium tier gets 11,520 free CPU hours and 1,440 free GPU hours upon opening. &lt;br /&gt;
&lt;br /&gt;
·      C: The advanced tier fee is $25,000 per year. The fee covers infrastructure expenses for all users from these colleges. In addition, every new account from this tier gets 11,520 free CPU hours and 1,440 free GPU hours upon opening.  &lt;br /&gt;
&lt;br /&gt;
MAP users are charged per CPU/GPU hour at the low rate of &#039;&#039;&#039;&amp;lt;u&amp;gt;$0.015 per CPU hour and $0.09 per GPU hour&amp;lt;/u&amp;gt;.&#039;&#039;&#039;   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for MAP users example&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.015/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.24/hour&lt;br /&gt;
|-&lt;br /&gt;
| 4 cores + 1 GPU &lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.15/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.33/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2  &lt;br /&gt;
|$0.42/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2  &lt;br /&gt;
|$0.66/hour&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$1.32/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
b.     &#039;&#039;&#039;Computing on demand (CODP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The computing on demand plan (CODP) is open to all users from all CUNY colleges that do not participate in the MAP plan but want to use the HPCC resources. CODP accounts operate under a strict fair-share policy, so the actual waiting time for a job in a queue depends on the resources previously used. In addition, all jobs have time limitations, so long jobs must use checkpoints. Users in CODP are charged for the time (CPU and GPU) per hour. The current rates are &#039;&#039;&#039;$0.018 per CPU hour and $0.11 per GPU hour.&#039;&#039;&#039;  Unlike MAP, new CODP accounts do not come with free time. Invoices are generated and sent to users (PIs only) at the end of each month.  The examples in the following table explain the fee structure:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for CODP plan&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.018/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.288/hour&lt;br /&gt;
|-&lt;br /&gt;
|4 cores + 1 GPU&lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.293/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.334/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 1 GPU&lt;br /&gt;
|32&lt;br /&gt;
|1  &lt;br /&gt;
|$0.666/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU &lt;br /&gt;
|32&lt;br /&gt;
|2&lt;br /&gt;
|$0.756/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
c.  &#039;&#039;&#039;Leasing public node(s) (LNP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The lease-a-node plan allows users to lease node(s) for the duration of a project. The minimum lease time is 30 days (one month), but leases of any length are possible. A 10% discount is given to users whose lease is longer than 90 days. Discounts cannot be combined.  Unlike MAP and CODP, LNP users do not compete for resources and have full access to the leased resources 24/7.   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for lease node(s) for MAP users&lt;br /&gt;
|Job (MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/30 days&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$172.80&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$264.96&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2  &lt;br /&gt;
|$302.40&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$475.20&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$760.0&lt;br /&gt;
|-&lt;br /&gt;
|64 cores + 8 GPU&lt;br /&gt;
|64&lt;br /&gt;
|8&lt;br /&gt;
|$950.40&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for lease a node(s) for &amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; non &amp;lt;/span&amp;gt;-MAP users&lt;br /&gt;
|Job (non-MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/month&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$249.82&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$497.64&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2 &lt;br /&gt;
|$443.23&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$886.64&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40 &lt;br /&gt;
|8&lt;br /&gt;
|$1399.68&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
d.     &#039;&#039;&#039;Condo Ownership (COP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Condo describes a model in which user(s) own a node/server managed by HPCC. Only full-time faculty can own a condo node. Condo nodes are fully integrated into the HPCC infrastructure. The owners pay only HPCC&#039;s infrastructure support operational fee, which includes only a proportional part of the licenses and materials needed for day-to-day operations. The fees are reviewed twice a year and currently are $0.003 per CPU hour and $0.02 per GPU hour. Condo owners can “borrow” (upon agreement), free of charge, any node(s) from the condo stack and can also lease (for a higher fee – see below) their own nodes to non-condo users. The minimum lease time is 30 days. The fees collected from non-condo users offset the payments of the owner.   &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Condo owners costs per year&lt;br /&gt;
|Type of condo node&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/year&lt;br /&gt;
|-&lt;br /&gt;
|Large hybrid SXM&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|$4518.92 &lt;br /&gt;
|-&lt;br /&gt;
|Small hybrid&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|$1540.54&lt;br /&gt;
|-&lt;br /&gt;
|Medium compute&lt;br /&gt;
|96&lt;br /&gt;
|0&lt;br /&gt;
|$2464.86&lt;br /&gt;
|-&lt;br /&gt;
|Large compute&lt;br /&gt;
|128&lt;br /&gt;
|0&lt;br /&gt;
|$3286.49&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Condo owners can lease their node(s) to other non-condo users. The leasing period is unlimited, with a minimum length of 30 days. The table below shows the payments that non-condo users make to the condo owners. These fees are accumulated in the owner&#039;s account(s) and offset the owner&#039;s fees. A discount of 10% is applied to leases longer than 90 days.    &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Type of nodes and monthly lease fees for condo nodes&lt;br /&gt;
!Type of node&lt;br /&gt;
!Renters cost/month&lt;br /&gt;
!Long term (90+ days) rent cost/month&lt;br /&gt;
!CPU/node&lt;br /&gt;
!CPU type&lt;br /&gt;
!GPU/node &lt;br /&gt;
!GPU type&lt;br /&gt;
!GPU interface&lt;br /&gt;
|-&lt;br /&gt;
|Large Hybrid&lt;br /&gt;
|$602.52&lt;br /&gt;
|$564.86&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.2 GHz&lt;br /&gt;
|8&lt;br /&gt;
|A100/80&lt;br /&gt;
|SXM&lt;br /&gt;
|-&lt;br /&gt;
|Small Hybrid&lt;br /&gt;
|$205.41&lt;br /&gt;
|$192.57&lt;br /&gt;
|48&lt;br /&gt;
|EPYC, 2.8 GHz&lt;br /&gt;
|2&lt;br /&gt;
|A40, A30, L40&lt;br /&gt;
|PCIe v4&lt;br /&gt;
|-&lt;br /&gt;
|Medium Non GPU&lt;br /&gt;
|$328.65&lt;br /&gt;
|$308.11&lt;br /&gt;
|96&lt;br /&gt;
|EPYC, 4.11GHz&lt;br /&gt;
|0&lt;br /&gt;
|None&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|Large Non GPU&lt;br /&gt;
|$438.20&lt;br /&gt;
|$410.81&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.0 GHz&lt;br /&gt;
|0&lt;br /&gt;
|None &lt;br /&gt;
|NA&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Free time ===&lt;br /&gt;
Any new project &#039;&#039;&#039;from CUNY colleges/centers that participate in MAP-B or MAP-C&#039;&#039;&#039; is entitled to &#039;&#039;&#039;11,520 free CPU hours and 1,440 free GPU hours.&#039;&#039;&#039;  Users under &#039;&#039;&#039;MAP-A&#039;&#039;&#039; are not entitled to free time. The free compute hours are intended to help establish a project and are therefore shared among all members of the project; they can be used either by the PI or by any number of the project&#039;s members. It is important to note that &amp;lt;u&amp;gt;free time is per project, not per user account, so any project can have free time only once. Collaborators external to CUNY are not normally eligible for free time.&amp;lt;/u&amp;gt; Additional hours beyond the free time are charged at MAP plan rates. &#039;&#039;&#039;&amp;lt;u&amp;gt;Please contact the CUNY-HPCC director for further details.&amp;lt;/u&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Support for research grants =&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;u&amp;gt;All proposals dated January 1st, 2026 (&amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; 01/01/26 &amp;lt;/span&amp;gt;) or later&amp;lt;/u&amp;gt;&#039;&#039;&#039; that require computational resources &#039;&#039;&#039;&amp;lt;u&amp;gt;must include a budget for cost recovery fees at CUNY-HPCC.&amp;lt;/u&amp;gt;&#039;&#039;&#039;  For a project, the PI can choose to: &lt;br /&gt;
&lt;br /&gt;
* lease node(s). This is a useful option for well-defined projects and those with a high computational component requiring 100% availability of the computational resource. &lt;br /&gt;
* use &amp;quot;on-demand&amp;quot; resources. This is a flexible option, good for experimental projects or for exploring new areas of study. The downside is that resources are shared among all users under the fair-share policy, so immediate access to resources cannot be guaranteed. &lt;br /&gt;
* participate in the CONDO tier. This is the most beneficial option in terms of availability of resources and level of support. It best fits the focused research of a group or groups (e.g., materials science). &lt;br /&gt;
&lt;br /&gt;
In all cases the PI can use the appropriate rates listed above to establish a correct budget for the proposal.  The PI should &#039;&#039;&#039;&amp;lt;u&amp;gt;contact the Director of CUNY-HPCC, Dr. Alexander Tzanov&amp;lt;/u&amp;gt;&#039;&#039;&#039; (alexander.tzanov@csi.cuny.edu), to discuss the project&#039;s computational requirements, including optimal and most economical computational workflows, suitable hardware, shared or owned resources, CUNY-HPCC support options, and any other matter concerning a correct and optimal computational budget for the proposal.    &lt;br /&gt;
&lt;br /&gt;
= Partitions and jobs =&lt;br /&gt;
The only way to submit job(s) to the HPCC servers is through the SLURM batch system.  Every job, regardless of its type (interactive, batch, serial, parallel, etc.), must be submitted via SLURM. SLURM allocates the requested resources on the proper server and starts the job(s) according to a predefined, strict fair-share policy. Computational resources (CPU cores, memory, GPUs) are organized in &#039;&#039;&#039;partitions&#039;&#039;&#039;. Users are granted permission to use one or another partition and the corresponding QOS key.  The table below shows the partitions and their limits (in progress); a minimal job script sketch follows the partition list.&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Partition&lt;br /&gt;
!Max cores/job&lt;br /&gt;
!Max jobs/user&lt;br /&gt;
!Total cores/group&lt;br /&gt;
!Time limits&lt;br /&gt;
!Tier&lt;br /&gt;
!&lt;br /&gt;
!GPU types&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|partnsf&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|K20m, V100/16, A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partchem&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A100/80, A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partcfd&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partsym&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partasrc&lt;br /&gt;
|48&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabD&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|V100/16,A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabN&lt;br /&gt;
|384&lt;br /&gt;
|50&lt;br /&gt;
|384&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|None&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partphys&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|L40&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;partnsf&#039;&#039;&#039; is the main partition, with assigned resources across all sub-servers. Users may submit sequential, thread-parallel, or distributed parallel jobs with or without GPUs.&lt;br /&gt;
* &#039;&#039;&#039;partchem&#039;&#039;&#039;  is a CONDO partition.  &lt;br /&gt;
* &#039;&#039;&#039;partphys&#039;&#039;&#039;   is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partsym&#039;&#039;&#039;    is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partasrc&#039;&#039;&#039;   is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partmatlabD&#039;&#039;&#039; partition allows running MATLAB&#039;s Distributed Parallel Server across the main cluster. &lt;br /&gt;
* &#039;&#039;&#039;partmatlabN&#039;&#039;&#039; partition provides access to the large MATLAB node with 384 cores and 11 TB of shared memory. It is useful for running parallel MATLAB jobs with the Parallel Toolbox.&lt;br /&gt;
* &#039;&#039;&#039;partdev&#039;&#039;&#039; is dedicated to development. All HPCC users have access to this partition, with assigned resources of one computational node with 16 cores, 64 GB of memory, and 2 GPUs (K20m). This partition has a time limit of 4 hours.&lt;br /&gt;
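&lt;br /&gt;
The sketch below is a minimal, illustrative SLURM batch script for the public &#039;&#039;&#039;partnsf&#039;&#039;&#039; partition; the job name, QOS key, resource sizes, and application name are placeholders and must be replaced with the values assigned to your account:&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash&lt;br /&gt;
  #SBATCH --job-name=myjob           # illustrative job name&lt;br /&gt;
  #SBATCH --partition=partnsf        # main public partition (see table above)&lt;br /&gt;
  #SBATCH --qos=&amp;lt;your_qos_key&amp;gt;       # placeholder: QOS key assigned with your account&lt;br /&gt;
  #SBATCH --nodes=1&lt;br /&gt;
  #SBATCH --ntasks=4                 # 4 CPU cores&lt;br /&gt;
  #SBATCH --gres=gpu:1               # request one GPU (omit for CPU-only jobs)&lt;br /&gt;
  #SBATCH --time=24:00:00            # must stay within the partition time limit&lt;br /&gt;
  &lt;br /&gt;
  cd /scratch/&amp;lt;userid&amp;gt;/myjob        # run from the scratch space (see the staging sketch above)&lt;br /&gt;
  srun ./my_application&lt;br /&gt;
&lt;br /&gt;
Such a script would then be submitted from the Arrow login node with &amp;quot;sbatch myscript.slurm&amp;quot;.&lt;br /&gt;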
&lt;br /&gt;
== Hours of Operation ==&lt;br /&gt;
In order to maximize the use of resources, HPCC applies a “rolling” maintenance scheme across all systems. When downtime is needed, HPCC will notify all users a week or more in advance (unless an emergency situation occurs).  Typically, the fourth Tuesday morning of the month, from 8:00 AM to 12:00 PM, is reserved (but not always used) for scheduled maintenance.  Please plan accordingly.  Unplanned maintenance to remedy system-related problems may be scheduled as needed outside the above-mentioned days. Reasonable attempts will be made to inform users running on those systems when these needs arise. Note that users are strongly encouraged to use checkpoints in their jobs.&lt;br /&gt;
&lt;br /&gt;
== User Support ==&lt;br /&gt;
Users are strongly encouraged to read this Wiki carefully before submitting ticket(s) for help. In particular, the sections on compiling and running parallel programs, and the section on the SLURM batch queueing system will give you the essential knowledge needed to use the CUNY HPCC systems.  We have strived to maintain the most uniform user applications environment possible across the Center&#039;s systems to ease the transfer of applications&lt;br /&gt;
and run scripts among them.  &lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY&lt;br /&gt;
community in parallel programming techniques, HPC computing architecture, and the essentials of using our&lt;br /&gt;
systems. Please follow our mailings on the subject and feel free to inquire about such courses.  We regularly schedule training visits and classes at the various CUNY campuses.  Please let us know if such a training visit is of interest.  In the past, topics have included an overview of parallel programming, GPU programming and architecture, using the evolutionary biology software at the HPC Center, the SLURM queueing system at the CUNY HPC Center, mixed GPU-MPI and OpenMP programming, etc.  Staff have also presented guest lectures in formal classes throughout the CUNY campuses.    &lt;br /&gt;
&lt;br /&gt;
If you have problems accessing your account and cannot login to the ticketing service, please send an email to:&lt;br /&gt;
&lt;br /&gt;
  [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu] &lt;br /&gt;
&lt;br /&gt;
== Warnings and modes of operation ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. hpchelp@csi.cuny.edu is for questions and account-help communication &#039;&#039;&#039;only&#039;&#039;&#039; and does not accept tickets unless the ticketing system is not operational. For tickets, please use the ticketing system mentioned above. This ensures that the person on staff with the most appropriate skill set and job-related responsibility will respond to your questions. During the business week you should expect a response within 48 hours, quite often even a same-day response. During the weekend you may not get any response. &lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;&#039;E-mails to hpchelp@csi.cuny.edu must have a valid CUNY e-mail as the reply address.&#039;&#039;&#039; Messages originating from public mailers (Google, Hotmail, etc.) are filtered out.&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;&#039;Do not send questions to individual CUNY HPC Center staff members directly.&#039;&#039;&#039;  These will be returned to the sender with a polite request to submit a ticket or email the Helpline.  This applies to replies to initial questions as well.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff is focused on providing high-quality support to its user community, but compared&lt;br /&gt;
to other HPC centers of similar size &#039;&#039;&#039;our staff is extremely lean&#039;&#039;&#039;.  Please make full use of the tools that we have provided (especially&lt;br /&gt;
the Wiki), and feel free to offer suggestions for improved service.  We hope and expect that your experience in using&lt;br /&gt;
our systems will be predictably good and productive.&lt;br /&gt;
&lt;br /&gt;
== User Manual ==&lt;br /&gt;
&lt;br /&gt;
The old version of the user manual provides PBS, not SLURM, batch scripts as examples. CUNY-HPCC currently uses the SLURM scheduler, so users must rely only on the updated brief SLURM manual distributed with new accounts, or ask CUNY-HPCC for a copy.&lt;/div&gt;</summary>
		<author><name>Alex</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=982</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=982"/>
		<updated>2026-03-14T15:13:48Z</updated>

		<summary type="html">&lt;p&gt;Alex: /* Support for research grants */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
[[Image:hpcc-panorama3.png]]&lt;br /&gt;
&lt;br /&gt;
== Mission of CUNY-HPCC ==&lt;br /&gt;
The City University of New York (CUNY) High Performance Computing Center (HPCC) is a core research and education facility for the University. It is located on the campus of the College of Staten Island, 2800 Victory Boulevard, Staten Island, New York 10314. The core mission of CUNY-HPCC is to advance education and boost scientific research and discovery at the University by managing state-of-the-art computing infrastructure and by providing comprehensive research support services, including domain-specific expertise in various aspects of computationally intensive research. CUNY is a member of the Empire AI (EAI) consortium, so CUNY-HPCC is a stepping stone for CUNY researchers toward EAI advanced facilities.  &lt;br /&gt;
&lt;br /&gt;
== Empire AI and CUNY-HPCC ==&lt;br /&gt;
The Empire AI consortium includes the &#039;&#039;&#039;CUNY Graduate Center&#039;&#039;&#039;, Columbia University, Cornell University, Icahn School of Medicine, New York University, Rochester Institute of Technology, Rensselaer Polytechnic Institute, the State University of New York, University at Buffalo, and University of Rochester.  &#039;&#039;&#039;CUNY-HPCC provides support and maintains tickets for all CUNY users with allocation on EAI.  In addition, CUNY-HPCC is a stepping stone for CUNY researchers since it operates (on a smaller scale) architectures similar to EAI (including nodes with Hopper), such as the extended &amp;quot;Alpha&amp;quot; servers as well as the new &amp;quot;Beta&amp;quot; computer.&#039;&#039;&#039; The latter will consist of 288 B200 GPUs and recently added RTX 6000 Pro nodes. The expected cost for &#039;&#039;&#039;EAI is $0.50/unit (SU),&#039;&#039;&#039; which will provide CUNY PIs with a rate that is well below a typical AWS rate.  One SU corresponds to one hour of H100 compute, and one hour of B200 compute corresponds to two SU.  In comparison, the CUNY-HPCC recovery costs for public servers are &#039;&#039;&#039;$0.015 per CPU hour (1 unit) and $0.09 per GPU hour (6 units).&#039;&#039;&#039;  See the section on HPCC access plans for further details.    &lt;br /&gt;
&lt;br /&gt;
== CUNY-HPCC services ==&lt;br /&gt;
CUNY-HPCC provides professionally maintained, modern computational environment and architectures, advanced storage and fast interconnects. CUNY-HPCC:     &lt;br /&gt;
&lt;br /&gt;
:*Supports research computing at CUNY - for faculty, their collaborators at other universities, and their public and private sector partners, and CUNY students and research staff.&lt;br /&gt;
:*Provides state-of-the-art computing resources and comprehensive research support services including expertise and full support for users with allocation on EMPIRE-AI. &lt;br /&gt;
:*Creates opportunities for the CUNY research community to develop new partnerships with the government and private sectors.&lt;br /&gt;
:*Leverage the HPCC capabilities to acquire additional research resources for its faculty and graduate students in existing and major new programs.&lt;br /&gt;
:*Maintains tickets for all CUNY users with allocation on EAI.&lt;br /&gt;
&lt;br /&gt;
== Organization of systems and data storage (architecture) ==&lt;br /&gt;
All user data and project data are kept on the Parallel File System Storage (PFSS), which is mounted only on the login node(s) of all servers. It holds both user directories and a specific partition called &#039;&#039;&#039;/scratch (see below).&#039;&#039;&#039; The main features of the &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition are&#039;&#039;&#039;: 1.&#039;&#039;&#039; It is mounted on all computational nodes and on all login nodes. &#039;&#039;&#039;2.&#039;&#039;&#039; It is fast. &#039;&#039;&#039;3.&#039;&#039;&#039; The &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition is temporary space; it is &#039;&#039;&#039;not a home directory&#039;&#039;&#039; for accounts, nor can it be used for long-term data preservation.  Users must use the &amp;quot;staging&amp;quot; procedure described below to ensure preservation of their data, codes, and parameter files. The figure below is a schematic of the environment.   &lt;br /&gt;
&lt;br /&gt;
Upon  registering with HPCC every user will get 2 directories:&lt;br /&gt;
&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – this is temporary workspace on the HPC systems. Currently scratch resides on the same file system as global/u.&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – space for “home directory”, i.e., storage space on the DSMS for program, scripts, and data&lt;br /&gt;
&lt;br /&gt;
:•	In some instances a user will also have use of disk space on the DSMS in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;projectid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; (IRods). &lt;br /&gt;
[[File:HPCC_structure.png|center|frameless|900x900px]]&lt;br /&gt;
The &#039;&#039;&#039;/global/u/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory has a quota (see below for details), while the &#039;&#039;&#039;/scratch/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory does not. However, the &#039;&#039;&#039;/scratch&#039;&#039;&#039; space is cleaned up following the rules described below. There are no guarantees of any kind that files in &#039;&#039;&#039;/scratch&#039;&#039;&#039; will be preserved during hardware crashes or cleanup.  Access to all HPCC resources is provided by a bastion host called &#039;&#039;&#039;&#039;&#039;chizen&#039;&#039;&#039;&#039;&#039;.  The Data Transfer Node, called &#039;&#039;&#039;Cea&#039;&#039;&#039;, allows file transfer from/to remote sites directly to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; or to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
==HPC systems==&lt;br /&gt;
&lt;br /&gt;
The HPC Center operates a variety of architectures in order to support complex and demanding workflows.  All computational resources of different types are united into a single hybrid cluster called Arrow. The latter deploys symmetric multiprocessor (also referred to as SMP) nodes with and without GPUs, a distributed shared memory (NUMA) node, fat (large memory) nodes, and advanced SMP nodes with multiple GPUs. The number of GPUs per node varies between 2 and 8, as do the GPU interface and GPU family. Thus, the basic GPU nodes hold two Tesla K20m cards (attached through a PCIe interface), while the most advanced ones support eight Ampere A100 GPUs connected via an SXM interface.   &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Overview of Computational architectures&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SMP&#039;&#039;&#039; servers have several processors (working under a single operating system) which &amp;quot;share everything&amp;quot;.  Thus  all cpu-cores allocate a common memory block via shared bus or data path. SMP servers support all combinations of memory VS cpu (up to the limits of the particular computer). The SMP servers are commonly used to run sequential or thread parallel (e.g. OpenMP) jobs and they may have or may not have GPU.  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Cluster&#039;&#039;&#039; is defined as a single system comprising set of servers interconnected with high performance network. Specific software coordinates  programs on and/or across those in order to  perform computationally intensive tasks. The most common cluster type is the one that consists of several identical SMP servers connected via fast interconnect.  Each SMP member of the cluster is called a &#039;&#039;&#039;node&#039;&#039;&#039;. All nodes run independent copies of the same operating system (OS). Some or all of the nodes may incorporate GPU.  &lt;br /&gt;
&lt;br /&gt;
Hybrid clusters combine nodes of different architectures. For instance, the main CUNY-HPCC machine is a hybrid cluster called &#039;&#039;&#039;Arrow&#039;&#039;&#039;.  Sixty-two (62) of its nodes are identical GPU-enabled SMP servers, each with 2 K20m GPUs; 3 are SMP but with extended memory (fat nodes); one node is a distributed shared memory node (NUMA, see below); and 2 are fat SMP servers especially designed to support 8 NVIDIA GPUs per node, connected via the SXM interface. In addition, HPCC operates the cluster &#039;&#039;&#039;Herbert&#039;&#039;&#039;, dedicated only to education. &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;distributed shared memory&#039;&#039;&#039; computer is a tightly coupled server in which the memory is physically distributed but logically unified as a single block. The system resembles an SMP, but the number of CPU cores and the amount of memory possible are far beyond the limitations of SMP.  Because the memory is distributed, the access times across the address space are non-uniform; thus, this architecture is called Non-Uniform Memory Access (NUMA) architecture.  Similarly to SMP, &#039;&#039;&#039;NUMA&#039;&#039;&#039; systems are typically used for applications such as data mining and decision support systems, in which processing can be parceled out to a number of processors that collectively work on common data. HPCC operates the &#039;&#039;&#039;NUMA&#039;&#039;&#039; node of Arrow, named &#039;&#039;&#039;Appel&#039;&#039;&#039;.  This node does not have GPUs. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039; Infrastructure systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
o	Master Head Node (&#039;&#039;&#039;MHN/Arrow)&#039;&#039;&#039; is a redundant login node from which all jobs on all servers start. This server is not directly accessible from outside CSI campus. Note that name of main server and its login nodes are the same Arrow. Thus users can access the Arrow login nodes using name Arrow or MHN.  &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Chizen&#039;&#039;&#039; is a redundant gateway server which provides access to protected HPCC domain.&lt;br /&gt;
&lt;br /&gt;
o	 &#039;&#039;&#039;Cea&#039;&#039;&#039; is a file transfer node allowing the transfer of files between users&#039; computers and the /scratch space or /global/u/&amp;lt;userid&amp;gt;. &#039;&#039;&#039;Cea&#039;&#039;&#039; is accessible directly (not only via &#039;&#039;&#039;Chizen&#039;&#039;&#039;), but allows only a limited set of shell commands.  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Table 1&#039;&#039;&#039; below provides a quick summary of the attributes of each of the sub-clusters of the HPC Center&#039;s main system, called Arrow.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Master Head Node&lt;br /&gt;
!Sub System&lt;br /&gt;
!Tier&lt;br /&gt;
!Type&lt;br /&gt;
!Type of Jobs&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores&lt;br /&gt;
!GPUs&lt;br /&gt;
!Mem/node&lt;br /&gt;
!Mem/core&lt;br /&gt;
!Chip Type&lt;br /&gt;
!GPU Type and Interface&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;17&amp;quot; |&#039;&#039;&#039;&amp;lt;big&amp;gt;Arrow&amp;lt;/big&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Penzias&lt;br /&gt;
| rowspan=&amp;quot;10&amp;quot; |Advanced&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Hybrid Cluster&lt;br /&gt;
|Sequential &amp;amp; Parallel jobs w/wo GPU&lt;br /&gt;
|66&lt;br /&gt;
|16 &lt;br /&gt;
|2&lt;br /&gt;
|64 GB&lt;br /&gt;
|4 GB&lt;br /&gt;
|SB, EP 2.20 GHz&lt;br /&gt;
|K20m GPU, PCIe v2&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Sequential &amp;amp; Parallel jobs&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |1&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|1500 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|36&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|32 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Appel&lt;br /&gt;
|NUMA&lt;br /&gt;
|Massive Parallel, sequential, OpenMP&lt;br /&gt;
|1&lt;br /&gt;
|384&lt;br /&gt;
| -&lt;br /&gt;
|11 TB&lt;br /&gt;
|28 GB&lt;br /&gt;
|IB, 3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Cryo&lt;br /&gt;
|SMP&lt;br /&gt;
|Sequential and Parallel jobs, with GPU&lt;br /&gt;
|1&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|1500 GB&lt;br /&gt;
|37 GB&lt;br /&gt;
|SL, 2.40 GHz&lt;br /&gt;
|V100 (32GB) GPU, SXM&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Blue Moon&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Hybrid Cluster&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Sequential and Parallel jobs w/wo GPU&lt;br /&gt;
|24&lt;br /&gt;
|32&lt;br /&gt;
| -&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |192 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |6 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SL, 2.10 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|32 &lt;br /&gt;
|2&lt;br /&gt;
|V100(16GB) GPU, PCIe&lt;br /&gt;
|-&lt;br /&gt;
|Karle&lt;br /&gt;
|SMP&lt;br /&gt;
|Visualization, MATLAB/Mathematica&lt;br /&gt;
|1&lt;br /&gt;
|36*&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Chizen&lt;br /&gt;
|Gateway&lt;br /&gt;
|No jobs allowed&lt;br /&gt;
| colspan=&amp;quot;7&amp;quot; | -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CFD&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot; |Parallel, Seq, OpenMP &lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|768 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 4.8 GHz&lt;br /&gt;
|A40, PCIe, v4 &lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |PHYS&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|640 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4 GHz&lt;br /&gt;
|L40, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CHEM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.0 GHz&lt;br /&gt;
|A100/40, SXM&lt;br /&gt;
|-&lt;br /&gt;
|ASRC&lt;br /&gt;
|Condo&lt;br /&gt;
|SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|}&lt;br /&gt;
Note: SB = Intel(R) Sandy Bridge, HL = Intel(R) Haswell, IB = Intel(R) Ivy Bridge, SL = Intel(R) Xeon(R) Gold, ER = AMD(R) EPYC Rome, EM = AMD(R) EPYC Milan, EG = AMD(R) EPYC Genoa   &lt;br /&gt;
&lt;br /&gt;
= Recovery of  operational costs =&lt;br /&gt;
&lt;br /&gt;
=== Compute on public resources ===&lt;br /&gt;
CUNY-HPCC is a not-for-profit core research facility at CUNY. Our mission is to support all types of research that require advanced computational resources. CUNY-HPCC operations are not for profit and are NOT directly or indirectly sponsored by CUNY or the College of Staten Island (CSI). Consequently, CUNY-HPCC applies a cost recovery model recapturing only &#039;&#039;&#039;&amp;lt;u&amp;gt;operational costs, with no profit for HPCC&amp;lt;/u&amp;gt;&#039;&#039;&#039;. The recovered costs are calculated using actual documented operational expenses and are break-even for all CUNY users. The methodology follows the CUNY-RF-approved methodology used in other CUNY research facilities. The costs are reviewed, and consequently updated, twice a year. The cost recovery charging schema is based on the &#039;&#039;&#039;&amp;lt;u&amp;gt;unit-hour&amp;lt;/u&amp;gt;&#039;&#039;&#039;. A unit can be either a CPU unit or a GPU unit. The definitions of these are given in the table below:&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Definitions of unit-hour&lt;br /&gt;
!Type of resource&lt;br /&gt;
!Unit-hour &lt;br /&gt;
!For V100, A30, A40 or L40&lt;br /&gt;
!For A100&lt;br /&gt;
|-&lt;br /&gt;
|CPU unit &lt;br /&gt;
|1 cpu core/hour&lt;br /&gt;
| --&lt;br /&gt;
| --&lt;br /&gt;
|-&lt;br /&gt;
|GPU unit &lt;br /&gt;
|(4 CPU cores + 1 GPU thread)/hour&lt;br /&gt;
|4 cpu cores + 1 GPU&lt;br /&gt;
|4 cpu cores and 1/7 A100 &lt;br /&gt;
|}&lt;br /&gt;
The cost recovery model for public (non-condominium) servers offers the following options:&lt;br /&gt;
&lt;br /&gt;
# Minimal Access Plan (MAP)&lt;br /&gt;
# Compute on Demand (CODP)&lt;br /&gt;
# Lease a node(s)&lt;br /&gt;
&lt;br /&gt;
Colleges may participate in any of the Minimal Access Plan tiers. The tiered pricing structure is as follows:&lt;br /&gt;
&lt;br /&gt;
- Tier A: $5,000 per year&lt;br /&gt;
&lt;br /&gt;
- Tier B: $10,000 per year&lt;br /&gt;
&lt;br /&gt;
- Tier C: $25,000 per year&lt;br /&gt;
&lt;br /&gt;
Within each tier, the cost is &#039;&#039;&#039;$0.015 per CPU hour and $0.09 per GPU hour.&#039;&#039;&#039; It is important to note that the Minimum Access Plan (MAP) tiers A, B, and C are &#039;&#039;&#039;not all-inclusive.&#039;&#039;&#039; Therefore, even if a college pays for a higher tier, this does not guarantee unlimited use by all of that college&#039;s employees, faculty, and students throughout the year.&lt;br /&gt;
&lt;br /&gt;
# The MAP plan is indirectly linked to the number of hours used. Consequently, the definition of “up to 12 users for B tier” should not be interpreted as “all users up to 12 per college receive unlimited access to HPCC resources.”&lt;br /&gt;
# The number of users per tier is determined by statistical analysis of resource usage and of the average job duration across all CUNY institutions. This means that if usage by a college&#039;s users exceeds the number of hours covered by the MAP fee, the additional hours will be charged at the preferred rate of $0.015 per CPU hour and $0.09 per GPU hour.&lt;br /&gt;
# Furthermore, the &amp;lt;u&amp;gt;MAP fee may fully cover the expenses of individuals from a given college for a year,&amp;lt;/u&amp;gt; but it may not. This depends on the actual usage and type of resources, as well as on the number of additional individuals from the same college who appear during the year. It is crucial to understand that allocation on HPCC is not restricted; our focus is solely on the completion of tasks. &lt;br /&gt;
# The free 11,520 CPU hours and 1,440 GPU hours are allocated per PI account and project and are available only for MAP-B and MAP-C (see below). These free hours are intended to facilitate project establishment. Consequently, the PI can request that a member or members of a group use the time to explore new research opportunities. For instance, upon creating an account, PI X will receive the free hours. If the PI then hires one or more graduate students, they may share these free hours, provided they work on the same project.&lt;br /&gt;
# It is important to note that each GPU requires at least four CPU cores to operate. Therefore, if a user requests one GPU thread, it equates to one GPU thread plus four CPU cores, which is $0.15 per unit-hour for that unit (units are explained above). Note that not all GPUs support virtualization, so a GPU unit may include the whole GPU, depending on the GPU type. A minimal cost sketch is given right after this list.  &lt;br /&gt;
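&lt;br /&gt;
The MAP charges quoted above follow a simple linear rule: hourly cost = (CPU cores × $0.015) + (GPUs × $0.09). The short shell sketch below only illustrates that arithmetic for two of the example configurations listed later on this page; it is not an official HPCC tool, and the rates are simply the MAP rates quoted in this section.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Illustrative only: MAP hourly charge = cores x $0.015 + GPUs x $0.09&lt;br /&gt;
echo &amp;quot;4*0.015 + 1*0.09&amp;quot;  | bc -l   # prints .150, i.e. $0.15/hour for 4 cores + 1 GPU&lt;br /&gt;
echo &amp;quot;32*0.015 + 2*0.09&amp;quot; | bc -l   # prints .660, i.e. $0.66/hour for 32 cores + 2 GPU&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;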
&lt;br /&gt;
=== Compute on demand (CODP) ===&lt;br /&gt;
Users from colleges that do not participate in MAP (A,B,C) are charged &#039;&#039;&#039;$0.018 per CPU hour and $0.11 per GPU hour.&#039;&#039;&#039; There is no free time associated with CODP.  &lt;br /&gt;
&lt;br /&gt;
=== Lease a public node(s) ===&lt;br /&gt;
Users may lease one or more nodes for the duration of a project. This guarantees 100% access to the leased resource, with no time or job limitations. The minimum lease time is 30 days (one month). Longer leases (more than 90 days) receive a 10% discount. &#039;&#039;&#039;MAP users&#039;&#039;&#039; are charged between $172.80 and $950.40 per month (see below), depending on the type of node. &#039;&#039;&#039;Non-MAP&#039;&#039;&#039; users are charged between $249.82 and $1,399.68 per month, depending on the type of node. Please see below for details and examples. &lt;br /&gt;
&lt;br /&gt;
=== Condo resources ===&lt;br /&gt;
Condo users pay only for infrastructure support. The annual fee depends on the type of node and ranges from $1,540 to $4,520 per year. Please see the table below for details. &lt;br /&gt;
&lt;br /&gt;
=== Storage ===&lt;br /&gt;
Storage costs are $60 per TB per year, backup costs are $45 per TB per year, and archive costs are $35 per TB per year. For example, 2 TB of project storage with backup costs 2 × ($60 + $45) = $210 per year. The first 50 GB of scratch storage are free. Charges are calculated at the end of each month.&lt;br /&gt;
&lt;br /&gt;
# Additionally, note that &#039;&#039;&#039;file transfers from/to HPCC are free&#039;&#039;&#039;, but it is important to consider the CUNY network speed, which is significantly slower than modern standards. This will affect the time required to download large data sets. For large data sets, HPCC uses and recommends secure parallel transfer via Globus.&lt;br /&gt;
# All services associated with data and storage provided by HPCC are free for CUNY users.&lt;br /&gt;
&lt;br /&gt;
=== HPCC access plans  ===&lt;br /&gt;
a.     &#039;&#039;&#039;Minimum access (MAP):&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Minimum access is designed to provide wide support for research activities in any college, to promote collaboration between colleges, to help establish new research projects, and/or to serve as a testbed for new studies. MAP accounts operate under a strict fair-share policy, so the actual waiting time for a job in a queue depends on the resources used by that account in previous cycles. In addition, all jobs have strict time limitations. Therefore, long jobs must use checkpoints.&lt;br /&gt;
&lt;br /&gt;
The MAP has 3 tiers:&lt;br /&gt;
&lt;br /&gt;
·     A: The basic tier fee is $5,000 per year. It is designed to provide support for users from colleges with a low level of research activity. The fee covers infrastructure expenses associated with 1-2 users from these colleges. &lt;br /&gt;
&lt;br /&gt;
·     B: The medium tier fee is $15,000 per year. The fee covers infrastructure expenses for up to 12 users from these colleges. In addition, every account under the medium tier receives 11,520 free CPU hours and 1,440 free GPU hours upon opening. &lt;br /&gt;
&lt;br /&gt;
·      C: The advanced tier fee is $25,000 per year. The fee covers infrastructure expenses for all users from these colleges. In addition, every new account under this tier receives 11,520 free CPU hours and 1,440 free GPU hours upon opening.  &lt;br /&gt;
&lt;br /&gt;
MAP users are charged per CPU/GPU hour at the low rate of &#039;&#039;&#039;&amp;lt;u&amp;gt;$0.015 per CPU hour and $0.09 per GPU hour&amp;lt;/u&amp;gt;.&#039;&#039;&#039;   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for MAP users example&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.015/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.24/hour&lt;br /&gt;
|-&lt;br /&gt;
| 4 cores + 1 GPU &lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.15/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.33/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2  &lt;br /&gt;
|$0.42/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2  &lt;br /&gt;
|$0.66/hour&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$1.32/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
b.     &#039;&#039;&#039;Computing on demand (CODP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The Computing on Demand Plan (CODP) is open to users from all CUNY colleges that do not participate in the MAP plan but want to use HPCC resources. CODP accounts operate under a strict fair-share policy, so the actual waiting time for a job in a queue depends on the resources previously used. In addition, all jobs have time limitations, so long jobs must use checkpoints. CODP users are charged for the time (CPU and GPU) used per hour. The current rates are &#039;&#039;&#039;$0.018 per CPU hour and $0.11 per GPU hour.&#039;&#039;&#039; Unlike MAP, new CODP accounts do not come with free time. Invoices are generated and sent to users (PI only) at the end of each month. The examples in the following table illustrate the fee structure:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for CODP plan&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.018/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.288/hour&lt;br /&gt;
|-&lt;br /&gt;
|4 cores + 1 GPU&lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.293/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.334/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 1 GPU&lt;br /&gt;
|32&lt;br /&gt;
|1  &lt;br /&gt;
|$0.666/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU &lt;br /&gt;
|32&lt;br /&gt;
|2&lt;br /&gt;
|$0.756/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
c.  &#039;&#039;&#039;Leasing public node(s) (LNP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The Lease Node Plan (LNP) allows users to lease one or more nodes for the duration of a project. The minimum lease time is 30 days (one month), but leases of any length beyond that are possible. A 10% discount is given to users whose lease is longer than 90 days. Discounts cannot be combined. Unlike MAP and CODP, LNP users do not compete for resources and have full access to the leased resources 24/7.   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for lease node(s) for MAP users&lt;br /&gt;
|Job (MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/30 days&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$172.80&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$264.96&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2&lt;br /&gt;
|$302.40&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$475.20&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$760.00&lt;br /&gt;
|-&lt;br /&gt;
|64 cores + 8 GPU&lt;br /&gt;
|64&lt;br /&gt;
|8&lt;br /&gt;
|$950.40&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for lease a node(s) for &amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; non &amp;lt;/span&amp;gt;-MAP users&lt;br /&gt;
|Job (non-MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/month&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$249.82&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$497.64&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1 &lt;br /&gt;
|$443.23&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$886.64&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40 &lt;br /&gt;
|8&lt;br /&gt;
|$1399.68&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
d.     &#039;&#039;&#039;Condo Ownership (COP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Condo describes a model in which user(s) own a node/server managed by HPCC. Only full-time faculty can own a condo node. Condo nodes are fully integrated into the HPCC infrastructure. The owners pay only HPCC’s infrastructure-support operational fee, which covers only a proportional share of the licenses and materials needed for day-to-day operations. The fees are reviewed twice a year and are currently $0.003 per CPU hour and $0.02 per GPU hour. Condo owners can “borrow” (upon agreement), free of charge, any node(s) from the condo stack and can also lease (for a higher fee – see below) their own nodes to non-condo users. The minimum lease time is 30 days. The fees collected from non-condo users offset the owner’s payments.   &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Condo owners costs per year&lt;br /&gt;
|Type of condo node&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/year&lt;br /&gt;
|-&lt;br /&gt;
|Large hybrid SXM&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|$4518.92 &lt;br /&gt;
|-&lt;br /&gt;
|Small hybrid&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|$1540.54&lt;br /&gt;
|-&lt;br /&gt;
|Medium compute&lt;br /&gt;
|96&lt;br /&gt;
|0&lt;br /&gt;
|$2464.86&lt;br /&gt;
|-&lt;br /&gt;
|Large compute&lt;br /&gt;
|128&lt;br /&gt;
|0&lt;br /&gt;
|$3286.49&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Condo owners can lease their node(s) to other, non-condo users. The leasing period is unlimited, with a minimum length of 30 days. The table below shows the payments that non-condo users make to the condo owners. These fees are accumulated in the owners’ account(s) and offset the owners’ obligations. A 10% discount is applied to leases longer than 90 days.    &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Type of nodes and monthly lease fees for condo nodes&lt;br /&gt;
!Type of node&lt;br /&gt;
!Renters cost/month&lt;br /&gt;
!Long term (90+ days) rent cost/month&lt;br /&gt;
!CPU/node&lt;br /&gt;
!CPU type&lt;br /&gt;
!GPU/node &lt;br /&gt;
!GPU type&lt;br /&gt;
!GPU interface&lt;br /&gt;
|-&lt;br /&gt;
|Large Hybrid&lt;br /&gt;
|$602.52&lt;br /&gt;
|$564.86&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.2 GHz&lt;br /&gt;
|8&lt;br /&gt;
|A100/80&lt;br /&gt;
|SXM&lt;br /&gt;
|-&lt;br /&gt;
|Small Hybrid&lt;br /&gt;
|$205.41&lt;br /&gt;
|$192.57&lt;br /&gt;
|48&lt;br /&gt;
|EPYC, 2.8 GHz&lt;br /&gt;
|2&lt;br /&gt;
|A40, A30, L40&lt;br /&gt;
|PCIe v4&lt;br /&gt;
|-&lt;br /&gt;
|Medium Non GPU&lt;br /&gt;
|$328.65&lt;br /&gt;
|$308.11&lt;br /&gt;
|96&lt;br /&gt;
|EPYC, 4.11GHz&lt;br /&gt;
|0&lt;br /&gt;
|None&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|Large Non GPU&lt;br /&gt;
|$438.20&lt;br /&gt;
|$410.81&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.0 GHz&lt;br /&gt;
|0&lt;br /&gt;
|None &lt;br /&gt;
|NA&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Free time ===&lt;br /&gt;
Any new project &#039;&#039;&#039;from CUNY colleges/centers that participate in MAP-B or MAP-C&#039;&#039;&#039; is entitled to &#039;&#039;&#039;11,520 free CPU hours and 1,440 free GPU hours.&#039;&#039;&#039; Users under &#039;&#039;&#039;MAP-A&#039;&#039;&#039; are not entitled to free time. The free compute hours are intended to help establish a project and are therefore shared among all members of the project; they can be used either by the PI or by any number of the project&#039;s members. It is important to note that &amp;lt;u&amp;gt;free time is per project, not per user account, so any project can receive free time only once. Collaborators external to CUNY are not normally eligible for free time.&amp;lt;/u&amp;gt; Additional hours beyond the free time are charged at MAP plan rates. &#039;&#039;&#039;&amp;lt;u&amp;gt;Please contact the CUNY-HPCC director for further details.&amp;lt;/u&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Support for research grants =&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;u&amp;gt;All proposals dated January 1st, 2026 (&amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; 01/01/26 &amp;lt;/span&amp;gt;) or later&amp;lt;/u&amp;gt;&#039;&#039;&#039; that require computational resources &#039;&#039;&#039;&amp;lt;u&amp;gt;must include a budget for cost recovery fees at CUNY-HPCC.&amp;lt;/u&amp;gt;&#039;&#039;&#039; For a project, the PI can choose between: &lt;br /&gt;
&lt;br /&gt;
* lease node(s). This is a useful option for well-defined projects and for those with a high computational component requiring 100% availability of the computational resource. &lt;br /&gt;
* use &amp;quot;on-demand&amp;quot; resources. This is a flexible option, well suited to experimental projects or to exploring new areas of study. The drawback is that resources are shared among all users under the fair-share policy, so immediate access to a resource cannot be guaranteed. &lt;br /&gt;
* participate in the CONDO tier. This is the most beneficial option in terms of availability of resources and level of support. It best fits the focused research of a group or groups (e.g., materials science). &lt;br /&gt;
&lt;br /&gt;
In all cases, the PI can use the appropriate rates listed above to establish a correct budget for the proposal. The PI should &#039;&#039;&#039;&amp;lt;u&amp;gt;contact the Director of CUNY-HPCC, Dr. Alexander Tzanov&amp;lt;/u&amp;gt;&#039;&#039;&#039; (alexander.tzanov@csi.cuny.edu) to discuss the project&#039;s computational requirements, including optimal and most economical computational workflows, suitable hardware, shared or owned resources, CUNY-HPCC support options, and any other matter concerning a correct and optimal computational budget for the proposal.    &lt;br /&gt;
&lt;br /&gt;
== Partitions and jobs ==&lt;br /&gt;
The only way to submit job(s) to HPCC servers is through the SLURM batch system. Every job, regardless of its type (interactive, batch, serial, parallel, etc.), must be submitted via SLURM, which allocates the requested resources on the proper server and starts the job(s) according to a predefined, strict fair-share policy. Computational resources (CPU cores, memory, GPU) are organized into &#039;&#039;&#039;partitions&#039;&#039;&#039;. Users are granted permission to use one or another partition together with the corresponding QOS key. The table below shows the partitions and their limitations (in progress); a minimal example batch script is given after the partition list below.&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Partition&lt;br /&gt;
!Max cores/job&lt;br /&gt;
!Max jobs/user&lt;br /&gt;
!Total cores/group&lt;br /&gt;
!Time limits&lt;br /&gt;
!Tier&lt;br /&gt;
!&lt;br /&gt;
!GPU types&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|partnsf&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|K20m, V100/16, A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partchem&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A100/80, A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partcfd&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partsym&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partasrc&lt;br /&gt;
|48&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabD&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|V100/16,A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabN&lt;br /&gt;
|384&lt;br /&gt;
|50&lt;br /&gt;
|384&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|None&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partphys&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|L40&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;partnsf&#039;&#039;&#039; is the main partition with assigned resources across all sub-servers. Users may submit sequential, thread parallel or distributed parallel jobs with or without GPU.&lt;br /&gt;
* &#039;&#039;&#039;partchem&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partphys&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partsym&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partasrc&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partmatlabD&#039;&#039;&#039; partition allows running MATLAB&#039;s Distributed Parallel Server across the main cluster. &lt;br /&gt;
* &#039;&#039;&#039;partmatlabN&#039;&#039;&#039; partition provides access to the large MATLAB node with 384 cores and 11 TB of shared memory. It is useful for running parallel MATLAB jobs with the Parallel Computing Toolbox.&lt;br /&gt;
* &#039;&#039;&#039;partdev&#039;&#039;&#039; is dedicated to development. All HPCC users have access to this partition, with assigned resources of one computational node with 16 cores, 64 GB of memory and 2 GPUs (K20m). This partition has a time limit of 4 hours.&lt;br /&gt;
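&lt;br /&gt;
A minimal SLURM batch script sketch is shown below. It is only an illustration: the partition is taken from the table above, but the QOS key, resource sizes and the executable name (my_program) are placeholders that must be replaced with the values assigned to your account and project.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=myjob&lt;br /&gt;
#SBATCH --partition=partnsf        # a partition your account has been granted&lt;br /&gt;
#SBATCH --qos=normal               # placeholder: use the QOS key assigned to you&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=4&lt;br /&gt;
#SBATCH --gres=gpu:1               # request one GPU; omit for CPU-only jobs&lt;br /&gt;
#SBATCH --time=01:00:00            # keep well under the partition time limit&lt;br /&gt;
#SBATCH --mem=8G&lt;br /&gt;
&lt;br /&gt;
# SLURM starts the job in the submission directory; run the application with srun.&lt;br /&gt;
srun ./my_program                  # placeholder executable&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Such a script would be submitted with &amp;lt;code&amp;gt;sbatch myjob.sh&amp;lt;/code&amp;gt; and monitored with &amp;lt;code&amp;gt;squeue -u $USER&amp;lt;/code&amp;gt;.&lt;br /&gt;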
&lt;br /&gt;
== Hours of Operation ==&lt;br /&gt;
To maximize the use of resources, HPCC applies a “rolling” maintenance scheme across all systems. When downtime is needed, HPCC will notify all users a week or more in advance (unless an emergency occurs). Typically, the morning of the fourth Tuesday of each month, from 8:00 AM to 12:00 PM, is reserved (but not always used) for scheduled maintenance. Please plan accordingly. Unplanned maintenance to remedy system-related problems may be scheduled as needed outside the above-mentioned window. Reasonable attempts will be made to inform users running on the affected systems when such needs arise. Note that users are strongly encouraged to use checkpoints in their jobs.&lt;br /&gt;
&lt;br /&gt;
== User Support ==&lt;br /&gt;
Users are strongly encouraged to read this Wiki carefully before submitting ticket(s) for help. In particular, the sections on compiling and running parallel programs, and the section on the SLURM batch queueing system will give you the essential knowledge needed to use the CUNY HPCC systems.  We have strived to maintain the most uniform user applications environment possible across the Center&#039;s systems to ease the transfer of applications&lt;br /&gt;
and run scripts among them.  &lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY&lt;br /&gt;
community in parallel programming techniques, HPC computing architecture, and the essentials of using our&lt;br /&gt;
systems. Please follow our mailings on the subject and feel free to inquire about such courses. We regularly schedule training visits and classes at the various CUNY campuses. Please let us know if such a training visit is of interest. In the past, topics have included an overview of parallel programming, GPU programming and architecture, using the evolutionary biology software at the HPC Center, the SLURM queueing system at the CUNY HPC Center, mixed GPU-MPI and OpenMP programming, etc. Staff have also presented guest lectures in formal classes throughout the CUNY campuses.    &lt;br /&gt;
&lt;br /&gt;
If you have problems accessing your account and cannot log in to the ticketing service, please send an email to:&lt;br /&gt;
&lt;br /&gt;
  [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu] &lt;br /&gt;
&lt;br /&gt;
== Warnings and modes of operation ==&lt;br /&gt;
&lt;br /&gt;
1. hpchelp@csi.cuny.edu is for questions and account-help communication &#039;&#039;&#039;only&#039;&#039;&#039; and does not accept tickets unless the ticketing system is not operational. For tickets, please use the ticketing system mentioned above. This ensures that the staff member with the most appropriate skill set and job-related responsibility will respond to your questions. During the business week you should expect a response within 48 hours, often even a same-day response. During the weekend you may not get any response. &lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;&#039;E-mails to hpchelp@csi.cuny.edu must have a valid CUNY e-mail as the reply address.&#039;&#039;&#039; Messages originating from public mailers (Gmail, Hotmail, etc.) are filtered out.&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;&#039;Do not send questions to individual CUNY HPC Center staff members directly.&#039;&#039;&#039;  These will be returned to the sender with a polite request to submit a ticket or email the Helpline.  This applies to replies to initial questions as well.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff is focused on providing high-quality support to its user community, but compared&lt;br /&gt;
to other HPC centers of similar size &#039;&#039;&#039;our staff is extremely lean&#039;&#039;&#039;. Please make full use of the tools that we have provided (especially&lt;br /&gt;
the Wiki), and feel free to offer suggestions for improved service. We hope and expect that your experience in using&lt;br /&gt;
our systems will be predictably good and productive.&lt;br /&gt;
&lt;br /&gt;
== User Manual ==&lt;br /&gt;
&lt;br /&gt;
The old version of the user manual provides PBS, not SLURM, batch scripts as examples. CUNY-HPCC currently uses the SLURM scheduler, so users must check and use only the updated brief SLURM manual distributed with new accounts, or ask CUNY-HPCC for a copy.&lt;/div&gt;</summary>
		<author><name>Alex</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=981</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=981"/>
		<updated>2026-03-14T15:13:13Z</updated>

		<summary type="html">&lt;p&gt;Alex: /* Recovery of  operational costs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
[[Image:hpcc-panorama3.png]]&lt;br /&gt;
&lt;br /&gt;
== Mission of CUNY-HPCC ==&lt;br /&gt;
The City University of New York (CUNY) High Performance Computing Center (HPCC) is a core research and education facility for the University. It is located at campus of the college of College of Staten Island, 2800 Victory Boulevard, Staten Island, New York 10314. The core mission of CUNY-HPCC is to advance education and boost scientific research and discovery at the University by managing state-of-the-art computing infrastructure and by providing comprehensive research support services including domain-specific expertise in various aspects of computationally intensive research. CUNY is a member of Empire AI (EAI) consortium, so CUNY-HPCC is a stepping stone for CUNY researchers toward EAI advanced facilities.  &lt;br /&gt;
&lt;br /&gt;
== Empire AI and CUNY-HPCC ==&lt;br /&gt;
The Empire AI consortium includes the &#039;&#039;&#039;CUNY Graduate Center&#039;&#039;&#039;, Columbia University, Cornell University, Icahn School of Medicine, New York University, Rochester Institute of Technology, Rensselaer Polytechnic Institute, the State University of New York, University at Buffalo, and University of Rochester.  &#039;&#039;&#039;CUNY-HPCC provides supports and maintains tickets for all CUNY users with allocation on EAI.  In addition CUNY-HPCC is a stepping stone for CUNY researchers since it operates (on smaller scale) architectures (Including nodes with Hopper) similar to EAI, including extended &amp;quot;Alpha&amp;quot; server as well new &amp;quot;Beta&amp;quot; computer.&#039;&#039;&#039; The latter will consist of 288 B200 GPU and recently added RTX 6000 pro nodes. The expected cost for &#039;&#039;&#039;EAI is $0.50/unit (SU),&#039;&#039;&#039; which will provide CUNY PIs with a rate that is well below a typical AWS rate.  One SU corresponds to one hour of H100 compute, and one hour of B200 compute corresponds to two SU.  In comparison the CUNY-HPCC recovery costs for public servers are &#039;&#039;&#039;$0.015 per CPU hour (1 unit) and $0.09 per GPU hour (6 units).&#039;&#039;&#039;  See section HPCC access plans for further details.    &lt;br /&gt;
&lt;br /&gt;
== CUNY-HPCC services ==&lt;br /&gt;
CUNY-HPCC provides professionally maintained, modern computational environment and architectures, advanced storage and fast interconnects. CUNY-HPCC:     &lt;br /&gt;
&lt;br /&gt;
:*Supports research computing at CUNY - for faculty, their collaborators at other universities, and their public and private sector partners, and CUNY students and research staff.&lt;br /&gt;
:*Provides state-of-the-art computing resources and comprehensive research support services including expertise and full support for users with allocation on EMPIRE-AI. &lt;br /&gt;
:*Creates opportunities for the CUNY research community to develop new partnerships with the government and private sectors.&lt;br /&gt;
:*Leverage the HPCC capabilities to acquire additional research resources for its faculty and graduate students in existing and major new programs.&lt;br /&gt;
:*Maintains tickets for all CUNY users with allocation on EAI.&lt;br /&gt;
&lt;br /&gt;
== Organization of systems and data storage (architecture) ==&lt;br /&gt;
All user data and project data are kept on the Parallel File System Storage (PFSS), which is mounted only on the login node(s) of all servers. It holds both the user directories and a specific partition called &#039;&#039;&#039;/scratch&#039;&#039;&#039; (see below). The main features of the &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition are: &#039;&#039;&#039;1.&#039;&#039;&#039; it is mounted on all computational nodes and on all login nodes; &#039;&#039;&#039;2.&#039;&#039;&#039; it is fast; &#039;&#039;&#039;3.&#039;&#039;&#039; the &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition is temporary space – it is &#039;&#039;&#039;not a home directory&#039;&#039;&#039; for accounts, nor can it be used for long-term data preservation. Users must use the &amp;quot;staging&amp;quot; procedure described below to ensure preservation of their data, codes and parameter files. The figure below is a schematic of the environment.   &lt;br /&gt;
&lt;br /&gt;
Upon  registering with HPCC every user will get 2 directories:&lt;br /&gt;
&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – this is temporary workspace on the HPC systems. Currently scratch resides on the same file system as global/u.&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – space for “home directory”, i.e., storage space on the DSMS for program, scripts, and data&lt;br /&gt;
&lt;br /&gt;
:•	In some instances a user will also have use of disk space on the DSMS in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;projectid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; (IRods). &lt;br /&gt;
[[File:HPCC_structure.png|center|frameless|900x900px]]&lt;br /&gt;
The &#039;&#039;&#039;/global/u/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory has a quota (see below for details), while &#039;&#039;&#039;/scratch/&amp;lt;userid&amp;gt;&#039;&#039;&#039; does not. However, the &#039;&#039;&#039;/scratch&#039;&#039;&#039; space is cleaned up following the rules described below. There are no guarantees of any kind that files in &#039;&#039;&#039;/scratch&#039;&#039;&#039; will be preserved during hardware crashes or clean-ups. Access to all HPCC resources is provided by a bastion host called &#039;&amp;lt;nowiki/&amp;gt;&#039;&#039;&#039;&#039;&#039;chizen&#039;&#039;&#039;&#039;&#039;. The Data Transfer Node, called &#039;&#039;&#039;Cea&#039;&#039;&#039;, allows file transfer from/to remote sites directly to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; or to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;.         &lt;br /&gt;
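&lt;br /&gt;
As a minimal staging sketch (assuming the directory layout described above; the login name is assumed to match the &amp;lt;userid&amp;gt; in the paths, and my_project/my_program are placeholders):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Stage input from the home area to the fast /scratch space, run, then save results back.&lt;br /&gt;
cp -r /global/u/$USER/my_project /scratch/$USER/&lt;br /&gt;
cd /scratch/$USER/my_project&lt;br /&gt;
./my_program                                   # placeholder executable&lt;br /&gt;
cp -r results /global/u/$USER/my_project/      # /scratch is temporary and may be cleaned&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;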
&lt;br /&gt;
==HPC systems==&lt;br /&gt;
&lt;br /&gt;
The HPC Center operates a variety of architectures in order to support complex and demanding workflows. All computational resources of different types are united into a single hybrid cluster called Arrow. The latter deploys symmetric multiprocessor (SMP) nodes with and without GPUs, a distributed shared memory (NUMA) node, fat (large memory) nodes, and advanced SMP nodes with multiple GPUs. The number of GPUs per node varies between 2 and 8, as do the GPU interface and GPU family. Thus, the basic GPU nodes hold two Tesla K20m cards (attached through the PCIe interface), while the most advanced ones support eight Ampere A100 GPUs connected via the SXM interface.   &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Overview of Computational architectures&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SMP&#039;&#039;&#039; servers have several processors (working under a single operating system) which &amp;quot;share everything&amp;quot;.  Thus  all cpu-cores allocate a common memory block via shared bus or data path. SMP servers support all combinations of memory VS cpu (up to the limits of the particular computer). The SMP servers are commonly used to run sequential or thread parallel (e.g. OpenMP) jobs and they may have or may not have GPU.  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Cluster&#039;&#039;&#039; is defined as a single system comprising a set of servers interconnected with a high-performance network. Specific software coordinates programs on and/or across these servers in order to perform computationally intensive tasks. The most common cluster type consists of several identical SMP servers connected via a fast interconnect. Each SMP member of the cluster is called a &#039;&#039;&#039;node&#039;&#039;&#039;. All nodes run independent copies of the same operating system (OS). Some or all of the nodes may incorporate GPUs.  &lt;br /&gt;
&lt;br /&gt;
Hybrid clusters combine nodes of different architectures. For instance the main CUNY-HPCC machine is a hybrid cluster called &#039;&#039;&#039;Arrow&#039;&#039;&#039;.  Sixty two (62) of its nodes are identical GPU enabled SMP servers each with 2 x GPU K20m, 3 are SMP but with extended memory (fat nodes), one node is distributed shared memory  node (NUMA, see below) and 2 are fat SMP servers especially designed to support 8 NVIDIA GPU per node. The latter are connected via SXM interface. In addition HPCC operates the cluster &#039;&#039;&#039;Herbert&#039;&#039;&#039; dedicated only to education. &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;distributed shared memory&#039;&#039;&#039; computer is a tightly coupled server in which the memory is physically distributed but logically unified as a single block. The system resembles an SMP, but the possible number of CPU cores and amount of memory are far beyond the limitations of an SMP. Because the memory is distributed, access times across the address space are non-uniform; hence this architecture is called Non-Uniform Memory Access (NUMA). Similarly to SMP, &#039;&#039;&#039;NUMA&#039;&#039;&#039; systems are typically used for applications such as data mining and decision support systems, in which processing can be parceled out to a number of processors that collectively work on common data. HPCC operates the &#039;&#039;&#039;NUMA&#039;&#039;&#039; node of Arrow named &#039;&#039;&#039;Appel&#039;&#039;&#039;. This node does not have GPUs. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039; Infrastructure systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
o	Master Head Node (&#039;&#039;&#039;MHN/Arrow)&#039;&#039;&#039; is a redundant login node from which all jobs on all servers start. This server is not directly accessible from outside CSI campus. Note that name of main server and its login nodes are the same Arrow. Thus users can access the Arrow login nodes using name Arrow or MHN.  &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Chizen&#039;&#039;&#039; is a redundant gateway server which provides access to protected HPCC domain.&lt;br /&gt;
&lt;br /&gt;
o  &#039;&#039;&#039;Cea&#039;&#039;&#039; is a file transfer node that allows transfer of files between users’ computers and the /scratch space or /global/u/&amp;lt;userid&amp;gt;. &#039;&#039;&#039;Cea&#039;&#039;&#039; is accessible directly (not only via &#039;&#039;&#039;Chizen&#039;&#039;&#039;), but allows only a limited set of shell commands.  &lt;br /&gt;
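&lt;br /&gt;
For illustration, a remote transfer through the Data Transfer Node might look like the sketch below. This is only a sketch: the host name cea.csi.cuny.edu is a placeholder (use the actual Cea address provided with your account), and USERID/my_dataset/my_project are placeholders as well.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# From your own computer: copy a data set to your HPCC home area through the DTN.&lt;br /&gt;
scp -r ./my_dataset USERID@cea.csi.cuny.edu:/global/u/USERID/&lt;br /&gt;
&lt;br /&gt;
# Retrieve results from scratch after a job finishes.&lt;br /&gt;
scp -r USERID@cea.csi.cuny.edu:/scratch/USERID/my_project/results ./&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;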
&lt;br /&gt;
&#039;&#039;&#039;Table 1&#039;&#039;&#039; below provides a quick summary of the attributes of each of the sub-clusters of the main HPC Center cluster, called Arrow.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Master Head Node&lt;br /&gt;
!Sub System&lt;br /&gt;
!Tier&lt;br /&gt;
!Type&lt;br /&gt;
!Type of Jobs&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores&lt;br /&gt;
!GPUs&lt;br /&gt;
!Mem/node&lt;br /&gt;
!Mem/core&lt;br /&gt;
!Chip Type&lt;br /&gt;
!GPU Type and Interface&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;17&amp;quot; |&#039;&#039;&#039;&amp;lt;big&amp;gt;Arrow&amp;lt;/big&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Penzias&lt;br /&gt;
| rowspan=&amp;quot;10&amp;quot; |Advanced&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Hybrid Cluster&lt;br /&gt;
|Sequential &amp;amp; Parallel jobs w/wo GPU&lt;br /&gt;
|66&lt;br /&gt;
|16 &lt;br /&gt;
|2&lt;br /&gt;
|64 GB&lt;br /&gt;
|4 GB&lt;br /&gt;
|SB, EP 2.20 GHz&lt;br /&gt;
|K20m GPU, PCIe v2&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Sequential &amp;amp; Parallel jobs&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |1&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|1500 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|36&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|32 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Appel&lt;br /&gt;
|NUMA&lt;br /&gt;
|Massive Parallel, sequential, OpenMP&lt;br /&gt;
|1&lt;br /&gt;
|384&lt;br /&gt;
| -&lt;br /&gt;
|11 TB&lt;br /&gt;
|28 GB&lt;br /&gt;
|IB, 3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Cryo&lt;br /&gt;
|SMP&lt;br /&gt;
|Sequential and Parallel jobs, with GPU&lt;br /&gt;
|1&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|1500 GB&lt;br /&gt;
|37 GB&lt;br /&gt;
|SL, 2.40 GHz&lt;br /&gt;
|V100 (32GB) GPU, SXM&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Blue Moon&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Hybrid Cluster&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Sequential and Parallel jobs w/wo GPU&lt;br /&gt;
|24&lt;br /&gt;
|32&lt;br /&gt;
| -&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |192 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |6 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SL, 2.10 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|32 &lt;br /&gt;
|2&lt;br /&gt;
|V100(16GB) GPU, PCIe&lt;br /&gt;
|-&lt;br /&gt;
|Karle&lt;br /&gt;
|SMP&lt;br /&gt;
|Visualization, MATLAB/Mathematica&lt;br /&gt;
|1&lt;br /&gt;
|36*&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Chizen&lt;br /&gt;
|Gateway&lt;br /&gt;
|No jobs allowed&lt;br /&gt;
| colspan=&amp;quot;7&amp;quot; | -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CFD&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot; |Parallel, Seq, OpenMP &lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|768 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 4.8 GHz&lt;br /&gt;
|A40, PCIe, v4 &lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |PHYS&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|640 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4 GHz&lt;br /&gt;
|L40, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CHEM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.0 GHz&lt;br /&gt;
|A100/40, SXM&lt;br /&gt;
|-&lt;br /&gt;
|ASRC&lt;br /&gt;
|Condo&lt;br /&gt;
|SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|}&lt;br /&gt;
Note: SB = Intel(R) Sandy Bridge, HL = Intel(R) Haswell, IB = Intel(R) Ivy Bridge, SL = Intel(R) Xeon(R) Gold, ER = AMD(R) EPYC Rome, EM = AMD(R) EPYC Milan, EG = AMD(R) EPYC Genoa&lt;br /&gt;
&lt;br /&gt;
= Recovery of  operational costs =&lt;br /&gt;
&lt;br /&gt;
=== Compute on public resources ===&lt;br /&gt;
CUNY-HPCC is not for profit core research facility at CUNY. Our mission is to support all types of research that require advanced computational resources. CUNY-HPCC operations are not for profit and are NOT directly or indirectly sponsored by CUNY or  College of Staten Island (CSI). Consequently CUNY-HPCC applies cost recovery model recapturing only &#039;&#039;&#039;&amp;lt;u&amp;gt;operational costs with no profit for HPCC&amp;lt;/u&amp;gt;&#039;&#039;&#039;. The recovered costs are calculated using actual documented operational expenses and are break even for all CUNY users. The used methodology is approved by CUNY-RF methodology used in other CUNY research facilities. The costs are reviewed and consequently updated twice a year. The cost recovery charging schema is based on &#039;&#039;&#039;&amp;lt;u&amp;gt;unit-hour&amp;lt;/u&amp;gt;&#039;&#039;&#039;. The unit can be either CPU  unit or GPU unit. The definitions of these is given in a table below:&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Definitions of unit-hour&lt;br /&gt;
!Type of resource&lt;br /&gt;
!Unit-hour &lt;br /&gt;
!For V100, A30, A40 or L40&lt;br /&gt;
!For A100&lt;br /&gt;
|-&lt;br /&gt;
|CPU unit &lt;br /&gt;
|1 cpu core/hour&lt;br /&gt;
| --&lt;br /&gt;
| --&lt;br /&gt;
|-&lt;br /&gt;
|GPU unit &lt;br /&gt;
|(4 cpu cores + 1 GPU thread )/hour&lt;br /&gt;
|4 cpu cores + 1 GPU&lt;br /&gt;
|4 cpu cores and 1/7 A100 &lt;br /&gt;
|}&lt;br /&gt;
The cost recovery model for public (non-condominium) servers offers the following options:&lt;br /&gt;
&lt;br /&gt;
# Minimal Access Plan (MAP)&lt;br /&gt;
# Compute on Demand (CODP)&lt;br /&gt;
# Lease a node(s)&lt;br /&gt;
&lt;br /&gt;
Colleges may participate in any of the Minimal Access Plan tiers. The tiered pricing structure is as follows:&lt;br /&gt;
&lt;br /&gt;
- Tier A: $5,000 per year&lt;br /&gt;
&lt;br /&gt;
- Tier B: $10,000 per year&lt;br /&gt;
&lt;br /&gt;
- Tier C: $25,000 per year&lt;br /&gt;
&lt;br /&gt;
Within each tier, the cost is &#039;&#039;&#039;$0.015 per CPU hour and $0.09 per GPU hour.&#039;&#039;&#039; It is important to note that the Minimum Access Plan (MAP) tiers A, B, and C are &#039;&#039;&#039;not all-inclusive.&#039;&#039;&#039; Therefore, even if a college pays for a higher tier, it does not guarantee unlimited use of all college employees, faculty, and students throughout the year.&lt;br /&gt;
&lt;br /&gt;
# The MAP plan is indirectly linked to the number of hours used. Consequently, the definition of “up to 12 users for B tier” should not be interpreted as “all users up to 12 per college receive unlimited access to HPCC resources.”&lt;br /&gt;
# The number of users per tier is determined by statistical analysis of resource usage and statistics for the average duration of a job across all CUNY institutions. This means that if the number of users from a college exceeds the number of hours encoded in the MAP fee, the additional hours will be charged at the preferred rate of $0.015 per CPU hour and $0.09 per GPU thread hour.&lt;br /&gt;
# Furthermore, the &amp;lt;u&amp;gt;MAP fee may fully cover the expenses for individuals from a given college for a year,&amp;lt;/u&amp;gt; but it may also not. This depends on the actual usage and type of resources, as well as the number of additional individuals from the same college who appear during the year. It is crucial to understand that allocation on HPCC is not restricted; our focus is solely on the completion of tasks. &lt;br /&gt;
# The free 11,540 CPU hours and 1,440 GPU hours are allocated per PI account and project and are available only for MAP-B and MAP-C (see below). These free hours are intended to facilitate project establishment. Consequently, the PI can request that a member or members of a group utilize time and explore new research opportunities. For instance, upon creating his account the PI X will receive free hours. If he/she/whatever hires a graduate student(s), they may share these free hours if they work on the same project.&lt;br /&gt;
# It is important to note that each GPU requires at least four CPU cores to operate. Therefore, if a user requests one GPU thread, it equates to one GPU thread and four CPU threads, which is equivalent to $0.15 per unit hour for that unit (units are explained above). Note that not all GPU support virtualization, so unit may include the whole GPU depend on used GPU type.  &lt;br /&gt;
&lt;br /&gt;
=== Compute on demand (CODP) ===&lt;br /&gt;
Users from colleges that do not participate in MAP (A,B,C) are charged &#039;&#039;&#039;$0.018 per CPU hour and $0.11 per GPU hour.&#039;&#039;&#039; There is no free time associated with CODP.  &lt;br /&gt;
&lt;br /&gt;
=== Lease a public node(s) ===&lt;br /&gt;
Users may lease a node(s) for project. That ensures them 100% access and no  time or job limitations over leased resource. The minimum lease time is 30 days (one month). Longe leases (more than 90 days) have 10% discount. &#039;&#039;&#039;MAP users&#039;&#039;&#039; are charged between $172 and $950 per month (see below), depending on the type of node. &#039;&#039;&#039;Non-MAP&#039;&#039;&#039; users are charged between $249.58 and $1,399 per month, depending on the type of node. Please see below for details and examples. &lt;br /&gt;
&lt;br /&gt;
=== Condo resources ===&lt;br /&gt;
Condo users only pay for infrastructure support.  The annual fee depends on the type of node and ranges from $1,540 to $4,520 per year. Please see table below for details. &lt;br /&gt;
&lt;br /&gt;
=== Storage ===&lt;br /&gt;
Storage costs are $60 per TB per year, backup costs are $45 per TB per year, and archive costs are $35 per TB per year. The first 50 GB of scratch storage are free. Prices are calculated at the end of each month.&lt;br /&gt;
&lt;br /&gt;
# Additionally, note that &#039;&#039;&#039;file transfers from/to HPCC are free&#039;&#039;&#039;, but it is important to consider the CUNY network speed, which is significantly slower than modern standards.  This will impact the time required to download large data sets. For large data HPCC utilizes and recommend to use secure parallel download via Globus.&lt;br /&gt;
# All services associated with data and storage provided by HPCC are free for CUNY users.&lt;br /&gt;
&lt;br /&gt;
=== HPCC access plans  ===&lt;br /&gt;
a.     &#039;&#039;&#039;Minimum access (MAP):&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Minimum access is designed to provide wide support for research activities in any college, to promote collaboration between colleges, to help establish a new research project, and/or to be testbed for new studies. MAP accounts operate under strict fair share policy so actual waiting time for a job in a que depends on resources used by that account in previous cycles. In addition all jobs have strict time limitations. Therefore long jobs must use check-points.&lt;br /&gt;
&lt;br /&gt;
The MAP has 3 tiers:&lt;br /&gt;
&lt;br /&gt;
·     A: Basic tier fee is $5000 per year. It is designed to provide support for users from colleges with low level of research activities. The fee covers infrastructure expenses associated with 1-2 users from these colleges. &lt;br /&gt;
&lt;br /&gt;
·     B:  Medium tier fee is $15,000 per year. The fee covers infrastructure expenses of up to 12 users from these colleges. In addition, every account under medium tier gets free 11520 CPU hours and free 1440 GPU hours upon opening. &lt;br /&gt;
&lt;br /&gt;
·      C: Advanced tier is $25,000 per year. The fee covers infrastructure expenses for all users from these colleges. In addition every new account from this tier gets free 11520 CPU hours and free 1440 GPU hours upon opening.  &lt;br /&gt;
&lt;br /&gt;
The MAP users get charged per CPU/GPU hour at low rate of &#039;&#039;&#039;&amp;lt;u&amp;gt;$0.015 per cpu hour and $0.09 per GPU hour&amp;lt;/u&amp;gt;.&#039;&#039;&#039;   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for MAP users example&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.015/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.24/hour&lt;br /&gt;
|-&lt;br /&gt;
| 4 cores + 1 GPU &lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.15/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.33/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2  &lt;br /&gt;
|$0.42/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2  &lt;br /&gt;
|$0.66/hour&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$1.32/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
b.     &#039;&#039;&#039;Computing on demand (CODP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Computing on demand plan (CODP) is open for all users from all CUNY colleges that do not participate in MAP plan, but want to use the HPCC resources. CODP accounts operate under strict fair share policy, so actual waiting time for a job in a que depends on resources previously used. In addition, all jobs have time limitations, so long jobs must use check-points. The users in CODP are charged for the time (CPU and GPU) per hour. The current rates are &#039;&#039;&#039;$0.018 per cpu hour and $0.11 per GPU hour.&#039;&#039;&#039;  In difference to MAP, the new CODP accounts does not come with free time. The invoices are generated and send to users (PI only) at the end of each month.  The examples in following table explain the fees structure:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for CODP plan&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.018/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.288/hour&lt;br /&gt;
|-&lt;br /&gt;
|4 cores + 1 GPU&lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.293/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.334/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 1 GPU&lt;br /&gt;
|32&lt;br /&gt;
|1  &lt;br /&gt;
|$0.666/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU &lt;br /&gt;
|32&lt;br /&gt;
|2&lt;br /&gt;
|$0.756/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
c.  &#039;&#039;&#039;Leasing public node(s) (LNP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Leasing node plan allows the users to lease the node(s) for the duration of the project. The minimum lease time is 30 days (one month), but leases of any length are possible. Discounts of 10% are given to users whose lease is longer than 90 days. Discounts cannot be combined.  In difference to MAP and CODP the LNP users do not compete for resources and have full access to rented resources 24/7.   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for lease node(s) for MAP users&lt;br /&gt;
|Job (MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/30 days&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$172.80&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$264.96&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2&lt;br /&gt;
|$302.40&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$475.20&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$760.0&lt;br /&gt;
|-&lt;br /&gt;
|64 cores + 8 GPU&lt;br /&gt;
|64&lt;br /&gt;
|8&lt;br /&gt;
|$950.40&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for lease a node(s) for &amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; non &amp;lt;/span&amp;gt;-MAP users&lt;br /&gt;
|Job (non-MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/month&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$249.82&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$497.64&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1 &lt;br /&gt;
|$443.23&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$886.64&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40 &lt;br /&gt;
|8&lt;br /&gt;
|$1399.68&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
d.     &#039;&#039;&#039;Condo Ownership (COP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Condo describes a model when user(s) own a node/server managed by HPCC. Only full time faculty can own condo node. Condo nodes are fully integrated into HPCC infrastructure. The owners pay only HPCC’s infrastructure support operational fee which includes only proportional part of licenses and materials need for day-to-day operations. The fees are reviewed twice a year and currently are $0.003 per CPU hour and $0.02 per GPU hour. Condo owners can “borrow”  (upon agreement) free of charge any node(s) from condo stack and can also lease (for higher fee – see below) their own nodes to non-condo users. The minimum let time is 30 days. The fees collected from non-condo users offset payments of the owner.   &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Condo owners costs per year&lt;br /&gt;
|Type of condo node&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/year&lt;br /&gt;
|-&lt;br /&gt;
|Large hybrid SXM&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|$4518.92 &lt;br /&gt;
|-&lt;br /&gt;
|Small hybrid&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|$1540.54&lt;br /&gt;
|-&lt;br /&gt;
|Medium compute&lt;br /&gt;
|96&lt;br /&gt;
|0&lt;br /&gt;
|$2464.86&lt;br /&gt;
|-&lt;br /&gt;
|Large compute&lt;br /&gt;
|128&lt;br /&gt;
|0&lt;br /&gt;
|$3286.49&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Condo owners can contract their node(s) to other non-condo users. Renting period is unlimited with min. length of 30 days. The table below shows the payments the non-condo users recompense the condo owners. These fees are accumulated in owners account(s) and do offset the owner’s duties. Discount of 10% is applied for leases longer than 90 days.    &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Type of nodes and monthly lease fees for condo nodes&lt;br /&gt;
!Type of node&lt;br /&gt;
!Renters cost/month&lt;br /&gt;
!Long term (90+ days) rent cost/month&lt;br /&gt;
!CPU/node&lt;br /&gt;
!CPU type&lt;br /&gt;
!GPU/node &lt;br /&gt;
!GPU type&lt;br /&gt;
!GPU interface&lt;br /&gt;
|-&lt;br /&gt;
|Large Hybrid&lt;br /&gt;
|$602.52&lt;br /&gt;
|$564.86&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.2 GHz&lt;br /&gt;
|8&lt;br /&gt;
|A100/80&lt;br /&gt;
|SXM&lt;br /&gt;
|-&lt;br /&gt;
|Small Hybrid&lt;br /&gt;
|$205.41&lt;br /&gt;
|$192.57&lt;br /&gt;
|48&lt;br /&gt;
|EPYC, 2.8 GHz&lt;br /&gt;
|2&lt;br /&gt;
|A40, A30, L40&lt;br /&gt;
|PCIe v4&lt;br /&gt;
|-&lt;br /&gt;
|Medium Non GPU&lt;br /&gt;
|$328.65&lt;br /&gt;
|$308.11&lt;br /&gt;
|96&lt;br /&gt;
|EPYC, 4.11GHz&lt;br /&gt;
|0&lt;br /&gt;
|None&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|Large Non GPU&lt;br /&gt;
|$438.20&lt;br /&gt;
|$410.81&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.0 GHz&lt;br /&gt;
|0&lt;br /&gt;
|None &lt;br /&gt;
|NA&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Free time ===&lt;br /&gt;
Any new project &#039;&#039;&#039;from CUNY colleges/centers that participate in MAP-B or MAP-C&#039;&#039;&#039; is entitled to &#039;&#039;&#039;11520 free CPU hours and 1440 free GPU hours&#039;&#039;&#039; (for example, 11520 CPU hours corresponds to a 16-core job running around the clock for 30 days, since 16 × 24 × 30 = 11520). Users under &#039;&#039;&#039;MAP-A&#039;&#039;&#039; are not entitled to free time. The free compute hours are intended to help establish a project and are therefore shared among all members of the project; they can be used either by the PI or by any number of the project&#039;s members. It is important to note that &amp;lt;u&amp;gt;free time is per project, not per user account, so any project can receive free time only once. External collaborators from outside CUNY are not normally eligible for free time.&amp;lt;/u&amp;gt; Additional hours beyond the free time are charged at MAP plan rates. &#039;&#039;&#039;&amp;lt;u&amp;gt;Please contact the CUNY-HPCC director for further details.&amp;lt;/u&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Support for research grants ==&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;u&amp;gt;All proposals dated January 1st, 2026 (&amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; 01/01/26 &amp;lt;/span&amp;gt;) or later&amp;lt;/u&amp;gt;&#039;&#039;&#039; that require computational resources &#039;&#039;&#039;&amp;lt;u&amp;gt;must include a budget for cost-recovery fees at CUNY-HPCC.&amp;lt;/u&amp;gt;&#039;&#039;&#039;  For a project, the PI can choose among: &lt;br /&gt;
&lt;br /&gt;
* Lease node(s). This is a useful option for well-defined projects and for those with a large computational component requiring 100% availability of the computational resource. &lt;br /&gt;
* Use &amp;quot;on-demand&amp;quot; resources. This is a flexible option, well suited to experimental projects or to exploring new areas of study. The drawback is that resources are shared among all users under the fair-share policy, so immediate access to a resource cannot be guaranteed. &lt;br /&gt;
* Participate in the CONDO tier. This is the most beneficial option in terms of resource availability and level of support. It best fits the focused research of a group or groups (e.g. materials science). &lt;br /&gt;
&lt;br /&gt;
In all cases the PI can use the appropriate rates listed above to establish a correct budget for the proposal (a worked sketch is given below). The PI should &#039;&#039;&#039;&amp;lt;u&amp;gt;contact the Director of CUNY-HPCC, Dr. Alexander Tzanov&amp;lt;/u&amp;gt;&#039;&#039;&#039; (alexander.tzanov@csi.cuny.edu) to discuss the project&#039;s computational requirements, including optimal and most economical computational workflows, suitable hardware, shared or owned resources, CUNY-HPCC support options, and any other matter concerning a correct and optimal computational budget for the proposal.    &lt;br /&gt;
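&lt;br /&gt;
As a rough illustration only (this is not an official HPCC calculator, and the usage figures are placeholders), the short Python sketch below turns an estimated yearly usage into a dollar amount using the public MAP cost-recovery rates quoted above ($0.015 per CPU hour, $0.09 per GPU hour):&lt;br /&gt;
&lt;br /&gt;
 # Illustrative sketch only: estimate a yearly compute budget for a proposal&lt;br /&gt;
 # from the public MAP cost-recovery rates quoted above.&lt;br /&gt;
 CPU_RATE = 0.015   # dollars per CPU core-hour&lt;br /&gt;
 GPU_RATE = 0.09    # dollars per GPU-hour&lt;br /&gt;
 &lt;br /&gt;
 def yearly_budget(cpu_core_hours, gpu_hours):&lt;br /&gt;
     # Total yearly cost in dollars for the estimated usage.&lt;br /&gt;
     return cpu_core_hours * CPU_RATE + gpu_hours * GPU_RATE&lt;br /&gt;
 &lt;br /&gt;
 # Hypothetical project: 200000 CPU core-hours and 5000 GPU-hours per year.&lt;br /&gt;
 print(yearly_budget(200000, 5000))   # 3000.0 + 450.0 = 3450.0 dollars&lt;br /&gt;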
&lt;br /&gt;
== Partitions and jobs ==&lt;br /&gt;
The only way to submit job(s) to the HPCC servers is through the SLURM batch system.  Every job, regardless of its type (interactive, batch, serial, parallel, etc.), must be submitted via SLURM, which allocates the requested resources on the proper server and starts the job(s) according to a predefined, strict fair-share policy. Computational resources (CPU cores, memory, GPU) are organized into &#039;&#039;&#039;partitions&#039;&#039;&#039;. Users are granted permission to use one or another partition and the corresponding QOS key. The table below shows the partitions and their limitations (in progress).&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Partition&lt;br /&gt;
!Max cores/job&lt;br /&gt;
!Max jobs/user&lt;br /&gt;
!Total cores/group&lt;br /&gt;
!Time limits&lt;br /&gt;
!Tier&lt;br /&gt;
!&lt;br /&gt;
!GPU types&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|partnsf&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|K20m, V100/16, A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partchem&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A100/80, A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partcfd&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partsym&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partasrc&lt;br /&gt;
|48&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabD&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|V100/16,A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabN&lt;br /&gt;
|384&lt;br /&gt;
|50&lt;br /&gt;
|384&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|None&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partphys&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|L40&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;partnsf&#039;&#039;&#039; is the main partition, with assigned resources across all sub-servers. Users may submit sequential, thread-parallel or distributed-parallel jobs, with or without GPU.&lt;br /&gt;
* &#039;&#039;&#039;partchem&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partphys&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partsym&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partasrc&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partmatlabD&#039;&#039;&#039; allows running MATLAB&#039;s Distributed Parallel Server jobs across the main cluster.&lt;br /&gt;
* &#039;&#039;&#039;partmatlabN&#039;&#039;&#039; provides access to the large MATLAB node with 384 cores and 11 TB of shared memory. It is useful for running parallel MATLAB jobs with the Parallel Computing Toolbox.&lt;br /&gt;
* &#039;&#039;&#039;partdev&#039;&#039;&#039; is dedicated to development. All HPCC users have access to this partition, whose assigned resources are one computational node with 16 cores, 64 GB of memory and 2 GPUs (K20m). This partition has a time limit of 4 hours. A sketch of how these limits can be checked before submission is given below.&lt;br /&gt;
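&lt;br /&gt;
For illustration only (the helper below is not part of any HPCC tooling; the limits are copied from the table above and may change), a resource request can be checked against a partition&#039;s per-job limits before it is handed to SLURM:&lt;br /&gt;
&lt;br /&gt;
 # Illustrative Python sketch: check a request against the per-job limits&lt;br /&gt;
 # listed in the partition table above before submitting it with sbatch.&lt;br /&gt;
 PARTNSF_MAX_CORES, PARTNSF_MAX_HOURS = 128, 240   # partnsf limits from the table&lt;br /&gt;
 PARTDEV_MAX_CORES, PARTDEV_MAX_HOURS = 16, 4      # partdev (development) limits&lt;br /&gt;
 &lt;br /&gt;
 def fits(cores, hours, max_cores, max_hours):&lt;br /&gt;
     # A request fits when both the core count and the wall time are within limits.&lt;br /&gt;
     return cores &amp;lt;= max_cores and hours &amp;lt;= max_hours&lt;br /&gt;
 &lt;br /&gt;
 print(fits(64, 48, PARTNSF_MAX_CORES, PARTNSF_MAX_HOURS))   # True&lt;br /&gt;
 print(fits(16, 8, PARTDEV_MAX_CORES, PARTDEV_MAX_HOURS))    # False: partdev caps jobs at 4 hours&lt;br /&gt;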
&lt;br /&gt;
== Hours of Operation ==&lt;br /&gt;
In order to maximize the use of resources, HPCC applies a “rolling” maintenance scheme across all systems. When downtime is needed, HPCC will notify all users a week or more in advance (unless an emergency situation occurs).  The fourth Tuesday morning of each month, from 8:00 AM to 12:00 PM, is normally reserved (but not always used) for scheduled maintenance; please plan accordingly.  Unplanned maintenance to remedy system-related problems may be scheduled as needed outside of the above-mentioned window. Reasonable attempts will be made to inform users running on the affected systems when such needs arise. Note that users are strongly encouraged to use checkpoints in their jobs.&lt;br /&gt;
&lt;br /&gt;
== User Support ==&lt;br /&gt;
Users are strongly encouraged to read this Wiki carefully before submitting ticket(s) for help. In particular, the sections on compiling and running parallel programs, and the section on the SLURM batch queueing system will give you the essential knowledge needed to use the CUNY HPCC systems.  We have strived to maintain the most uniform user applications environment possible across the Center&#039;s systems to ease the transfer of applications&lt;br /&gt;
and run scripts among them.  &lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY&lt;br /&gt;
community in parallel programming techniques, HPC computing architecture, and the essentials of using our&lt;br /&gt;
systems. Please follow our mailings on the subject and feel free to inquire about such courses.  We regularly schedule training visits and classes at the various CUNY campuses.  Please let us know if such a training visit is of interest.  In the past, topics have included an overview of parallel programming, GPU programming and architecture, using the evolutionary biology software at the HPC Center, the SLURM queueing system at the CUNY HPC Center, mixed GPU-MPI and OpenMP programming, etc.  Staff members have also presented guest lectures in formal classes throughout the CUNY campuses.    &lt;br /&gt;
&lt;br /&gt;
If you have problems accessing your account and cannot log in to the ticketing service, please send an email to:&lt;br /&gt;
&lt;br /&gt;
  [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu] &lt;br /&gt;
&lt;br /&gt;
== Warnings and modes of operation ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. hpchelp@csi.cuny.edu is for questions and account-help communication &#039;&#039;&#039;only&#039;&#039;&#039; and does not accept tickets unless the ticketing system is not operational. For tickets, please use the ticketing system mentioned above. This ensures that the staff member with the most appropriate skill set and job-related responsibility will respond to your questions. During the business week you should expect a response within 48 hours, often even a same-day response. During the weekend you may not get any response. &lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;&#039;E-mails to hpchelp@csi.cuny.edu must have a valid CUNY e-mail as the reply address.&#039;&#039;&#039; Messages originating from public mailers (Gmail, Hotmail, etc.) are filtered out.&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;&#039;Do not send questions to individual CUNY HPC Center staff members directly.&#039;&#039;&#039;  These will be returned to the sender with a polite request to submit a ticket or email the Helpline.  This applies to replies to initial questions as well.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff members are focused on providing high-quality support to the user community, but compared&lt;br /&gt;
to other HPC centers of similar size &#039;&#039;&#039;our staff is extremely lean&#039;&#039;&#039;.  Please make full use of the tools that we have provided (especially&lt;br /&gt;
the Wiki), and feel free to offer suggestions for improved service.  We hope and expect that your experience in using&lt;br /&gt;
our systems will be predictably good and productive.&lt;br /&gt;
&lt;br /&gt;
== User Manual ==&lt;br /&gt;
&lt;br /&gt;
The old version of the user manual provides PBS, not SLURM, batch scripts as examples. CUNY-HPCC currently uses the SLURM scheduler, so users must check and use only the updated brief SLURM manual distributed with new accounts, or ask CUNY-HPCC for a copy of it.&lt;/div&gt;</summary>
		<author><name>Alex</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=980</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=980"/>
		<updated>2026-03-14T15:12:33Z</updated>

		<summary type="html">&lt;p&gt;Alex: /* Organization of systems and data storage (architecture) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
[[Image:hpcc-panorama3.png]]&lt;br /&gt;
&lt;br /&gt;
== Mission of CUNY-HPCC ==&lt;br /&gt;
The City University of New York (CUNY) High Performance Computing Center (HPCC) is a core research and education facility for the University. It is located on the campus of the College of Staten Island, 2800 Victory Boulevard, Staten Island, New York 10314. The core mission of CUNY-HPCC is to advance education and boost scientific research and discovery at the University by managing state-of-the-art computing infrastructure and by providing comprehensive research support services, including domain-specific expertise in various aspects of computationally intensive research. CUNY is a member of the Empire AI (EAI) consortium, so CUNY-HPCC is a stepping stone for CUNY researchers toward EAI advanced facilities.  &lt;br /&gt;
&lt;br /&gt;
== Empire AI and CUNY-HPCC ==&lt;br /&gt;
The Empire AI consortium includes the &#039;&#039;&#039;CUNY Graduate Center&#039;&#039;&#039;, Columbia University, Cornell University, Icahn School of Medicine, New York University, Rochester Institute of Technology, Rensselaer Polytechnic Institute, the State University of New York, University at Buffalo, and University of Rochester.  &#039;&#039;&#039;CUNY-HPCC provides supports and maintains tickets for all CUNY users with allocation on EAI.  In addition CUNY-HPCC is a stepping stone for CUNY researchers since it operates (on smaller scale) architectures (Including nodes with Hopper) similar to EAI, including extended &amp;quot;Alpha&amp;quot; server as well new &amp;quot;Beta&amp;quot; computer.&#039;&#039;&#039; The latter will consist of 288 B200 GPU and recently added RTX 6000 pro nodes. The expected cost for &#039;&#039;&#039;EAI is $0.50/unit (SU),&#039;&#039;&#039; which will provide CUNY PIs with a rate that is well below a typical AWS rate.  One SU corresponds to one hour of H100 compute, and one hour of B200 compute corresponds to two SU.  In comparison the CUNY-HPCC recovery costs for public servers are &#039;&#039;&#039;$0.015 per CPU hour (1 unit) and $0.09 per GPU hour (6 units).&#039;&#039;&#039;  See section HPCC access plans for further details.    &lt;br /&gt;
&lt;br /&gt;
== CUNY-HPCC services ==&lt;br /&gt;
CUNY-HPCC provides professionally maintained, modern computational environment and architectures, advanced storage and fast interconnects. CUNY-HPCC:     &lt;br /&gt;
&lt;br /&gt;
:*Supports research computing at CUNY - for faculty, their collaborators at other universities, and their public and private sector partners, and CUNY students and research staff.&lt;br /&gt;
:*Provides state-of-the-art computing resources and comprehensive research support services including expertise and full support for users with allocation on EMPIRE-AI. &lt;br /&gt;
:*Creates opportunities for the CUNY research community to develop new partnerships with the government and private sectors.&lt;br /&gt;
:*Leverage the HPCC capabilities to acquire additional research resources for its faculty and graduate students in existing and major new programs.&lt;br /&gt;
:*Maintains tickets for all CUNY users with allocation on EAI.&lt;br /&gt;
&lt;br /&gt;
== Organization of systems and data storage (architecture) ==&lt;br /&gt;
All user and project data are kept on the Parallel File System Storage (PFSS), which is mounted on the login node(s) of all servers. It holds both the user directories and a specific partition called &#039;&#039;&#039;/scratch&#039;&#039;&#039; (see below). The main features of the &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition are: &#039;&#039;&#039;1.&#039;&#039;&#039; it is mounted on all computational nodes and on all login nodes; &#039;&#039;&#039;2.&#039;&#039;&#039; it is fast; &#039;&#039;&#039;3.&#039;&#039;&#039; it is temporary space, and is &#039;&#039;&#039;not a home directory&#039;&#039;&#039; for accounts, nor can it be used for long-term data preservation.  Users must use the &amp;quot;staging&amp;quot; procedure described below to ensure preservation of their data, codes and parameter files. The figure below is a schematic of the environment.   &lt;br /&gt;
&lt;br /&gt;
Upon registering with HPCC, every user gets two directories:&lt;br /&gt;
&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – this is temporary workspace on the HPC systems. Currently scratch resides on the same file system as global/u.&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – space for “home directory”, i.e., storage space on the DSMS for program, scripts, and data&lt;br /&gt;
&lt;br /&gt;
:•	In some instances a user will also have use of disk space on the DSMS in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;projectid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; (IRods). &lt;br /&gt;
[[File:HPCC_structure.png|center|frameless|900x900px]]&lt;br /&gt;
The &#039;&#039;&#039;/global/u/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory has a quota (see below for details) while &#039;&#039;&#039;/scratch/&amp;lt;userid&amp;gt;&#039;&#039;&#039; does not. However, the &#039;&#039;&#039;/scratch&#039;&#039;&#039; space is cleaned up following the rules described below. There are no guarantees of any kind that files in &#039;&#039;&#039;/scratch&#039;&#039;&#039; will be preserved during hardware crashes or clean-ups.  Access to all HPCC resources is provided by a bastion host called &#039;&#039;&#039;chizen&#039;&#039;&#039;.  The Data Transfer Node called &#039;&#039;&#039;Cea&#039;&#039;&#039; allows file transfer from/to remote sites directly to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; or &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
==HPC systems==&lt;br /&gt;
&lt;br /&gt;
The HPC Center operates a variety of architectures in order to support complex and demanding workflows.  All computational resources of the different types are united into a single hybrid cluster called Arrow. The latter deploys symmetric multiprocessor (SMP) nodes with and without GPUs, a distributed shared memory (NUMA) node, fat (large-memory) nodes, and advanced SMP nodes with multiple GPUs. The number of GPUs per node varies between 2 and 8, as do the GPU interface and GPU family: the basic GPU nodes hold two Tesla K20m GPUs (attached through a PCIe interface), while the most advanced ones support eight Ampere A100 GPUs connected via the SXM interface.   &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Overview of Computational architectures&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SMP&#039;&#039;&#039; servers have several processors (working under a single operating system) which &amp;quot;share everything&amp;quot;: all CPU cores access a common memory block via a shared bus or data path. SMP servers support all combinations of memory vs. CPU (up to the limits of the particular computer). SMP servers are commonly used to run sequential or thread-parallel (e.g. OpenMP) jobs, and they may or may not have GPUs.  &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;cluster&#039;&#039;&#039; is defined as a single system comprising a set of servers interconnected with a high-performance network. Specific software coordinates programs on and/or across them in order to perform computationally intensive tasks. The most common cluster type consists of several identical SMP servers connected via a fast interconnect.  Each SMP member of the cluster is called a &#039;&#039;&#039;node&#039;&#039;&#039;. All nodes run independent copies of the same operating system (OS). Some or all of the nodes may incorporate GPUs.  &lt;br /&gt;
&lt;br /&gt;
Hybrid clusters combine nodes of different architectures. For instance, the main CUNY-HPCC machine is a hybrid cluster called &#039;&#039;&#039;Arrow&#039;&#039;&#039;.  Sixty-two (62) of its nodes are identical GPU-enabled SMP servers, each with 2 K20m GPUs; 3 are SMP servers with extended memory (fat nodes); one node is a distributed shared memory node (NUMA, see below); and 2 are fat SMP servers specially designed to support 8 NVIDIA GPUs per node, connected via the SXM interface. In addition, HPCC operates the cluster &#039;&#039;&#039;Herbert&#039;&#039;&#039;, dedicated only to education. &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;distributed shared memory&#039;&#039;&#039; computer is a tightly coupled server in which the memory is physically distributed but logically unified as a single block. The system resembles an SMP, but the possible number of CPU cores and amount of memory are far beyond the limitations of an SMP.  Because the memory is distributed, access times across the address space are non-uniform; thus, this architecture is called Non-Uniform Memory Access (NUMA) architecture.  Similarly to SMP, &#039;&#039;&#039;NUMA&#039;&#039;&#039; systems are typically used for applications such as data mining and decision-support systems, in which processing can be parceled out to a number of processors that collectively work on common data. HPCC operates the &#039;&#039;&#039;NUMA&#039;&#039;&#039; node of Arrow, named &#039;&#039;&#039;Appel&#039;&#039;&#039;.  This node does not have GPUs. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039; Infrastructure systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
o	Master Head Node (&#039;&#039;&#039;MHN/Arrow&#039;&#039;&#039;) is a redundant login node from which all jobs on all servers start. This server is not directly accessible from outside the CSI campus. Note that the main server and its login nodes share the same name, Arrow; thus users can access the Arrow login nodes using the name Arrow or MHN.  &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Chizen&#039;&#039;&#039; is a redundant gateway server which provides access to the protected HPCC domain.&lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Cea&#039;&#039;&#039; is a file transfer node allowing the transfer of files between users’ computers and the /scratch space or /global/u/&amp;lt;userid&amp;gt;. &#039;&#039;&#039;Cea&#039;&#039;&#039; is accessible directly (not only via &#039;&#039;&#039;Chizen&#039;&#039;&#039;), but allows only a limited set of shell commands.  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Table 1&#039;&#039;&#039; below provides a quick summary of the attributes of each of the sub-clusters of the HPC Center&#039;s main machine, called Arrow.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Master Head Node&lt;br /&gt;
!Sub System&lt;br /&gt;
!Tier&lt;br /&gt;
!Type&lt;br /&gt;
!Type of Jobs&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores&lt;br /&gt;
!GPUs&lt;br /&gt;
!Mem/node&lt;br /&gt;
!Mem/core&lt;br /&gt;
!Chip Type&lt;br /&gt;
!GPU Type and Interface&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;17&amp;quot; |&#039;&#039;&#039;&amp;lt;big&amp;gt;Arrow&amp;lt;/big&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Penzias&lt;br /&gt;
| rowspan=&amp;quot;10&amp;quot; |Advanced&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Hybrid Cluster&lt;br /&gt;
|Sequential &amp;amp; Parallel jobs w/wo GPU&lt;br /&gt;
|66&lt;br /&gt;
|16 &lt;br /&gt;
|2&lt;br /&gt;
|64 GB&lt;br /&gt;
|4 GB&lt;br /&gt;
|SB, EP 2.20 GHz&lt;br /&gt;
|K20m GPU, PCIe v2&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Sequential &amp;amp; Parallel jobs&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |1&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|1500 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|36&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|32 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Appel&lt;br /&gt;
|NUMA&lt;br /&gt;
|Massive Parallel, sequential, OpenMP&lt;br /&gt;
|1&lt;br /&gt;
|384&lt;br /&gt;
| -&lt;br /&gt;
|11 TB&lt;br /&gt;
|28 GB&lt;br /&gt;
|IB, 3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Cryo&lt;br /&gt;
|SMP&lt;br /&gt;
|Sequential and Parallel jobs, with GPU&lt;br /&gt;
|1&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|1500 GB&lt;br /&gt;
|37 GB&lt;br /&gt;
|SL, 2.40 GHz&lt;br /&gt;
|V100 (32GB) GPU, SXM&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Blue Moon&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Hybrid Cluster&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Sequential and Parallel jobs w/wo GPU&lt;br /&gt;
|24&lt;br /&gt;
|32&lt;br /&gt;
| -&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |192 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |6 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SL, 2.10 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|32 &lt;br /&gt;
|2&lt;br /&gt;
|V100(16GB) GPU, PCIe&lt;br /&gt;
|-&lt;br /&gt;
|Karle&lt;br /&gt;
|SMP&lt;br /&gt;
|Visualization, MATLAB/Mathematica&lt;br /&gt;
|1&lt;br /&gt;
|36*&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Chizen&lt;br /&gt;
|Gateway&lt;br /&gt;
|No jobs allowed&lt;br /&gt;
| colspan=&amp;quot;7&amp;quot; | -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CFD&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot; |Parallel, Seq, OpenMP &lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|768 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 4.8 GHz&lt;br /&gt;
|A40, PCIe, v4 &lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |PHYS&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|640 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4 GHz&lt;br /&gt;
|L40, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CHEM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.0 GHz&lt;br /&gt;
|A100/40, SXM&lt;br /&gt;
|-&lt;br /&gt;
|ASRC&lt;br /&gt;
|Condo&lt;br /&gt;
|SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|}&lt;br /&gt;
Note: SB = Intel(R) Sandy Bridge, HL = Intel (R) Haswell, IB = Intel (R) Ivy Bridge, SL = Intel (R) Xeon(R) Gold, ER  = AMD(R) EPYC ROMA, EM = AMD(R) EPYC MILAN, EG = AMD (R) EPYC GENOA   &lt;br /&gt;
&lt;br /&gt;
== Recovery of  operational costs ==&lt;br /&gt;
&lt;br /&gt;
=== Compute on public resources ===&lt;br /&gt;
CUNY-HPCC is a not-for-profit core research facility at CUNY. Our mission is to support all types of research that require advanced computational resources. CUNY-HPCC operations are not for profit and are NOT directly or indirectly sponsored by CUNY or the College of Staten Island (CSI). Consequently, CUNY-HPCC applies a cost-recovery model that recaptures only &#039;&#039;&#039;&amp;lt;u&amp;gt;operational costs, with no profit for HPCC&amp;lt;/u&amp;gt;&#039;&#039;&#039;. The recovered costs are calculated from actual documented operational expenses and are break-even for all CUNY users. The methodology is the CUNY-RF-approved methodology used in other CUNY research facilities. The costs are reviewed, and updated accordingly, twice a year. The cost-recovery charging scheme is based on the &#039;&#039;&#039;&amp;lt;u&amp;gt;unit-hour&amp;lt;/u&amp;gt;&#039;&#039;&#039;. A unit can be either a CPU unit or a GPU unit; the definitions are given in the table below:&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Definitions of unit-hour&lt;br /&gt;
!Type of resource&lt;br /&gt;
!Unit-hour &lt;br /&gt;
!For V100, A30, A40 or L40&lt;br /&gt;
!For A100&lt;br /&gt;
|-&lt;br /&gt;
|CPU unit &lt;br /&gt;
|1 cpu core/hour&lt;br /&gt;
| --&lt;br /&gt;
| --&lt;br /&gt;
|-&lt;br /&gt;
|GPU unit &lt;br /&gt;
|(4 cpu cores + 1 GPU thread )/hour&lt;br /&gt;
|4 cpu cores + 1 GPU&lt;br /&gt;
|4 cpu cores and 1/7 A100 &lt;br /&gt;
|}&lt;br /&gt;
The cost recovery model for public (non-condominium) servers offers the following options:&lt;br /&gt;
&lt;br /&gt;
# Minimal Access Plan (MAP)&lt;br /&gt;
# Compute on Demand (CODP)&lt;br /&gt;
# Lease a node(s)&lt;br /&gt;
&lt;br /&gt;
Colleges may participate in any of the Minimal Access Plan tiers. The tiered pricing structure is as follows:&lt;br /&gt;
&lt;br /&gt;
- Tier A: $5,000 per year&lt;br /&gt;
&lt;br /&gt;
- Tier B: $10,000 per year&lt;br /&gt;
&lt;br /&gt;
- Tier C: $25,000 per year&lt;br /&gt;
&lt;br /&gt;
Within each tier, the cost is &#039;&#039;&#039;$0.015 per CPU hour and $0.09 per GPU hour.&#039;&#039;&#039; It is important to note that the Minimum Access Plan (MAP) tiers A, B, and C are &#039;&#039;&#039;not all-inclusive.&#039;&#039;&#039; Therefore, even if a college pays for a higher tier, this does not guarantee unlimited use by all college employees, faculty, and students throughout the year.&lt;br /&gt;
&lt;br /&gt;
# The MAP plan is indirectly linked to the number of hours used. Consequently, the definition of “up to 12 users for B tier” should not be interpreted as “all users up to 12 per college receive unlimited access to HPCC resources.”&lt;br /&gt;
# The number of users per tier is determined by statistical analysis of resource usage and statistics for the average duration of a job across all CUNY institutions. This means that if the number of users from a college exceeds the number of hours encoded in the MAP fee, the additional hours will be charged at the preferred rate of $0.015 per CPU hour and $0.09 per GPU thread hour.&lt;br /&gt;
# Furthermore, the &amp;lt;u&amp;gt;MAP fee may fully cover the expenses for individuals from a given college for a year,&amp;lt;/u&amp;gt; but it also may not. This depends on the actual usage and type of resources, as well as on the number of additional individuals from the same college who appear during the year. It is crucial to understand that allocation on HPCC is not restricted; our focus is solely on the completion of tasks. &lt;br /&gt;
# The free 11,520 CPU hours and 1,440 GPU hours are allocated per PI account and project and are available only under MAP-B and MAP-C (see below). These free hours are intended to facilitate project establishment. Consequently, the PI can ask that a member or members of the group use the time to explore new research opportunities. For instance, upon creating an account, PI X will receive the free hours; if the PI then hires graduate student(s), they may share these free hours as long as they work on the same project.&lt;br /&gt;
# It is important to note that each GPU requires at least four CPU cores to operate. Therefore, a request for one GPU thread equates to one GPU thread plus four CPU cores, which is equivalent to $0.15 per unit-hour for that unit (units are explained above; a worked sketch follows below). Note that not all GPUs support virtualization, so a unit may include the whole GPU, depending on the GPU type used.  &lt;br /&gt;
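&lt;br /&gt;
The arithmetic behind the GPU unit can be written out explicitly. The snippet below is an illustrative Python sketch only (not an HPCC tool): it uses the MAP rates quoted above and reproduces the $0.15 per unit-hour figure for one GPU unit, as well as the 16 cores + 2 GPU example row further below:&lt;br /&gt;
&lt;br /&gt;
 # Illustrative sketch: hourly cost of a job at the MAP rates quoted above.&lt;br /&gt;
 CPU_RATE = 0.015   # dollars per CPU core-hour&lt;br /&gt;
 GPU_RATE = 0.09    # dollars per GPU-hour&lt;br /&gt;
 &lt;br /&gt;
 def hourly_cost(cpu_cores, gpus):&lt;br /&gt;
     # Each GPU already implies at least 4 CPU cores, as explained above.&lt;br /&gt;
     cores = max(cpu_cores, 4 * gpus)&lt;br /&gt;
     return round(cores * CPU_RATE + gpus * GPU_RATE, 3)&lt;br /&gt;
 &lt;br /&gt;
 print(hourly_cost(4, 1))    # 0.15 : one GPU unit (4 cores + 1 GPU)&lt;br /&gt;
 print(hourly_cost(16, 2))   # 0.42 : matches the 16 cores + 2 GPU example row below&lt;br /&gt;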
&lt;br /&gt;
=== Compute on demand (CODP) ===&lt;br /&gt;
Users from colleges that do not participate in MAP (A,B,C) are charged &#039;&#039;&#039;$0.018 per CPU hour and $0.11 per GPU hour.&#039;&#039;&#039; There is no free time associated with CODP.  &lt;br /&gt;
&lt;br /&gt;
=== Lease a public node(s) ===&lt;br /&gt;
Users may lease node(s) for a project. This ensures 100% access and no time or job limitations on the leased resource. The minimum lease time is 30 days (one month). Longer leases (more than 90 days) receive a 10% discount. &#039;&#039;&#039;MAP users&#039;&#039;&#039; are charged between $172 and $950 per month (see below), depending on the type of node. &#039;&#039;&#039;Non-MAP&#039;&#039;&#039; users are charged between $249.82 and $1,399 per month, depending on the type of node. Please see below for details and examples. &lt;br /&gt;
&lt;br /&gt;
=== Condo resources ===&lt;br /&gt;
Condo users only pay for infrastructure support.  The annual fee depends on the type of node and ranges from $1,540 to $4,520 per year. Please see table below for details. &lt;br /&gt;
&lt;br /&gt;
=== Storage ===&lt;br /&gt;
Storage costs are $60 per TB per year, backup costs are $45 per TB per year, and archive costs are $35 per TB per year. The first 50 GB of scratch storage are free. Prices are calculated at the end of each month.&lt;br /&gt;
&lt;br /&gt;
# Additionally, note that &#039;&#039;&#039;file transfers from/to HPCC are free&#039;&#039;&#039;, but it is important to consider the CUNY network speed, which is significantly slower than modern standards; this will impact the time required to download large data sets. For large data sets, HPCC uses and recommends secure parallel transfer via Globus.&lt;br /&gt;
# All services associated with data and storage provided by HPCC are free for CUNY users.&lt;br /&gt;
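&lt;br /&gt;
As a rough illustration only (the monthly proration shown is an assumption, not a statement of HPCC billing practice), the yearly storage rates above translate into a monthly charge roughly as follows:&lt;br /&gt;
&lt;br /&gt;
 # Illustrative sketch: approximate monthly storage charge from the yearly rates above.&lt;br /&gt;
 STORAGE_RATE = 60.0   # dollars per TB per year&lt;br /&gt;
 BACKUP_RATE = 45.0    # dollars per TB per year&lt;br /&gt;
 ARCHIVE_RATE = 35.0   # dollars per TB per year&lt;br /&gt;
 &lt;br /&gt;
 def monthly_charge(stored_tb, backup_tb=0.0, archive_tb=0.0):&lt;br /&gt;
     # Assumed proration: one twelfth of the yearly rate for the capacity in use.&lt;br /&gt;
     yearly = stored_tb * STORAGE_RATE + backup_tb * BACKUP_RATE + archive_tb * ARCHIVE_RATE&lt;br /&gt;
     return round(yearly / 12.0, 2)&lt;br /&gt;
 &lt;br /&gt;
 print(monthly_charge(5, backup_tb=5))   # 43.75 : 5 TB stored and backed up, (300 + 225) / 12&lt;br /&gt;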
&lt;br /&gt;
=== HPCC access plans  ===&lt;br /&gt;
a.     &#039;&#039;&#039;Minimum access (MAP):&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Minimum access is designed to provide wide support for research activities in any college, to promote collaboration between colleges, to help establish a new research project, and/or to serve as a testbed for new studies. MAP accounts operate under a strict fair-share policy, so the actual waiting time for a job in a queue depends on the resources used by that account in previous cycles. In addition, all jobs have strict time limits, so long jobs must use checkpoints.&lt;br /&gt;
&lt;br /&gt;
The MAP has 3 tiers:&lt;br /&gt;
&lt;br /&gt;
·     A: The basic tier fee is $5000 per year. It is designed to provide support for users from colleges with a low level of research activity. The fee covers the infrastructure expenses associated with 1-2 users from these colleges. &lt;br /&gt;
&lt;br /&gt;
·     B: The medium tier fee is $15,000 per year. The fee covers the infrastructure expenses of up to 12 users from these colleges. In addition, every account under the medium tier gets 11520 free CPU hours and 1440 free GPU hours upon opening. &lt;br /&gt;
&lt;br /&gt;
·     C: The advanced tier fee is $25,000 per year. The fee covers the infrastructure expenses for all users from these colleges. In addition, every new account under this tier gets 11520 free CPU hours and 1440 free GPU hours upon opening.  &lt;br /&gt;
&lt;br /&gt;
MAP users are charged per CPU/GPU hour at the low rate of &#039;&#039;&#039;&amp;lt;u&amp;gt;$0.015 per CPU hour and $0.09 per GPU hour&amp;lt;/u&amp;gt;&#039;&#039;&#039;.   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for MAP users example&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.015/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.24/hour&lt;br /&gt;
|-&lt;br /&gt;
| 4 cores + 1 GPU &lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.15/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.33/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2  &lt;br /&gt;
|$0.42/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2  &lt;br /&gt;
|$0.66/hour&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$1.32/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
b.     &#039;&#039;&#039;Computing on demand (CODP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The Computing on Demand plan (CODP) is open to all users from CUNY colleges that do not participate in the MAP plan but want to use the HPCC resources. CODP accounts operate under a strict fair-share policy, so the actual waiting time for a job in a queue depends on the resources previously used. In addition, all jobs have time limits, so long jobs must use checkpoints. Users in CODP are charged for the time (CPU and GPU) per hour. The current rates are &#039;&#039;&#039;$0.018 per CPU hour and $0.11 per GPU hour.&#039;&#039;&#039; Unlike MAP, new CODP accounts do not come with free time. Invoices are generated and sent to users (PI only) at the end of each month. The examples in the following table illustrate the fee structure:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for CODP plan&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.018/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.288/hour&lt;br /&gt;
|-&lt;br /&gt;
|4 cores + 1 GPU&lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.293/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.334/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 1 GPU&lt;br /&gt;
|32&lt;br /&gt;
|1  &lt;br /&gt;
|$0.666/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU &lt;br /&gt;
|32&lt;br /&gt;
|2&lt;br /&gt;
|$0.756/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
c.  &#039;&#039;&#039;Leasing public node(s) (LNP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The Leasing Node Plan allows users to lease node(s) for the duration of a project. The minimum lease time is 30 days (one month), but leases of any length are possible. Discounts of 10% are given to users whose lease is longer than 90 days; discounts cannot be combined. Unlike MAP and CODP, LNP users do not compete for resources and have full access to the rented resources 24/7.   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for lease node(s) for MAP users&lt;br /&gt;
|Job (MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/30 days&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$172.80&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$264.96&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2&lt;br /&gt;
|$302.40&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$475.20&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$760.00&lt;br /&gt;
|-&lt;br /&gt;
|64 cores + 8 GPU&lt;br /&gt;
|64&lt;br /&gt;
|8&lt;br /&gt;
|$950.40&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for lease a node(s) for &amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; non &amp;lt;/span&amp;gt;-MAP users&lt;br /&gt;
|Job (non-MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/month&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$249.82&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$497.64&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2 &lt;br /&gt;
|$443.23&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$886.64&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40 &lt;br /&gt;
|8&lt;br /&gt;
|$1399.68&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
d.     &#039;&#039;&#039;Condo Ownership (COP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Condo describes a model when user(s) own a node/server managed by HPCC. Only full time faculty can own condo node. Condo nodes are fully integrated into HPCC infrastructure. The owners pay only HPCC’s infrastructure support operational fee which includes only proportional part of licenses and materials need for day-to-day operations. The fees are reviewed twice a year and currently are $0.003 per CPU hour and $0.02 per GPU hour. Condo owners can “borrow”  (upon agreement) free of charge any node(s) from condo stack and can also lease (for higher fee – see below) their own nodes to non-condo users. The minimum let time is 30 days. The fees collected from non-condo users offset payments of the owner.   &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Condo owners costs per year&lt;br /&gt;
|Type of condo node&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/year&lt;br /&gt;
|-&lt;br /&gt;
|Large hybrid SXM&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|$4518.92 &lt;br /&gt;
|-&lt;br /&gt;
|Small hybrid&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|$1540.54&lt;br /&gt;
|-&lt;br /&gt;
|Medium compute&lt;br /&gt;
|96&lt;br /&gt;
|0&lt;br /&gt;
|$2464.86&lt;br /&gt;
|-&lt;br /&gt;
|Large compute&lt;br /&gt;
|128&lt;br /&gt;
|0&lt;br /&gt;
|$3286.49&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Condo owners can contract their node(s) to other non-condo users. Renting period is unlimited with min. length of 30 days. The table below shows the payments the non-condo users recompense the condo owners. These fees are accumulated in owners account(s) and do offset the owner’s duties. Discount of 10% is applied for leases longer than 90 days.    &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Type of nodes and monthly lease fees for condo nodes&lt;br /&gt;
!Type of node&lt;br /&gt;
!Renters cost/month&lt;br /&gt;
!Long term (90+ days) rent cost/month&lt;br /&gt;
!CPU/node&lt;br /&gt;
!CPU type&lt;br /&gt;
!GPU/node &lt;br /&gt;
!GPU type&lt;br /&gt;
!GPU interface&lt;br /&gt;
|-&lt;br /&gt;
|Large Hybrid&lt;br /&gt;
|$602.52&lt;br /&gt;
|$564.86&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.2 GHz&lt;br /&gt;
|8&lt;br /&gt;
|A100/80&lt;br /&gt;
|SXM&lt;br /&gt;
|-&lt;br /&gt;
|Small Hybrid&lt;br /&gt;
|$205.41&lt;br /&gt;
|$192.57&lt;br /&gt;
|48&lt;br /&gt;
|EPYC, 2.8 GHz&lt;br /&gt;
|2&lt;br /&gt;
|A40, A30, L40&lt;br /&gt;
|PCIe v4&lt;br /&gt;
|-&lt;br /&gt;
|Medium Non GPU&lt;br /&gt;
|$328.65&lt;br /&gt;
|$308.11&lt;br /&gt;
|96&lt;br /&gt;
|EPYC, 4.11GHz&lt;br /&gt;
|0&lt;br /&gt;
|None&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|Large Non GPU&lt;br /&gt;
|$438.20&lt;br /&gt;
|$410.81&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.0 GHz&lt;br /&gt;
|0&lt;br /&gt;
|None &lt;br /&gt;
|NA&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Free time ===&lt;br /&gt;
Any new project &#039;&#039;&#039;from CUNY colleges/centers that participate in MAP-B or  MAP-C&#039;&#039;&#039; is entitled to get free &#039;&#039;&#039;11520 CPU hours and 1440 GPU hours.&#039;&#039;&#039;  Users under &#039;&#039;&#039;MAP-A&#039;&#039;&#039; are not entitled for free time. The free compute hours are intended to help to establish a project and thus are shared for all members of the project. Thus compute free hours can be used either by PI  or by any number of project&#039;s members. It is important to note that &amp;lt;u&amp;gt;free time is per project not per user account, so any project can have free time only once. External collaborators to CUNY are not normally eligible for free time.&amp;lt;/u&amp;gt; Additional hours beyond free time are charged with MAP plan rates. &#039;&#039;&#039;&amp;lt;u&amp;gt;Please contact CUNY-HPCC director for  further details.&amp;lt;/u&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Support for research grants ==&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;u&amp;gt;All proposals dated on Jan 1st 2026 (&amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; 01/01/26 &amp;lt;/span&amp;gt;) and later&amp;lt;/u&amp;gt;&#039;&#039;&#039; that require computational resources &#039;&#039;&#039;&amp;lt;u&amp;gt;must include budget for cost recovery fees at CUNY-HPCC.&amp;lt;/u&amp;gt;&#039;&#039;&#039;  For a project the PI can choose between: &lt;br /&gt;
&lt;br /&gt;
* lease the node(s), That is useful option for well defined projects and those with high computational component requiring 100% availability of the computational resource. &lt;br /&gt;
* use &amp;quot;on-demand&amp;quot; resources. That is flexible option good for experimental projects or exploring new areas of study. The downgrade is that resources are shared among all users under fair share policy. Thus immediate access to resource cannot be guaranteed. &lt;br /&gt;
* participate in CONDO  tier. That is most beneficial option in terms of availability of resources and level of support. It fits best the focused research of group(s) (e.g. materials science). &lt;br /&gt;
&lt;br /&gt;
In all cases the PI can use the appropriate rates listed above to establish correct budget for the proposal.  PI should  &#039;&#039;&#039;&amp;lt;u&amp;gt;contact the Director of CUNY-HPCC Dr. Alexander Tzanov&amp;lt;/u&amp;gt;&#039;&#039;&#039;  (alexander.tzanov@csi.cuny.edu) and discuss  the project&#039;s computational  requirements  including optimal and most economical computational workflows, suitable hardware, shared or own resources, CUNY-HPCC support options and any other matter concerning  correct and optimal computational budget for the proposal.    &lt;br /&gt;
&lt;br /&gt;
== Partitions and jobs ==&lt;br /&gt;
The only way to submit job(s) to HPCC servers is through SLURM batch system.  Any  job despite of its type (interactive, batch, serial, parallel etc.) must be submitted via SLURM. The latter allocates the requested resources on proper server and starts the job(s) according to predefined strict fair share policy. Computational resources (cpu-cores, memory, GPU) are organized in &#039;&#039;&#039;partitions&#039;&#039;&#039;. The table below describes the partitions and their limitations. The users are granted permissions house one or other partition and corresponding QOS key.   The table below shows the limitations of the partitions (in progress).&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Partition&lt;br /&gt;
!Max cores/job&lt;br /&gt;
!Max jobs/user&lt;br /&gt;
!Total cores/group&lt;br /&gt;
!Time limits&lt;br /&gt;
!Tier&lt;br /&gt;
!&lt;br /&gt;
!GPU types&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|partnsf&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|K20m, V100/16, A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partchem&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A100/80, A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partcfd&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partsym&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partasrc&lt;br /&gt;
|48&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabD&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|V100/16,A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabN&lt;br /&gt;
|384&lt;br /&gt;
|50&lt;br /&gt;
|384&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|None&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partphys&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|L40&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;partnsf&#039;&#039;&#039; is the main partition with assigned resources across all sub-servers. Users may submit sequential, thread parallel or distributed parallel jobs with or without GPU.&lt;br /&gt;
* &#039;&#039;&#039;partchem&#039;&#039;&#039;  is CONDO partition.  &lt;br /&gt;
* &#039;&#039;&#039;partphys&#039;&#039;&#039;   is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partsym&#039;&#039;&#039;    is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partasrc&#039;&#039;&#039;   is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partmatlabD&#039;&#039;&#039; partition allows to run MATLAB&#039;s Distributes Parallel  Server across main cluster. &lt;br /&gt;
* &#039;&#039;&#039;partmatlabN&#039;&#039;&#039; partition to access large matlab node with 384 cores and 11 TB of shared memory. It is useful to run parallel Matlab jobs with Parallel ToolBox&lt;br /&gt;
* &#039;&#039;&#039;partdev&#039;&#039;&#039; is dedicated to development. All HPCC users have access to this partition with assigned resources of one computational node with 16 cores, 64 GB of memory and 2 GPU (K20m). This partition has time limit of 4 hours.&lt;br /&gt;
&lt;br /&gt;
== Hours of Operation ==&lt;br /&gt;
In order to maximize the use of resources HPCC applies “rolling” maintenance scheme across all systems. When downtime is needed, HPCC will notify all users a week or more in advance (unless emergency situation occur).  Typically, the fourth Tuesday mornings in the month from 8:00AM to 12PM is normally reserved (but not always used) for scheduled maintenance.  Please plan accordingly.  Unplanned maintenance to remedy system related problems may be scheduled as needed out of above mentioned days. Reasonable attempts will be made to inform users running on those systems when these needs arise. Note that users are strongly encouraged to use checkpoints in their jobs.&lt;br /&gt;
&lt;br /&gt;
== User Support ==&lt;br /&gt;
Users are strongly encouraged to read this Wiki carefully before submitting ticket(s) for help. In particular, the sections on compiling and running parallel programs, and the section on the SLURM batch queueing system will give you the essential knowledge needed to use the CUNY HPCC systems.  We have strived to maintain the most uniform user applications environment possible across the Center&#039;s systems to ease the transfer of applications&lt;br /&gt;
and run scripts among them.  &lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY&lt;br /&gt;
community in parallel programming techniques, HPC computing architecture, and the essentials of using our&lt;br /&gt;
systems. Please follow our mailings on the subject and feel free to inquire about such courses.  We regularly schedule training visits and classes at the various CUNY campuses.  Please let us know if such a training visit is of interest.  In the past, topics have include an overview of parallel programming, GPU programming and architecture, using the evolutionary biology software at the HPC Center,  the SLURM queueing system at the CUNY HPC Center, Mixed GPU-MPI and OpenMP programming, etc.  Staff has also presented guest lectures at formal classes throughout the CUNY campuses.    &lt;br /&gt;
&lt;br /&gt;
If you have problems accessing your account and cannot login to the ticketing service, please send an email to:&lt;br /&gt;
&lt;br /&gt;
  [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu] &lt;br /&gt;
&lt;br /&gt;
== Warnings and modes of operation ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. hpchelp@csi.cuny.edu is for questions and accounts help communication &#039;&#039;&#039;only&#039;&#039;&#039; and does not accept tickets unless ticketing system is not operational. For tickets please use  the ticketing system mentioned above. This ensures that the person on staff with the most appropriate skill set and job related responsibility will respond to your questions. During the business week you should expect a 48h response, quite  often even same day response. During the weekend you may not get any response. &lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;&#039;E-mails to hpchelp@csi.cuny.edu must have a valid CUNY e-mail as reply address.&#039;&#039;&#039; Messages originated from public mailers (google, hotmail, etc) are filtered out.&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;&#039;Do not send questions to individual CUNY HPC Center staff members directly.&#039;&#039;&#039;  These will be returned to the sender with a polite request to submit a ticket or email the Helpline.  This applies to replies to initial questions as well.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff members are focused on providing high quality support to its user community, but compared&lt;br /&gt;
to other HPC Centers of similar size &#039;&#039;&#039;our staff is extremely lean&#039;&#039;&#039;.  Please make full use of the tools that we have provided (especially&lt;br /&gt;
the Wiki), and feel free to offer suggestions for improved service.  We hope and expect your experience in using&lt;br /&gt;
our systems will be predictably good and productive.&lt;br /&gt;
&lt;br /&gt;
== User Manual ==&lt;br /&gt;
&lt;br /&gt;
The old version of the user manual provides PBS not SLURM batch scripts as examples. Currently CUNY-HPCC uses SLURM scheduler so users must check and use only the updated brief SLURM manual distributed with new accounts or ask CUNY-HPCC for a copy of the latter.&lt;/div&gt;</summary>
		<author><name>Alex</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=979</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=979"/>
		<updated>2026-03-14T15:12:04Z</updated>

		<summary type="html">&lt;p&gt;Alex: /* Organization of systems and data storage (architecture) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
[[Image:hpcc-panorama3.png]]&lt;br /&gt;
&lt;br /&gt;
== Mission of CUNY-HPCC ==&lt;br /&gt;
The City University of New York (CUNY) High Performance Computing Center (HPCC) is a core research and education facility for the University. It is located at campus of the college of College of Staten Island, 2800 Victory Boulevard, Staten Island, New York 10314. The core mission of CUNY-HPCC is to advance education and boost scientific research and discovery at the University by managing state-of-the-art computing infrastructure and by providing comprehensive research support services including domain-specific expertise in various aspects of computationally intensive research. CUNY is a member of Empire AI (EAI) consortium, so CUNY-HPCC is a stepping stone for CUNY researchers toward EAI advanced facilities.  &lt;br /&gt;
&lt;br /&gt;
== Empire AI and CUNY-HPCC ==&lt;br /&gt;
The Empire AI consortium includes the &#039;&#039;&#039;CUNY Graduate Center&#039;&#039;&#039;, Columbia University, Cornell University, Icahn School of Medicine, New York University, Rochester Institute of Technology, Rensselaer Polytechnic Institute, the State University of New York, University at Buffalo, and University of Rochester.  &#039;&#039;&#039;CUNY-HPCC provides supports and maintains tickets for all CUNY users with allocation on EAI.  In addition CUNY-HPCC is a stepping stone for CUNY researchers since it operates (on smaller scale) architectures (Including nodes with Hopper) similar to EAI, including extended &amp;quot;Alpha&amp;quot; server as well new &amp;quot;Beta&amp;quot; computer.&#039;&#039;&#039; The latter will consist of 288 B200 GPU and recently added RTX 6000 pro nodes. The expected cost for &#039;&#039;&#039;EAI is $0.50/unit (SU),&#039;&#039;&#039; which will provide CUNY PIs with a rate that is well below a typical AWS rate.  One SU corresponds to one hour of H100 compute, and one hour of B200 compute corresponds to two SU.  In comparison the CUNY-HPCC recovery costs for public servers are &#039;&#039;&#039;$0.015 per CPU hour (1 unit) and $0.09 per GPU hour (6 units).&#039;&#039;&#039;  See section HPCC access plans for further details.    &lt;br /&gt;
&lt;br /&gt;
== CUNY-HPCC services ==&lt;br /&gt;
CUNY-HPCC provides a professionally maintained, modern computational environment and architectures, advanced storage, and fast interconnects. CUNY-HPCC:     &lt;br /&gt;
&lt;br /&gt;
:*Supports research computing at CUNY for faculty, their collaborators at other universities, their public and private sector partners, and CUNY students and research staff.&lt;br /&gt;
:*Provides state-of-the-art computing resources and comprehensive research support services including expertise and full support for users with allocation on EMPIRE-AI. &lt;br /&gt;
:*Creates opportunities for the CUNY research community to develop new partnerships with the government and private sectors.&lt;br /&gt;
:*Leverages HPCC capabilities to acquire additional research resources for its faculty and graduate students in existing and major new programs.&lt;br /&gt;
:*Maintains tickets for all CUNY users with allocation on EAI.&lt;br /&gt;
&lt;br /&gt;
= Organization of systems and data storage (architecture) =&lt;br /&gt;
All user data and project data are kept on the Parallel File System Storage (PFSS), which is mounted only on the login node(s) of all servers. It holds both user directories and a specific partition called &#039;&#039;&#039;/scratch&#039;&#039;&#039; (see below). The main features of the &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition are: &#039;&#039;&#039;1.&#039;&#039;&#039; it is mounted on all computational nodes and on all login nodes; &#039;&#039;&#039;2.&#039;&#039;&#039; it is fast; &#039;&#039;&#039;3.&#039;&#039;&#039; it is temporary space; it is &#039;&#039;&#039;not a home directory&#039;&#039;&#039; for accounts, nor can it be used for long-term data preservation.  Users must use the &amp;quot;staging&amp;quot; procedure described below to ensure preservation of their data, codes and parameter files. The figure below is a schematic of the environment.   &lt;br /&gt;
&lt;br /&gt;
Upon registering with HPCC, every user will get two directories:&lt;br /&gt;
&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – this is temporary workspace on the HPC systems. Currently, /scratch resides on the same file system as /global/u.&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – space for the “home directory”, i.e., storage space on the DSMS for programs, scripts, and data&lt;br /&gt;
&lt;br /&gt;
:•	In some instances a user will also have use of disk space on the DSMS in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;projectid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; (iRODS). &lt;br /&gt;
[[File:HPCC_structure.png|center|frameless|900x900px]]&lt;br /&gt;
The &#039;&#039;&#039;/global/u/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory has a quota (see below for details), while the &#039;&#039;&#039;/scratch/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory does not. However, the &#039;&#039;&#039;/scratch&#039;&#039;&#039; space is cleaned up following the rules described below. There are no guarantees of any kind that files in &#039;&#039;&#039;/scratch&#039;&#039;&#039; will be preserved during hardware crashes or cleanup.  Access to all HPCC resources is provided through a bastion host called &#039;&#039;&#039;chizen&#039;&#039;&#039;.  The Data Transfer Node, called &#039;&#039;&#039;Cea&#039;&#039;&#039;, allows file transfer from/to remote sites directly to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; or &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;.&lt;br /&gt;
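&lt;br /&gt;
The staging workflow can be illustrated with a short sketch. This is an illustration only, assuming standard &#039;&#039;scp&#039;&#039; access through the &#039;&#039;&#039;Cea&#039;&#039;&#039; Data Transfer Node; the hostname and userid below are placeholders to be replaced with the values supplied with your account.&lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;
# Minimal staging sketch (illustration only): push inputs to the permanent /global/u
# area through the Cea DTN, and pull results back from /scratch after a run.
# CEA_HOST and USERID are placeholders, not official values.
import subprocess

CEA_HOST = "cea.example.edu"   # placeholder hostname for the Cea DTN
USERID   = "my_userid"         # placeholder HPCC userid

# 1. Copy input data from the local workstation to the home area on the DSMS.
subprocess.run(
    ["scp", "-r", "input_data/",
     f"{USERID}@{CEA_HOST}:/global/u/{USERID}/project1/"],
    check=True,
)

# 2. After the job finishes, retrieve results from the temporary /scratch area.
subprocess.run(
    ["scp", "-r",
     f"{USERID}@{CEA_HOST}:/scratch/{USERID}/project1/results/",
     "./results/"],
    check=True,
)
&lt;/pre&gt;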
&lt;br /&gt;
==HPC systems==&lt;br /&gt;
&lt;br /&gt;
The HPC Center operates a variety of architectures in order to support complex and demanding workflows.  All computational resources of different types are united into a single hybrid cluster called Arrow. The latter deploys symmetric multiprocessor (SMP) nodes with and without GPUs, a distributed shared memory (NUMA) node, fat (large memory) nodes, and advanced SMP nodes with multiple GPUs. The number of GPUs per node varies between 2 and 8, as do the GPU interface and GPU family. Thus the basic GPU nodes hold two Tesla K20m cards (attached through a PCIe interface), while the most advanced ones support eight Ampere A100 GPUs connected via the SXM interface.   &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Overview of Computational architectures&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SMP&#039;&#039;&#039; servers have several processors (working under a single operating system) which &amp;quot;share everything&amp;quot;: all CPU cores access a common memory block via a shared bus or data path. SMP servers support all combinations of memory versus CPU (up to the limits of the particular computer). SMP servers are commonly used to run sequential or thread-parallel (e.g. OpenMP) jobs, and they may or may not have GPUs.  &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;cluster&#039;&#039;&#039; is defined as a single system comprising a set of servers interconnected with a high-performance network. Specific software coordinates programs on and/or across these servers in order to perform computationally intensive tasks. The most common cluster type consists of several identical SMP servers connected via a fast interconnect.  Each SMP member of the cluster is called a &#039;&#039;&#039;node&#039;&#039;&#039;. All nodes run independent copies of the same operating system (OS). Some or all of the nodes may incorporate GPUs.  &lt;br /&gt;
&lt;br /&gt;
Hybrid clusters combine nodes of different architectures. For instance, the main CUNY-HPCC machine is a hybrid cluster called &#039;&#039;&#039;Arrow&#039;&#039;&#039;.  Sixty-two (62) of its nodes are identical GPU-enabled SMP servers, each with 2 K20m GPUs; 3 are SMP nodes with extended memory (fat nodes); one node is a distributed shared memory node (NUMA, see below); and 2 are fat SMP servers specially designed to support 8 NVIDIA GPUs per node, connected via the SXM interface. In addition, HPCC operates the cluster &#039;&#039;&#039;Herbert&#039;&#039;&#039;, dedicated only to education. &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;distributed shared memory&#039;&#039;&#039; computer is a tightly coupled server in which the memory is physically distributed but logically unified as a single block. The system resembles an SMP, but the possible number of CPU cores and amount of memory are far beyond the limitations of an SMP.  Because the memory is distributed, access times across the address space are non-uniform; thus, this architecture is called Non-Uniform Memory Access (NUMA) architecture.  Similarly to SMP, &#039;&#039;&#039;NUMA&#039;&#039;&#039; systems are typically used for applications such as data mining and decision support systems, in which processing can be parceled out to a number of processors that collectively work on common data. HPCC operates the &#039;&#039;&#039;NUMA&#039;&#039;&#039; node of Arrow, named &#039;&#039;&#039;Appel&#039;&#039;&#039;.  This node does not have GPUs. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039; Infrastructure systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
o	The Master Head Node (&#039;&#039;&#039;MHN/Arrow&#039;&#039;&#039;) is a redundant login node from which all jobs on all servers start. This server is not directly accessible from outside the CSI campus. Note that the name of the main server and of its login nodes is the same, Arrow; thus users can access the Arrow login nodes using the name Arrow or MHN.  &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Chizen&#039;&#039;&#039; is a redundant gateway server which provides access to the protected HPCC domain.&lt;br /&gt;
&lt;br /&gt;
o	 &#039;&#039;&#039;Cea&#039;&#039;&#039; is a file transfer node allowing transfer of files between users’ computers and the /scratch space or /global/u/&amp;lt;userid&amp;gt;. &#039;&#039;&#039;Cea&#039;&#039;&#039; is accessible directly (not only via &#039;&#039;&#039;Chizen&#039;&#039;&#039;), but allows only a limited set of shell commands.  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Table 1&#039;&#039;&#039; below provides a quick summary of the attributes of each of the sub-clusters of the main HPC Center machine, called Arrow.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Master Head Node&lt;br /&gt;
!Sub System&lt;br /&gt;
!Tier&lt;br /&gt;
!Type&lt;br /&gt;
!Type of Jobs&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores&lt;br /&gt;
!GPUs&lt;br /&gt;
!Mem/node&lt;br /&gt;
!Mem/core&lt;br /&gt;
!Chip Type&lt;br /&gt;
!GPU Type and Interface&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;17&amp;quot; |&#039;&#039;&#039;&amp;lt;big&amp;gt;Arrow&amp;lt;/big&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Penzias&lt;br /&gt;
| rowspan=&amp;quot;10&amp;quot; |Advanced&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Hybrid Cluster&lt;br /&gt;
|Sequential &amp;amp; Parallel jobs w/wo GPU&lt;br /&gt;
|66&lt;br /&gt;
|16 &lt;br /&gt;
|2&lt;br /&gt;
|64 GB&lt;br /&gt;
|4 GB&lt;br /&gt;
|SB, EP 2.20 GHz&lt;br /&gt;
|K20m GPU, PCIe v2&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Sequential &amp;amp; Parallel jobs&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |1&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|1500 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|36&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|32 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Appel&lt;br /&gt;
|NUMA&lt;br /&gt;
|Massive Parallel, sequential, OpenMP&lt;br /&gt;
|1&lt;br /&gt;
|384&lt;br /&gt;
| -&lt;br /&gt;
|11 TB&lt;br /&gt;
|28 GB&lt;br /&gt;
|IB, 3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Cryo&lt;br /&gt;
|SMP&lt;br /&gt;
|Sequential and Parallel jobs, with GPU&lt;br /&gt;
|1&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|1500 GB&lt;br /&gt;
|37 GB&lt;br /&gt;
|SL, 2.40 GHz&lt;br /&gt;
|V100 (32GB) GPU, SXM&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Blue Moon&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Hybrid Cluster&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Sequential and Parallel jobs w/wo GPU&lt;br /&gt;
|24&lt;br /&gt;
|32&lt;br /&gt;
| -&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |192 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |6 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SL, 2.10 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|32 &lt;br /&gt;
|2&lt;br /&gt;
|V100(16GB) GPU, PCIe&lt;br /&gt;
|-&lt;br /&gt;
|Karle&lt;br /&gt;
|SMP&lt;br /&gt;
|Visualization, MATLAB/Mathematica&lt;br /&gt;
|1&lt;br /&gt;
|36*&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Chizen&lt;br /&gt;
|Gateway&lt;br /&gt;
|No jobs allowed&lt;br /&gt;
| colspan=&amp;quot;7&amp;quot; | -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CFD&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot; |Parallel, Seq, OpenMP &lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|768 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 4.8 GHz&lt;br /&gt;
|A40, PCIe, v4 &lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |PHYS&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|640 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4 GHz&lt;br /&gt;
|L40, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CHEM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.0 GHz&lt;br /&gt;
|A100/40, SXM&lt;br /&gt;
|-&lt;br /&gt;
|ASRC&lt;br /&gt;
|Condo&lt;br /&gt;
|SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|}&lt;br /&gt;
Note: SB = Intel(R) Sandy Bridge, HL = Intel(R) Haswell, IB = Intel(R) Ivy Bridge, SL = Intel(R) Xeon(R) Gold, ER = AMD(R) EPYC Rome, EM = AMD(R) EPYC Milan, EG = AMD(R) EPYC Genoa   &lt;br /&gt;
&lt;br /&gt;
== Recovery of  operational costs ==&lt;br /&gt;
&lt;br /&gt;
=== Compute on public resources ===&lt;br /&gt;
CUNY-HPCC is a not-for-profit core research facility at CUNY. Our mission is to support all types of research that require advanced computational resources. CUNY-HPCC operations are not for profit and are NOT directly or indirectly sponsored by CUNY or the College of Staten Island (CSI). Consequently, CUNY-HPCC applies a cost recovery model recapturing only &#039;&#039;&#039;&amp;lt;u&amp;gt;operational costs, with no profit for HPCC&amp;lt;/u&amp;gt;&#039;&#039;&#039;. The recovered costs are calculated using actual documented operational expenses and are break-even for all CUNY users. The methodology follows the CUNY-RF approved methodology used in other CUNY research facilities. The costs are reviewed and, if necessary, updated twice a year. The cost recovery charging schema is based on the &#039;&#039;&#039;&amp;lt;u&amp;gt;unit-hour&amp;lt;/u&amp;gt;&#039;&#039;&#039;. The unit can be either a CPU unit or a GPU unit. The definitions of these are given in the table below:&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Definitions of unit-hour&lt;br /&gt;
!Type of resource&lt;br /&gt;
!Unit-hour &lt;br /&gt;
!For V100, A30, A40 or L40&lt;br /&gt;
!For A100&lt;br /&gt;
|-&lt;br /&gt;
|CPU unit &lt;br /&gt;
|1 cpu core/hour&lt;br /&gt;
| --&lt;br /&gt;
| --&lt;br /&gt;
|-&lt;br /&gt;
|GPU unit &lt;br /&gt;
|(4 cpu cores + 1 GPU thread )/hour&lt;br /&gt;
|4 cpu cores + 1 GPU&lt;br /&gt;
|4 cpu cores and 1/7 A100 &lt;br /&gt;
|}&lt;br /&gt;
The cost recovery model for public (non-condominium) servers offers the following options:&lt;br /&gt;
&lt;br /&gt;
# Minimal Access Plan (MAP)&lt;br /&gt;
# Compute on Demand (CODP)&lt;br /&gt;
# Lease a node(s)&lt;br /&gt;
&lt;br /&gt;
Colleges may participate in any of the Minimal Access Plan tiers. The tiered pricing structure is as follows:&lt;br /&gt;
&lt;br /&gt;
- Tier A: $5,000 per year&lt;br /&gt;
&lt;br /&gt;
- Tier B: $10,000 per year&lt;br /&gt;
&lt;br /&gt;
- Tier C: $25,000 per year&lt;br /&gt;
&lt;br /&gt;
Within each tier, the cost is &#039;&#039;&#039;$0.015 per CPU hour and $0.09 per GPU hour.&#039;&#039;&#039; It is important to note that the Minimum Access Plan (MAP) tiers A, B, and C are &#039;&#039;&#039;not all-inclusive.&#039;&#039;&#039; Therefore, even if a college pays for a higher tier, it does not guarantee unlimited use by all college employees, faculty, and students throughout the year.&lt;br /&gt;
&lt;br /&gt;
# The MAP plan is indirectly linked to the number of hours used. Consequently, the definition of “up to 12 users for B tier” should not be interpreted as “all users up to 12 per college receive unlimited access to HPCC resources.”&lt;br /&gt;
# The number of users per tier is determined by statistical analysis of resource usage and statistics for the average duration of a job across all CUNY institutions. This means that if the number of users from a college exceeds the number of hours encoded in the MAP fee, the additional hours will be charged at the preferred rate of $0.015 per CPU hour and $0.09 per GPU thread hour.&lt;br /&gt;
# Furthermore, the &amp;lt;u&amp;gt;MAP fee may fully cover the expenses for individuals from a given college for a year,&amp;lt;/u&amp;gt; but it may also not. This depends on the actual usage and type of resources, as well as the number of additional individuals from the same college who appear during the year. It is crucial to understand that allocation on HPCC is not restricted; our focus is solely on the completion of tasks. &lt;br /&gt;
# The free 11,520 CPU hours and 1,440 GPU hours are allocated per PI account and project and are available only for MAP-B and MAP-C (see below). These free hours are intended to facilitate project establishment. Consequently, the PI can request that a member or members of the group utilize the time to explore new research opportunities. For instance, upon creating an account, PI X will receive free hours. If the PI then hires graduate students, they may share these free hours if they work on the same project.&lt;br /&gt;
# It is important to note that each GPU requires at least four CPU cores to operate. Therefore, if a user requests one GPU thread, it equates to one GPU thread and four CPU threads, which is equivalent to $0.15 per unit hour for that unit (units are explained above). Note that not all GPUs support virtualization, so a unit may include the whole GPU depending on the GPU type used.  &lt;br /&gt;
&lt;br /&gt;
=== Compute on demand (CODP) ===&lt;br /&gt;
Users from colleges that do not participate in MAP (A,B,C) are charged &#039;&#039;&#039;$0.018 per CPU hour and $0.11 per GPU hour.&#039;&#039;&#039; There is no free time associated with CODP.  &lt;br /&gt;
&lt;br /&gt;
=== Lease a public node(s) ===&lt;br /&gt;
Users may lease node(s) for a project. That ensures them 100% access and no time or job limitations on the leased resources. The minimum lease time is 30 days (one month). Longer leases (more than 90 days) have a 10% discount. &#039;&#039;&#039;MAP users&#039;&#039;&#039; are charged between $172 and $950 per month (see below), depending on the type of node. &#039;&#039;&#039;Non-MAP&#039;&#039;&#039; users are charged between $249.82 and $1,399 per month, depending on the type of node. Please see below for details and examples. &lt;br /&gt;
&lt;br /&gt;
=== Condo resources ===&lt;br /&gt;
Condo users only pay for infrastructure support.  The annual fee depends on the type of node and ranges from $1,540 to $4,520 per year. Please see table below for details. &lt;br /&gt;
&lt;br /&gt;
=== Storage ===&lt;br /&gt;
Storage costs are $60 per TB per year, backup costs are $45 per TB per year, and archive costs are $35 per TB per year. The first 50 GB of scratch storage are free. Prices are calculated at the end of each month.&lt;br /&gt;
&lt;br /&gt;
# Additionally, note that &#039;&#039;&#039;file transfers from/to HPCC are free&#039;&#039;&#039;, but it is important to consider the CUNY network speed, which is significantly slower than modern standards.  This will impact the time required to download large data sets. For large data sets, HPCC utilizes and recommends secure parallel transfers via Globus.&lt;br /&gt;
# All services associated with data and storage provided by HPCC are free for CUNY users.&lt;br /&gt;
&lt;br /&gt;
=== HPCC access plans  ===&lt;br /&gt;
a.     &#039;&#039;&#039;Minimum access (MAP):&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Minimum access is designed to provide wide support for research activities in any college, to promote collaboration between colleges, to help establish new research projects, and/or to serve as a testbed for new studies. MAP accounts operate under a strict fair-share policy, so the actual waiting time for a job in a queue depends on the resources used by that account in previous cycles. In addition, all jobs have strict time limitations; therefore, long jobs must use checkpoints.&lt;br /&gt;
&lt;br /&gt;
The MAP has 3 tiers:&lt;br /&gt;
&lt;br /&gt;
·     A: The basic tier fee is $5,000 per year. It is designed to provide support for users from colleges with a low level of research activity. The fee covers infrastructure expenses associated with 1-2 users from these colleges. &lt;br /&gt;
&lt;br /&gt;
·     B: The medium tier fee is $15,000 per year. The fee covers infrastructure expenses for up to 12 users from these colleges. In addition, every account under the medium tier gets 11,520 free CPU hours and 1,440 free GPU hours upon opening. &lt;br /&gt;
&lt;br /&gt;
·      C: The advanced tier fee is $25,000 per year. The fee covers infrastructure expenses for all users from these colleges. In addition, every new account from this tier gets 11,520 free CPU hours and 1,440 free GPU hours upon opening.  &lt;br /&gt;
&lt;br /&gt;
MAP users are charged per CPU/GPU hour at the low rate of &#039;&#039;&#039;&amp;lt;u&amp;gt;$0.015 per CPU hour and $0.09 per GPU hour&amp;lt;/u&amp;gt;.&#039;&#039;&#039;   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for MAP users example&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.015/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.24/hour&lt;br /&gt;
|-&lt;br /&gt;
| 4 cores + 1 GPU &lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.15/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.33/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2  &lt;br /&gt;
|$0.42/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2  &lt;br /&gt;
|$0.66/hour&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$1.32/hour&lt;br /&gt;
|}&lt;br /&gt;
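&lt;br /&gt;
As a quick cross-check of the table above, the per-hour cost is simply (CPU cores × CPU rate) + (GPUs × GPU rate). The short sketch below is an illustration only (invoices always follow the official rate tables on this page); it reproduces the MAP rows above, and the CODP rates given in the Compute on Demand section can be substituted in the same way for a rough estimate.&lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;
# Minimal cost-per-hour sketch (illustration only; invoices follow the official tables).
# MAP rates: $0.015 per CPU hour and $0.09 per GPU hour.
def cost_per_hour(cpu_cores, gpus, cpu_rate=0.015, gpu_rate=0.09):
    """Hourly cost of a job holding cpu_cores CPU cores and gpus GPUs."""
    return cpu_cores * cpu_rate + gpus * gpu_rate

# Reproduces the MAP examples above:
print(round(cost_per_hour(1, 0), 3))    # 0.015 -> $0.015/hour
print(round(cost_per_hour(16, 0), 3))   # 0.24  -> $0.24/hour
print(round(cost_per_hour(4, 1), 3))    # 0.15  -> $0.15/hour (one GPU unit)
print(round(cost_per_hour(40, 8), 3))   # 1.32  -> $1.32/hour

# Rough Compute on Demand (CODP) estimate using the CODP rates:
print(round(cost_per_hour(16, 0, cpu_rate=0.018, gpu_rate=0.11), 3))  # 0.288 -> $0.288/hour
&lt;/pre&gt;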
&lt;br /&gt;
&lt;br /&gt;
b.     &#039;&#039;&#039;Computing on demand (CODP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The Computing on Demand plan (CODP) is open to all users from CUNY colleges that do not participate in the MAP plan but want to use the HPCC resources. CODP accounts operate under a strict fair-share policy, so the actual waiting time for a job in a queue depends on the resources previously used. In addition, all jobs have time limitations, so long jobs must use checkpoints. Users in CODP are charged for the time (CPU and GPU) per hour. The current rates are &#039;&#039;&#039;$0.018 per CPU hour and $0.11 per GPU hour.&#039;&#039;&#039;  Unlike MAP, new CODP accounts do not come with free time. Invoices are generated and sent to users (PI only) at the end of each month.  The examples in the following table explain the fee structure:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for CODP plan&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.018/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.288/hour&lt;br /&gt;
|-&lt;br /&gt;
|4 cores + 1 GPU&lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.293/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.334/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 1 GPU&lt;br /&gt;
|32&lt;br /&gt;
|1  &lt;br /&gt;
|$0.666/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU &lt;br /&gt;
|32&lt;br /&gt;
|2&lt;br /&gt;
|$0.756/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
c.  &#039;&#039;&#039;Leasing public node(s) (LNP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The leasing node plan allows users to lease node(s) for the duration of a project. The minimum lease time is 30 days (one month), but leases of any length are possible. A discount of 10% is given to users whose lease is longer than 90 days. Discounts cannot be combined.  Unlike MAP and CODP, LNP users do not compete for resources and have full access to the leased resources 24/7.   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for leasing node(s) for MAP users&lt;br /&gt;
|Job (MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/30 days&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$172.80&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$264.96&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2&lt;br /&gt;
|$302.40&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$475.20&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$760.0&lt;br /&gt;
|-&lt;br /&gt;
|64 cores + 8 GPU&lt;br /&gt;
|64&lt;br /&gt;
|8&lt;br /&gt;
|$950.40&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for leasing node(s) for &amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt;non&amp;lt;/span&amp;gt;-MAP users&lt;br /&gt;
|Job (non-MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/month&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$249.82&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$497.64&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1&lt;br /&gt;
|$443.23&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$886.64&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40 &lt;br /&gt;
|8&lt;br /&gt;
|$1399.68&lt;br /&gt;
|}&lt;br /&gt;
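As a worked example of the long-lease discount, a non-MAP user keeping a 32-core node without GPUs for 120 days (four 30-day periods) would pay approximately 4 × $497.64 × 0.9 ≈ $1,791.50, instead of 4 × $497.64 = $1,990.56 at the standard monthly rate.&lt;br /&gt;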
&lt;br /&gt;
&lt;br /&gt;
d.     &#039;&#039;&#039;Condo Ownership (COP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Condo describes a model in which user(s) own a node/server managed by HPCC. Only full-time faculty can own a condo node. Condo nodes are fully integrated into the HPCC infrastructure. The owners pay only HPCC’s infrastructure support operational fee, which includes only a proportional part of the licenses and materials needed for day-to-day operations. The fees are reviewed twice a year and are currently $0.003 per CPU hour and $0.02 per GPU hour. Condo owners can “borrow” (upon agreement), free of charge, any node(s) from the condo stack and can also lease (for a higher fee – see below) their own nodes to non-condo users. The minimum lease time is 30 days. The fees collected from non-condo users offset the payments of the owner.   &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Condo owners costs per year&lt;br /&gt;
|Type of condo node&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/year&lt;br /&gt;
|-&lt;br /&gt;
|Large hybrid SXM&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|$4518.92 &lt;br /&gt;
|-&lt;br /&gt;
|Small hybrid&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|$1540.54&lt;br /&gt;
|-&lt;br /&gt;
|Medium compute&lt;br /&gt;
|96&lt;br /&gt;
|0&lt;br /&gt;
|$2464.86&lt;br /&gt;
|-&lt;br /&gt;
|Large compute&lt;br /&gt;
|128&lt;br /&gt;
|0&lt;br /&gt;
|$3286.49&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Condo owners can lease their node(s) to other non-condo users. The renting period is unlimited, with a minimum length of 30 days. The table below shows the payments that non-condo users make to the condo owners. These fees are accumulated in the owners’ account(s) and offset the owners’ fees. A discount of 10% is applied for leases longer than 90 days.    &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Type of nodes and monthly lease fees for condo nodes&lt;br /&gt;
!Type of node&lt;br /&gt;
!Renters cost/month&lt;br /&gt;
!Long term (90+ days) rent cost/month&lt;br /&gt;
!CPU/node&lt;br /&gt;
!CPU type&lt;br /&gt;
!GPU/node &lt;br /&gt;
!GPU type&lt;br /&gt;
!GPU interface&lt;br /&gt;
|-&lt;br /&gt;
|Large Hybrid&lt;br /&gt;
|$602.52&lt;br /&gt;
|$564.86&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.2 GHz&lt;br /&gt;
|8&lt;br /&gt;
|A100/80&lt;br /&gt;
|SXM&lt;br /&gt;
|-&lt;br /&gt;
|Small Hybrid&lt;br /&gt;
|$205.41&lt;br /&gt;
|$192.57&lt;br /&gt;
|48&lt;br /&gt;
|EPYC, 2.8 GHz&lt;br /&gt;
|2&lt;br /&gt;
|A40, A30, L40&lt;br /&gt;
|PCIe v4&lt;br /&gt;
|-&lt;br /&gt;
|Medium Non GPU&lt;br /&gt;
|$328.65&lt;br /&gt;
|$308.11&lt;br /&gt;
|96&lt;br /&gt;
|EPYC, 4.11GHz&lt;br /&gt;
|0&lt;br /&gt;
|None&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|Large Non GPU&lt;br /&gt;
|$438.20&lt;br /&gt;
|$410.81&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.0 GHz&lt;br /&gt;
|0&lt;br /&gt;
|None &lt;br /&gt;
|NA&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Free time ===&lt;br /&gt;
Any new project &#039;&#039;&#039;from CUNY colleges/centers that participate in MAP-B or MAP-C&#039;&#039;&#039; is entitled to &#039;&#039;&#039;11,520 free CPU hours and 1,440 free GPU hours.&#039;&#039;&#039;  Users under &#039;&#039;&#039;MAP-A&#039;&#039;&#039; are not entitled to free time. The free compute hours are intended to help establish a project and thus are shared among all members of the project: they can be used either by the PI or by any number of the project&#039;s members. It is important to note that &amp;lt;u&amp;gt;free time is per project, not per user account, so any project can have free time only once. External collaborators of CUNY are not normally eligible for free time.&amp;lt;/u&amp;gt; Additional hours beyond the free time are charged at MAP plan rates. &#039;&#039;&#039;&amp;lt;u&amp;gt;Please contact the CUNY-HPCC director for further details.&amp;lt;/u&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
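At the MAP rates above, this start-up allocation is worth 11,520 × $0.015 + 1,440 × $0.09 = $172.80 + $129.60 = $302.40 of compute time.&lt;br /&gt;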
&lt;br /&gt;
== Support for research grants ==&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;u&amp;gt;All proposals dated January 1st, 2026 (&amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; 01/01/26 &amp;lt;/span&amp;gt;) or later&amp;lt;/u&amp;gt;&#039;&#039;&#039; that require computational resources &#039;&#039;&#039;&amp;lt;u&amp;gt;must include a budget for cost recovery fees at CUNY-HPCC.&amp;lt;/u&amp;gt;&#039;&#039;&#039;  For a project, the PI can choose one of the following options: &lt;br /&gt;
&lt;br /&gt;
* Leasing node(s). This is a useful option for well-defined projects and those with a high computational component requiring 100% availability of the computational resource. &lt;br /&gt;
* Using &amp;quot;on-demand&amp;quot; resources. This is a flexible option, good for experimental projects or for exploring new areas of study. The drawback is that resources are shared among all users under the fair-share policy, so immediate access to a resource cannot be guaranteed. &lt;br /&gt;
* Participating in the CONDO tier. This is the most beneficial option in terms of availability of resources and level of support. It best fits the focused research of a group or groups (e.g. materials science). &lt;br /&gt;
&lt;br /&gt;
In all cases the PI can use the appropriate rates listed above to establish a correct budget for the proposal.  The PI should  &#039;&#039;&#039;&amp;lt;u&amp;gt;contact the Director of CUNY-HPCC, Dr. Alexander Tzanov&amp;lt;/u&amp;gt;&#039;&#039;&#039;  (alexander.tzanov@csi.cuny.edu) to discuss the project&#039;s computational requirements, including optimal and most economical computational workflows, suitable hardware, shared or owned resources, CUNY-HPCC support options, and any other matter concerning a correct and optimal computational budget for the proposal.    &lt;br /&gt;
&lt;br /&gt;
== Partitions and jobs ==&lt;br /&gt;
The only way to submit job(s) to the HPCC servers is through the SLURM batch system.  Any job, regardless of its type (interactive, batch, serial, parallel, etc.), must be submitted via SLURM. The latter allocates the requested resources on the proper server and starts the job(s) according to a predefined, strict fair-share policy. Computational resources (CPU cores, memory, GPUs) are organized into &#039;&#039;&#039;partitions&#039;&#039;&#039;. Users are granted permission to use one or another partition together with the corresponding QOS key. The table below shows the limitations of the partitions (in progress); a minimal submission sketch is given after the partition descriptions below.&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Partition&lt;br /&gt;
!Max cores/job&lt;br /&gt;
!Max jobs/user&lt;br /&gt;
!Total cores/group&lt;br /&gt;
!Time limits&lt;br /&gt;
!Tier&lt;br /&gt;
!&lt;br /&gt;
!GPU types&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|partnsf&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|K20m, V100/16, A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partchem&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A100/80, A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partcfd&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partsym&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partasrc&lt;br /&gt;
|48&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabD&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|V100/16,A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabN&lt;br /&gt;
|384&lt;br /&gt;
|50&lt;br /&gt;
|384&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|None&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partphys&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|L40&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;partnsf&#039;&#039;&#039; is the main partition, with resources assigned across all sub-servers. Users may submit sequential, thread-parallel or distributed-parallel jobs with or without GPUs.&lt;br /&gt;
* &#039;&#039;&#039;partchem&#039;&#039;&#039;, &#039;&#039;&#039;partphys&#039;&#039;&#039;, &#039;&#039;&#039;partsym&#039;&#039;&#039; and &#039;&#039;&#039;partasrc&#039;&#039;&#039; are CONDO partitions.&lt;br /&gt;
* &#039;&#039;&#039;partmatlabD&#039;&#039;&#039; allows running MATLAB&#039;s Distributed Parallel Server across the main cluster. &lt;br /&gt;
* &#039;&#039;&#039;partmatlabN&#039;&#039;&#039; provides access to the large MATLAB node with 384 cores and 11 TB of shared memory. It is useful for running parallel MATLAB jobs with the Parallel Toolbox.&lt;br /&gt;
* &#039;&#039;&#039;partdev&#039;&#039;&#039; is dedicated to development. All HPCC users have access to this partition, whose assigned resources are one computational node with 16 cores, 64 GB of memory and 2 GPUs (K20m). This partition has a time limit of 4 hours.&lt;br /&gt;
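&lt;br /&gt;
To make the submission path concrete, below is a minimal sketch (an illustration only; the QOS key, job name and program are placeholders, and the partition name is taken from the table above) that writes a small SLURM batch script and hands it to the scheduler with &#039;&#039;&#039;sbatch&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;
# Minimal SLURM submission sketch (illustration only). The partition name comes from
# the table above; the QOS key, job name and executable are placeholders.
import subprocess, textwrap

batch_script = textwrap.dedent("""\
    #!/bin/bash
    # Main public partition (see the table above), 4 CPU cores, one optional GPU,
    # and a wall time within the partition limit. The QOS key is a placeholder.
    #SBATCH --job-name=example_job
    #SBATCH --partition=partnsf
    #SBATCH --qos=my_qos_key
    #SBATCH --ntasks=4
    #SBATCH --gres=gpu:1
    #SBATCH --time=04:00:00
    srun ./my_program
""")

with open("job.slurm", "w") as f:
    f.write(batch_script)

# Submit through SLURM; sbatch prints the assigned job ID on success.
subprocess.run(["sbatch", "job.slurm"], check=True)
&lt;/pre&gt;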
&lt;br /&gt;
== Hours of Operation ==&lt;br /&gt;
In order to maximize the use of resources, HPCC applies a “rolling” maintenance scheme across all systems. When downtime is needed, HPCC will notify all users a week or more in advance (unless an emergency situation occurs).  Typically, the morning of the fourth Tuesday of each month, from 8:00 AM to 12 PM, is reserved (but not always used) for scheduled maintenance.  Please plan accordingly.  Unplanned maintenance to remedy system-related problems may be scheduled as needed outside the above-mentioned window. Reasonable attempts will be made to inform users running on those systems when such needs arise. Note that users are strongly encouraged to use checkpoints in their jobs.&lt;br /&gt;
&lt;br /&gt;
== User Support ==&lt;br /&gt;
Users are strongly encouraged to read this Wiki carefully before submitting ticket(s) for help. In particular, the sections on compiling and running parallel programs, and the section on the SLURM batch queueing system will give you the essential knowledge needed to use the CUNY HPCC systems.  We have strived to maintain the most uniform user applications environment possible across the Center&#039;s systems to ease the transfer of applications&lt;br /&gt;
and run scripts among them.  &lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY&lt;br /&gt;
community in parallel programming techniques, HPC computing architecture, and the essentials of using our&lt;br /&gt;
systems. Please follow our mailings on the subject and feel free to inquire about such courses.  We regularly schedule training visits and classes at the various CUNY campuses.  Please let us know if such a training visit is of interest.  In the past, topics have included an overview of parallel programming, GPU programming and architecture, using the evolutionary biology software at the HPC Center, the SLURM queueing system at the CUNY HPC Center, mixed GPU-MPI and OpenMP programming, etc.  Staff have also presented guest lectures in formal classes throughout the CUNY campuses.    &lt;br /&gt;
&lt;br /&gt;
If you have problems accessing your account and cannot login to the ticketing service, please send an email to:&lt;br /&gt;
&lt;br /&gt;
  [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu] &lt;br /&gt;
&lt;br /&gt;
== Warnings and modes of operation ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. hpchelp@csi.cuny.edu is for questions and account-help communication &#039;&#039;&#039;only&#039;&#039;&#039; and does not accept tickets unless the ticketing system is not operational. For tickets, please use the ticketing system mentioned above. This ensures that the staff member with the most appropriate skill set and job-related responsibility will respond to your questions. During the business week you should expect a response within 48 hours, quite often even a same-day response. During the weekend you may not get any response. &lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;&#039;E-mails to hpchelp@csi.cuny.edu must have a valid CUNY e-mail as the reply address.&#039;&#039;&#039; Messages originating from public mailers (Google, Hotmail, etc.) are filtered out.&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;&#039;Do not send questions to individual CUNY HPC Center staff members directly.&#039;&#039;&#039;  These will be returned to the sender with a polite request to submit a ticket or email the Helpline.  This applies to replies to initial questions as well.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff members are focused on providing high-quality support to the user community, but compared&lt;br /&gt;
to other HPC Centers of similar size &#039;&#039;&#039;our staff is extremely lean&#039;&#039;&#039;.  Please make full use of the tools that we have provided (especially&lt;br /&gt;
the Wiki), and feel free to offer suggestions for improved service.  We hope and expect that your experience in using&lt;br /&gt;
our systems will be predictably good and productive.&lt;br /&gt;
&lt;br /&gt;
== User Manual ==&lt;br /&gt;
&lt;br /&gt;
The old version of the user manual provides PBS rather than SLURM batch scripts as examples. CUNY-HPCC currently uses the SLURM scheduler, so users must rely only on the updated brief SLURM manual distributed with new accounts, or ask CUNY-HPCC for a copy.&lt;/div&gt;</summary>
		<author><name>Alex</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=978</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=978"/>
		<updated>2026-03-14T15:04:10Z</updated>

		<summary type="html">&lt;p&gt;Alex: /* Lease a public node(s) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
[[Image:hpcc-panorama3.png]]&lt;br /&gt;
&lt;br /&gt;
== Mission of CUNY-HPCC ==&lt;br /&gt;
The City University of New York (CUNY) High Performance Computing Center (HPCC) is a core research and education facility for the University. It is located on the campus of the College of Staten Island, 2800 Victory Boulevard, Staten Island, New York 10314. The core mission of CUNY-HPCC is to advance education and boost scientific research and discovery at the University by managing state-of-the-art computing infrastructure and by providing comprehensive research support services, including domain-specific expertise in various aspects of computationally intensive research. CUNY is a member of the Empire AI (EAI) consortium, so CUNY-HPCC is a stepping stone for CUNY researchers toward EAI advanced facilities.  &lt;br /&gt;
&lt;br /&gt;
== Empire AI and CUNY-HPCC ==&lt;br /&gt;
The Empire AI consortium includes the &#039;&#039;&#039;CUNY Graduate Center&#039;&#039;&#039;, Columbia University, Cornell University, Icahn School of Medicine, New York University, Rochester Institute of Technology, Rensselaer Polytechnic Institute, the State University of New York, University at Buffalo, and University of Rochester.  &#039;&#039;&#039;CUNY-HPCC provides support and maintains tickets for all CUNY users with an allocation on EAI.  In addition, CUNY-HPCC is a stepping stone for CUNY researchers since it operates (on a smaller scale) architectures similar to EAI (including nodes with Hopper GPUs), such as the extended &amp;quot;Alpha&amp;quot; servers as well as the new &amp;quot;Beta&amp;quot; computer.&#039;&#039;&#039; The latter will consist of 288 B200 GPUs and recently added RTX 6000 Pro nodes. The expected cost for &#039;&#039;&#039;EAI is $0.50/unit (SU),&#039;&#039;&#039; which will provide CUNY PIs with a rate that is well below a typical AWS rate.  One SU corresponds to one hour of H100 compute, and one hour of B200 compute corresponds to two SU.  In comparison, the CUNY-HPCC recovery costs for public servers are &#039;&#039;&#039;$0.015 per CPU hour (1 unit) and $0.09 per GPU hour (6 units).&#039;&#039;&#039;  See the section on HPCC access plans for further details.    &lt;br /&gt;
&lt;br /&gt;
== CUNY-HPCC services ==&lt;br /&gt;
CUNY-HPCC provides a professionally maintained, modern computational environment and architectures, advanced storage, and fast interconnects. CUNY-HPCC:     &lt;br /&gt;
&lt;br /&gt;
:*Supports research computing at CUNY for faculty, their collaborators at other universities, their public and private sector partners, and CUNY students and research staff.&lt;br /&gt;
:*Provides state-of-the-art computing resources and comprehensive research support services including expertise and full support for users with allocation on EMPIRE-AI. &lt;br /&gt;
:*Creates opportunities for the CUNY research community to develop new partnerships with the government and private sectors.&lt;br /&gt;
:*Leverages HPCC capabilities to acquire additional research resources for its faculty and graduate students in existing and major new programs.&lt;br /&gt;
:*Maintains tickets for all CUNY users with allocation on EAI.&lt;br /&gt;
&lt;br /&gt;
==Organization of systems and data storage (architecture)==&lt;br /&gt;
&lt;br /&gt;
All user data and project data are kept on the Parallel File System Storage (PFSS), which is mounted only on the login node(s) of all servers. It holds both user directories and a specific partition called &#039;&#039;&#039;/scratch&#039;&#039;&#039; (see below). The main features of the &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition are: &#039;&#039;&#039;1.&#039;&#039;&#039; it is mounted on all computational nodes and on all login nodes; &#039;&#039;&#039;2.&#039;&#039;&#039; it is fast; &#039;&#039;&#039;3.&#039;&#039;&#039; it is temporary space; it is &#039;&#039;&#039;not a home directory&#039;&#039;&#039; for accounts, nor can it be used for long-term data preservation.  Users must use the &amp;quot;staging&amp;quot; procedure described below to ensure preservation of their data, codes and parameter files. The figure below is a schematic of the environment.   &lt;br /&gt;
&lt;br /&gt;
Upon registering with HPCC, every user will get two directories:&lt;br /&gt;
&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – this is temporary workspace on the HPC systems. Currently, /scratch resides on the same file system as /global/u.&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – space for the “home directory”, i.e., storage space on the DSMS for programs, scripts, and data&lt;br /&gt;
&lt;br /&gt;
:•	In some instances a user will also have use of disk space on the DSMS in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;projectid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; (iRODS). &lt;br /&gt;
[[File:HPCC_structure.png|center|frameless|900x900px]]&lt;br /&gt;
The &#039;&#039;&#039;/global/u/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory has a quota (see below for details), while the &#039;&#039;&#039;/scratch/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory does not. However, the &#039;&#039;&#039;/scratch&#039;&#039;&#039; space is cleaned up following the rules described below. There are no guarantees of any kind that files in &#039;&#039;&#039;/scratch&#039;&#039;&#039; will be preserved during hardware crashes or cleanup.  Access to all HPCC resources is provided through a bastion host called &#039;&#039;&#039;chizen&#039;&#039;&#039;.  The Data Transfer Node, called &#039;&#039;&#039;Cea&#039;&#039;&#039;, allows file transfer from/to remote sites directly to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; or &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
==HPC systems==&lt;br /&gt;
&lt;br /&gt;
The HPC Center operates a variety of architectures in order to support complex and demanding workflows.  All computational resources of different types are united into a single hybrid cluster called Arrow. The latter deploys symmetric multiprocessor (SMP) nodes with and without GPUs, a distributed shared memory (NUMA) node, fat (large memory) nodes, and advanced SMP nodes with multiple GPUs. The number of GPUs per node varies between 2 and 8, as do the GPU interface and GPU family. Thus the basic GPU nodes hold two Tesla K20m cards (attached through a PCIe interface), while the most advanced ones support eight Ampere A100 GPUs connected via the SXM interface.   &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Overview of Computational architectures&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SMP&#039;&#039;&#039; servers have several processors (working under a single operating system) which &amp;quot;share everything&amp;quot;: all CPU cores access a common memory block via a shared bus or data path. SMP servers support all combinations of memory versus CPU (up to the limits of the particular computer). SMP servers are commonly used to run sequential or thread-parallel (e.g. OpenMP) jobs, and they may or may not have GPUs.  &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;cluster&#039;&#039;&#039; is defined as a single system comprising a set of servers interconnected with a high-performance network. Specific software coordinates programs on and/or across these servers in order to perform computationally intensive tasks. The most common cluster type consists of several identical SMP servers connected via a fast interconnect.  Each SMP member of the cluster is called a &#039;&#039;&#039;node&#039;&#039;&#039;. All nodes run independent copies of the same operating system (OS). Some or all of the nodes may incorporate GPUs.  &lt;br /&gt;
&lt;br /&gt;
Hybrid clusters combine nodes of different architectures. For instance, the main CUNY-HPCC machine is a hybrid cluster called &#039;&#039;&#039;Arrow&#039;&#039;&#039;.  Sixty-two (62) of its nodes are identical GPU-enabled SMP servers, each with 2 K20m GPUs; 3 are SMP nodes with extended memory (fat nodes); one node is a distributed shared memory node (NUMA, see below); and 2 are fat SMP servers specially designed to support 8 NVIDIA GPUs per node, connected via the SXM interface. In addition, HPCC operates the cluster &#039;&#039;&#039;Herbert&#039;&#039;&#039;, dedicated only to education. &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;distributed shared memory&#039;&#039;&#039; computer is a tightly coupled server in which the memory is physically distributed but logically unified as a single block. The system resembles an SMP, but the possible number of CPU cores and amount of memory are far beyond the limitations of an SMP.  Because the memory is distributed, access times across the address space are non-uniform; thus, this architecture is called Non-Uniform Memory Access (NUMA) architecture.  Similarly to SMP, &#039;&#039;&#039;NUMA&#039;&#039;&#039; systems are typically used for applications such as data mining and decision support systems, in which processing can be parceled out to a number of processors that collectively work on common data. HPCC operates the &#039;&#039;&#039;NUMA&#039;&#039;&#039; node of Arrow, named &#039;&#039;&#039;Appel&#039;&#039;&#039;.  This node does not have GPUs. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039; Infrastructure systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
o	The Master Head Node (&#039;&#039;&#039;MHN/Arrow&#039;&#039;&#039;) is a redundant login node from which all jobs on all servers start. This server is not directly accessible from outside the CSI campus. Note that the name of the main server and of its login nodes is the same, Arrow; thus users can access the Arrow login nodes using the name Arrow or MHN.  &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Chizen&#039;&#039;&#039; is a redundant gateway server which provides access to the protected HPCC domain.&lt;br /&gt;
&lt;br /&gt;
o	 &#039;&#039;&#039;Cea&#039;&#039;&#039; is a file transfer node allowing transfer of files between users’ computers and the /scratch space or /global/u/&amp;lt;userid&amp;gt;. &#039;&#039;&#039;Cea&#039;&#039;&#039; is accessible directly (not only via &#039;&#039;&#039;Chizen&#039;&#039;&#039;), but allows only a limited set of shell commands.  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Table 1&#039;&#039;&#039; below provides a quick summary of the attributes of each of the sub-clusters of the main HPC Center machine, called Arrow.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Master Head Node&lt;br /&gt;
!Sub System&lt;br /&gt;
!Tier&lt;br /&gt;
!Type&lt;br /&gt;
!Type of Jobs&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores&lt;br /&gt;
!GPUs&lt;br /&gt;
!Mem/node&lt;br /&gt;
!Mem/core&lt;br /&gt;
!Chip Type&lt;br /&gt;
!GPU Type and Interface&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;17&amp;quot; |&#039;&#039;&#039;&amp;lt;big&amp;gt;Arrow&amp;lt;/big&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Penzias&lt;br /&gt;
| rowspan=&amp;quot;10&amp;quot; |Advanced&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Hybrid Cluster&lt;br /&gt;
|Sequential &amp;amp; Parallel jobs w/wo GPU&lt;br /&gt;
|66&lt;br /&gt;
|16 &lt;br /&gt;
|2&lt;br /&gt;
|64 GB&lt;br /&gt;
|4 GB&lt;br /&gt;
|SB, EP 2.20 GHz&lt;br /&gt;
|K20m GPU, PCIe v2&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Sequential &amp;amp; Parallel jobs&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |1&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|1500 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|36&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|32 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Appel&lt;br /&gt;
|NUMA&lt;br /&gt;
|Massive Parallel, sequential, OpenMP&lt;br /&gt;
|1&lt;br /&gt;
|384&lt;br /&gt;
| -&lt;br /&gt;
|11 TB&lt;br /&gt;
|28 GB&lt;br /&gt;
|IB, 3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Cryo&lt;br /&gt;
|SMP&lt;br /&gt;
|Sequential and Parallel jobs, with GPU&lt;br /&gt;
|1&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|1500 GB&lt;br /&gt;
|37 GB&lt;br /&gt;
|SL, 2.40 GHz&lt;br /&gt;
|V100 (32GB) GPU, SXM&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Blue Moon&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Hybrid Cluster&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Sequential and Parallel jobs w/wo GPU&lt;br /&gt;
|24&lt;br /&gt;
|32&lt;br /&gt;
| -&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |192 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |6 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SL, 2.10 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|32 &lt;br /&gt;
|2&lt;br /&gt;
|V100(16GB) GPU, PCIe&lt;br /&gt;
|-&lt;br /&gt;
|Karle&lt;br /&gt;
|SMP&lt;br /&gt;
|Visualization, MATLAB/Mathematica&lt;br /&gt;
|1&lt;br /&gt;
|36*&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Chizen&lt;br /&gt;
|Gateway&lt;br /&gt;
|No jobs allowed&lt;br /&gt;
| colspan=&amp;quot;7&amp;quot; | -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CFD&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot; |Parallel, Seq, OpenMP &lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|768 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 4.8 GHz&lt;br /&gt;
|A40, PCIe, v4 &lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |PHYS&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|640 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4 GHz&lt;br /&gt;
|L40, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CHEM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.0 GHz&lt;br /&gt;
|A100/40, SXM&lt;br /&gt;
|-&lt;br /&gt;
|ASRC&lt;br /&gt;
|Condo&lt;br /&gt;
|SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|}&lt;br /&gt;
Note: SB = Intel(R) Sandy Bridge, HL = Intel(R) Haswell, IB = Intel(R) Ivy Bridge, SL = Intel(R) Xeon(R) Gold, ER = AMD(R) EPYC Rome, EM = AMD(R) EPYC Milan, EG = AMD(R) EPYC Genoa   &lt;br /&gt;
&lt;br /&gt;
== Recovery of  operational costs ==&lt;br /&gt;
&lt;br /&gt;
=== Compute on public resources ===&lt;br /&gt;
CUNY-HPCC is a not-for-profit core research facility at CUNY. Our mission is to support all types of research that require advanced computational resources. CUNY-HPCC operations are not for profit and are NOT directly or indirectly sponsored by CUNY or the College of Staten Island (CSI). Consequently, CUNY-HPCC applies a cost recovery model recapturing only &#039;&#039;&#039;&amp;lt;u&amp;gt;operational costs, with no profit for HPCC&amp;lt;/u&amp;gt;&#039;&#039;&#039;. The recovered costs are calculated using actual documented operational expenses and are break-even for all CUNY users. The methodology follows the CUNY-RF approved methodology used in other CUNY research facilities. The costs are reviewed and, if necessary, updated twice a year. The cost recovery charging schema is based on the &#039;&#039;&#039;&amp;lt;u&amp;gt;unit-hour&amp;lt;/u&amp;gt;&#039;&#039;&#039;. The unit can be either a CPU unit or a GPU unit. The definitions of these are given in the table below:&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Definitions of unit-hour&lt;br /&gt;
!Type of resource&lt;br /&gt;
!Unit-hour &lt;br /&gt;
!For V100, A30, A40 or L40&lt;br /&gt;
!For A100&lt;br /&gt;
|-&lt;br /&gt;
|CPU unit &lt;br /&gt;
|1 cpu core/hour&lt;br /&gt;
| --&lt;br /&gt;
| --&lt;br /&gt;
|-&lt;br /&gt;
|GPU unit &lt;br /&gt;
|(4 cpu cores + 1 GPU thread )/hour&lt;br /&gt;
|4 cpu cores + 1 GPU&lt;br /&gt;
|4 cpu cores and 1/7 A100 &lt;br /&gt;
|}&lt;br /&gt;
The cost recovery model for public (non-condominium) servers offers the following options:&lt;br /&gt;
&lt;br /&gt;
# Minimal Access Plan (MAP)&lt;br /&gt;
# Compute on Demand (CODP)&lt;br /&gt;
# Lease a node(s)&lt;br /&gt;
&lt;br /&gt;
Colleges may participate in any of the Minimal Access Plan tiers. The tiered pricing structure is as follows:&lt;br /&gt;
&lt;br /&gt;
- Tier A: $5,000 per year&lt;br /&gt;
&lt;br /&gt;
- Tier B: $10,000 per year&lt;br /&gt;
&lt;br /&gt;
- Tier C: $25,000 per year&lt;br /&gt;
&lt;br /&gt;
Within each tier, the cost is &#039;&#039;&#039;$0.015 per CPU hour and $0.09 per GPU hour.&#039;&#039;&#039; It is important to note that the Minimum Access Plan (MAP) tiers A, B, and C are &#039;&#039;&#039;not all-inclusive.&#039;&#039;&#039; Therefore, even if a college pays for a higher tier, it does not guarantee unlimited use by all of the college&#039;s employees, faculty, and students throughout the year.&lt;br /&gt;
&lt;br /&gt;
# The MAP plan is indirectly linked to the number of hours used. Consequently, the definition of “up to 12 users for B tier” should not be interpreted as “all users up to 12 per college receive unlimited access to HPCC resources.”&lt;br /&gt;
# The number of users per tier is determined by statistical analysis of resource usage and statistics for the average duration of a job across all CUNY institutions. This means that if the number of users from a college exceeds the number of hours encoded in the MAP fee, the additional hours will be charged at the preferred rate of $0.015 per CPU hour and $0.09 per GPU thread hour.&lt;br /&gt;
# Furthermore, the &amp;lt;u&amp;gt;MAP fee may fully cover the expenses for individuals from a given college for a year,&amp;lt;/u&amp;gt; but it may also not. This depends on the actual usage and type of resources, as well as the number of additional individuals from the same college who appear during the year. It is crucial to understand that allocation on HPCC is not restricted; our focus is solely on the completion of tasks. &lt;br /&gt;
# The free 11,520 CPU hours and 1,440 GPU hours are allocated per PI account and project and are available only for MAP-B and MAP-C (see below). These free hours are intended to facilitate project establishment, so the PI can ask one or more group members to use the time to explore new research opportunities. For instance, a PI receives the free hours upon creating an account; if the PI then hires graduate students, they may share these free hours provided they work on the same project.&lt;br /&gt;
# It is important to note that each GPU requires at least four CPU cores to operate. Therefore, a request for one GPU equates to one GPU plus four CPU cores, i.e. $0.09 + 4 × $0.015 = $0.15 per unit-hour for that unit (units are explained above). Note that not all GPUs support virtualization, so a unit may include the whole GPU, depending on the GPU type.&lt;br /&gt;
&lt;br /&gt;
=== Compute on demand (CODP) ===&lt;br /&gt;
Users from colleges that do not participate in MAP (A,B,C) are charged &#039;&#039;&#039;$0.018 per CPU hour and $0.11 per GPU hour.&#039;&#039;&#039; There is no free time associated with CODP.  &lt;br /&gt;
&lt;br /&gt;
=== Lease a public node(s) ===&lt;br /&gt;
Users may lease node(s) for a project. This gives them 100% access with no time or job limitations on the leased resource. The minimum lease time is 30 days (one month). Longer leases (more than 90 days) receive a 10% discount. &#039;&#039;&#039;MAP users&#039;&#039;&#039; are charged between $172 and $950 per month (see below), depending on the type of node. &#039;&#039;&#039;Non-MAP&#039;&#039;&#039; users are charged between $249.58 and $1,399 per month, depending on the type of node. Please see below for details and examples.&lt;br /&gt;
&lt;br /&gt;
=== Condo resources ===&lt;br /&gt;
Condo users only pay for infrastructure support. The annual fee depends on the type of node and ranges from $1,540 to $4,520 per year. Please see the table below for details.&lt;br /&gt;
&lt;br /&gt;
=== Storage ===&lt;br /&gt;
Storage costs are $60 per TB per year, backup costs are $45 per TB per year, and archive costs are $35 per TB per year. The first 50 GB of scratch storage are free. Prices are calculated at the end of each month.&lt;br /&gt;
&lt;br /&gt;
# Additionally, note that &#039;&#039;&#039;file transfers from/to HPCC are free&#039;&#039;&#039;, but it is important to consider the CUNY network speed, which is significantly slower than modern standards. This affects the time required to download large data sets. For large data sets, HPCC uses and recommends secure parallel transfer via Globus.&lt;br /&gt;
# All services associated with data and storage provided by HPCC are free for CUNY users.&lt;br /&gt;
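&lt;br /&gt;
For orientation, the storage rates above can be combined in a short Python sketch; this is an illustrative sketch only, and the simple 1/12 monthly proration is an assumption, since invoices are calculated at the end of each month.&lt;br /&gt;
 # Illustrative sketch only (not an official HPCC calculator).&lt;br /&gt;
 def annual_storage_cost(storage_tb, backup_tb=0.0, archive_tb=0.0):&lt;br /&gt;
     # Rates quoted above: $60/TB/yr storage, $45/TB/yr backup, $35/TB/yr archive&lt;br /&gt;
     return storage_tb * 60 + backup_tb * 45 + archive_tb * 35&lt;br /&gt;
 # Example: 2 TB of stored data that is also backed up&lt;br /&gt;
 yearly = annual_storage_cost(2, backup_tb=2)   # 210.0 dollars per year&lt;br /&gt;
 monthly = yearly / 12                          # assumed 1/12 proration&lt;br /&gt;
 assert yearly == 210&lt;br /&gt;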
&lt;br /&gt;
=== HPCC access plans  ===&lt;br /&gt;
a.     &#039;&#039;&#039;Minimum access (MAP):&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Minimum access is designed to provide wide support for research activities in any college, to promote collaboration between colleges, to help establish a new research project, and/or to serve as a testbed for new studies. MAP accounts operate under a strict fair-share policy, so the actual waiting time for a job in a queue depends on the resources used by that account in previous cycles. In addition, all jobs have strict time limits, so long jobs must use checkpoints.&lt;br /&gt;
&lt;br /&gt;
The MAP has 3 tiers:&lt;br /&gt;
&lt;br /&gt;
·     A: The basic tier fee is $5,000 per year. It is designed to support users from colleges with a low level of research activity. The fee covers infrastructure expenses associated with 1-2 users from these colleges. &lt;br /&gt;
&lt;br /&gt;
·     B: The medium tier fee is $15,000 per year. The fee covers infrastructure expenses for up to 12 users from these colleges. In addition, every account under the medium tier gets 11,520 free CPU hours and 1,440 free GPU hours upon opening. &lt;br /&gt;
&lt;br /&gt;
·      C: The advanced tier fee is $25,000 per year. The fee covers infrastructure expenses for all users from these colleges. In addition, every new account in this tier gets 11,520 free CPU hours and 1,440 free GPU hours upon opening.  &lt;br /&gt;
&lt;br /&gt;
MAP users are charged per CPU/GPU hour at the low rate of &#039;&#039;&#039;&amp;lt;u&amp;gt;$0.015 per CPU hour and $0.09 per GPU hour&amp;lt;/u&amp;gt;.&#039;&#039;&#039; A worked sketch of this arithmetic appears after the example table below.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for MAP users example&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.015/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.24/hour&lt;br /&gt;
|-&lt;br /&gt;
| 4 cores + 1 GPU &lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.15/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.33/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2  &lt;br /&gt;
|$0.42/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2  &lt;br /&gt;
|$0.66/hour&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$1.32/hour&lt;br /&gt;
|}&lt;br /&gt;
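&lt;br /&gt;
As a quick cross-check of the table above, the following minimal Python sketch shows how the MAP per-unit rates combine into the listed hourly costs; it is illustrative only and not an official HPCC calculator.&lt;br /&gt;
 # Illustrative sketch of the MAP cost recovery arithmetic.&lt;br /&gt;
 def map_hourly_cost(cpu_cores, gpus):&lt;br /&gt;
     # MAP rates quoted above: $0.015 per CPU core-hour and $0.09 per GPU-hour&lt;br /&gt;
     return cpu_cores * 0.015 + gpus * 0.09&lt;br /&gt;
 # These values reproduce rows of the table above&lt;br /&gt;
 assert round(map_hourly_cost(1, 0), 3) == 0.015&lt;br /&gt;
 assert round(map_hourly_cost(4, 1), 3) == 0.15&lt;br /&gt;
 assert round(map_hourly_cost(16, 1), 3) == 0.33&lt;br /&gt;
 assert round(map_hourly_cost(40, 8), 3) == 1.32&lt;br /&gt;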
&lt;br /&gt;
&lt;br /&gt;
b.     &#039;&#039;&#039;Computing on demand (CODP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The computing on demand plan (CODP) is open to users from all CUNY colleges that do not participate in the MAP plan but want to use HPCC resources. CODP accounts operate under a strict fair-share policy, so the actual waiting time for a job in a queue depends on the resources previously used. In addition, all jobs have time limits, so long jobs must use checkpoints. CODP users are charged for the time (CPU and GPU) per hour. The current rates are &#039;&#039;&#039;$0.018 per CPU hour and $0.11 per GPU hour.&#039;&#039;&#039; Unlike MAP, new CODP accounts do not come with free time. Invoices are generated and sent to users (PI only) at the end of each month. The examples in the following table explain the fee structure:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for CODP plan&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.018/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.288/hour&lt;br /&gt;
|-&lt;br /&gt;
|4 cores + 1 GPU&lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.293/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.334/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 1 GPU&lt;br /&gt;
|32&lt;br /&gt;
|1  &lt;br /&gt;
|$0.666/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU &lt;br /&gt;
|32&lt;br /&gt;
|2&lt;br /&gt;
|$0.756/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
c.  &#039;&#039;&#039;Leasing public node(s) (LNP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The leasing node plan allows users to lease node(s) for the duration of a project. The minimum lease time is 30 days (one month), but leases of any length are possible. A 10% discount is given to users whose lease is longer than 90 days. Discounts cannot be combined. Unlike MAP and CODP, LNP users do not compete for resources and have full access to the leased resources 24/7. A short sketch of the lease cost arithmetic appears after the two tables below.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for lease node(s) for MAP users&lt;br /&gt;
|Job (MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/30 days&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$172.80&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$264.96&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2&lt;br /&gt;
|$302.40&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$475.20&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$760.0&lt;br /&gt;
|-&lt;br /&gt;
|64 cores + 8 GPU&lt;br /&gt;
|64&lt;br /&gt;
|8&lt;br /&gt;
|$950.40&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for lease a node(s) for &amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; non &amp;lt;/span&amp;gt;-MAP users&lt;br /&gt;
|Job (non-MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/month&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$249.82&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$497.64&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2 &lt;br /&gt;
|$443.23&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$886.64&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40 &lt;br /&gt;
|8&lt;br /&gt;
|$1399.68&lt;br /&gt;
|}&lt;br /&gt;
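&lt;br /&gt;
The 10% long-lease discount can be illustrated with a short Python sketch; the monthly fees come from the tables above, while the pro-rating by whole 30-day months is an assumption made for illustration only.&lt;br /&gt;
 # Illustrative sketch of the lease fee arithmetic (not an official calculator).&lt;br /&gt;
 def lease_cost(monthly_fee, months, long_term=False):&lt;br /&gt;
     # Minimum lease is one 30-day month; a 10% discount applies to leases&lt;br /&gt;
     # longer than 90 days (long_term=True). Discounts cannot be combined.&lt;br /&gt;
     rate = 0.90 if long_term else 1.00&lt;br /&gt;
     return monthly_fee * months * rate&lt;br /&gt;
 # Example: a 16-core node at the MAP fee of $172.80/month, kept for 4 months (120 days)&lt;br /&gt;
 assert round(lease_cost(172.80, 4, long_term=True), 2) == 622.08&lt;br /&gt;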
&lt;br /&gt;
&lt;br /&gt;
d.     &#039;&#039;&#039;Condo Ownership (COP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Condo describes a model in which user(s) own a node/server managed by HPCC. Only full-time faculty can own a condo node. Condo nodes are fully integrated into the HPCC infrastructure. The owners pay only HPCC’s infrastructure support operational fee, which covers only a proportional part of the licenses and materials needed for day-to-day operations. The fees are reviewed twice a year and are currently $0.003 per CPU hour and $0.02 per GPU hour. Condo owners can “borrow” (upon agreement), free of charge, any node(s) from the condo stack and can also lease (for a higher fee – see below) their own nodes to non-condo users. The minimum lease time is 30 days. The fees collected from non-condo users offset the owner’s payments.&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Condo owners costs per year&lt;br /&gt;
|Type of condo node&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/year&lt;br /&gt;
|-&lt;br /&gt;
|Large hybrid SXM&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|$4518.92 &lt;br /&gt;
|-&lt;br /&gt;
|Small hybrid&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|$1540.54&lt;br /&gt;
|-&lt;br /&gt;
|Medium compute&lt;br /&gt;
|96&lt;br /&gt;
|0&lt;br /&gt;
|$2464.86&lt;br /&gt;
|-&lt;br /&gt;
|Large compute&lt;br /&gt;
|128&lt;br /&gt;
|0&lt;br /&gt;
|$3286.49&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Condo owners can rent their node(s) to other non-condo users. The rental period is unlimited, with a minimum length of 30 days. The table below shows the payments that non-condo users make to the condo owners. These fees accumulate in the owner’s account(s) and offset the owner’s fees. A 10% discount is applied to leases longer than 90 days.&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Type of nodes and monthly lease fees for condo nodes&lt;br /&gt;
!Type of node&lt;br /&gt;
!Renters cost/month&lt;br /&gt;
!Long term (90+ days) rent cost/month&lt;br /&gt;
!CPU/node&lt;br /&gt;
!CPU type&lt;br /&gt;
!GPU/node &lt;br /&gt;
!GPU type&lt;br /&gt;
!GPU interface&lt;br /&gt;
|-&lt;br /&gt;
|Large Hybrid&lt;br /&gt;
|$602.52&lt;br /&gt;
|$564.86&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.2 GHz&lt;br /&gt;
|8&lt;br /&gt;
|A100/80&lt;br /&gt;
|SXM&lt;br /&gt;
|-&lt;br /&gt;
|Small Hybrid&lt;br /&gt;
|$205.41&lt;br /&gt;
|$192.57&lt;br /&gt;
|48&lt;br /&gt;
|EPYC, 2.8 GHz&lt;br /&gt;
|2&lt;br /&gt;
|A40, A30, L40&lt;br /&gt;
|PCIe v4&lt;br /&gt;
|-&lt;br /&gt;
|Medium Non GPU&lt;br /&gt;
|$328.65&lt;br /&gt;
|$308.11&lt;br /&gt;
|96&lt;br /&gt;
|EPYC, 4.11GHz&lt;br /&gt;
|0&lt;br /&gt;
|None&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|Large Non GPU&lt;br /&gt;
|$438.20&lt;br /&gt;
|$410.81&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.0 GHz&lt;br /&gt;
|0&lt;br /&gt;
|None &lt;br /&gt;
|NA&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Free time ===&lt;br /&gt;
Any new project &#039;&#039;&#039;from CUNY colleges/centers that participate in MAP-B or MAP-C&#039;&#039;&#039; is entitled to &#039;&#039;&#039;11,520 free CPU hours and 1,440 free GPU hours.&#039;&#039;&#039; Users under &#039;&#039;&#039;MAP-A&#039;&#039;&#039; are not entitled to free time. The free compute hours are intended to help establish a project and are therefore shared by all members of the project; they can be used either by the PI or by any number of the project&#039;s members. It is important to note that &amp;lt;u&amp;gt;free time is per project, not per user account, so any project can receive free time only once. Collaborators external to CUNY are not normally eligible for free time.&amp;lt;/u&amp;gt; Additional hours beyond the free time are charged at MAP plan rates. &#039;&#039;&#039;&amp;lt;u&amp;gt;Please contact the CUNY-HPCC director for further details.&amp;lt;/u&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Support for research grants ==&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;u&amp;gt;All proposals dated January 1st, 2026 (&amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; 01/01/26 &amp;lt;/span&amp;gt;) or later&amp;lt;/u&amp;gt;&#039;&#039;&#039; that require computational resources &#039;&#039;&#039;&amp;lt;u&amp;gt;must include a budget for cost recovery fees at CUNY-HPCC.&amp;lt;/u&amp;gt;&#039;&#039;&#039; For a project, the PI can choose between: &lt;br /&gt;
&lt;br /&gt;
* lease node(s). This is a useful option for well-defined projects and those with a large computational component requiring 100% availability of the computational resource. &lt;br /&gt;
* use &amp;quot;on-demand&amp;quot; resources. This is a flexible option, well suited to experimental projects or to exploring new areas of study. The downside is that resources are shared among all users under the fair-share policy, so immediate access to a resource cannot be guaranteed. &lt;br /&gt;
* participate in the CONDO tier. This is the most beneficial option in terms of resource availability and level of support. It best fits the focused research of a group or groups (e.g. materials science). &lt;br /&gt;
&lt;br /&gt;
In all cases the PI can use the appropriate rates listed above to establish a correct budget for the proposal. The PI should &#039;&#039;&#039;&amp;lt;u&amp;gt;contact the Director of CUNY-HPCC, Dr. Alexander Tzanov&amp;lt;/u&amp;gt;&#039;&#039;&#039; (alexander.tzanov@csi.cuny.edu), to discuss the project&#039;s computational requirements, including the optimal and most economical computational workflows, suitable hardware, shared or owned resources, CUNY-HPCC support options, and any other matter concerning a correct and optimal computational budget for the proposal.&lt;br /&gt;
&lt;br /&gt;
== Partitions and jobs ==&lt;br /&gt;
The only way to submit job(s) to HPCC servers is through the SLURM batch system. Any job, regardless of its type (interactive, batch, serial, parallel, etc.), must be submitted via SLURM, which allocates the requested resources on the proper server and starts the job(s) according to a predefined strict fair-share policy. Computational resources (CPU cores, memory, GPU) are organized into &#039;&#039;&#039;partitions&#039;&#039;&#039;. Users are granted permission to use one or another partition together with a corresponding QOS key. The table below shows the partitions and their limits (in progress); a minimal submission sketch appears after the partition notes that follow the table.&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Partition&lt;br /&gt;
!Max cores/job&lt;br /&gt;
!Max jobs/user&lt;br /&gt;
!Total cores/group&lt;br /&gt;
!Time limits&lt;br /&gt;
!Tier&lt;br /&gt;
!&lt;br /&gt;
!GPU types&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|partnsf&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|K20m, V100/16, A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partchem&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A100/80, A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partcfd&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partsym&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partasrc&lt;br /&gt;
|48&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabD&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|V100/16,A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabN&lt;br /&gt;
|384&lt;br /&gt;
|50&lt;br /&gt;
|384&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|None&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partphys&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|L40&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;partnsf&#039;&#039;&#039; is the main partition with assigned resources across all sub-servers. Users may submit sequential, thread parallel or distributed parallel jobs with or without GPU.&lt;br /&gt;
* &#039;&#039;&#039;partchem&#039;&#039;&#039;  is CONDO partition.  &lt;br /&gt;
* &#039;&#039;&#039;partphys&#039;&#039;&#039;   is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partsym&#039;&#039;&#039;    is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partasrc&#039;&#039;&#039;   is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partmatlabD&#039;&#039;&#039; partition allows running MATLAB&#039;s Distributed Parallel Server across the main cluster. &lt;br /&gt;
* &#039;&#039;&#039;partmatlabN&#039;&#039;&#039; partition provides access to the large MATLAB node with 384 cores and 11 TB of shared memory. It is useful for running parallel MATLAB jobs with the Parallel Computing Toolbox.&lt;br /&gt;
* &#039;&#039;&#039;partdev&#039;&#039;&#039; is dedicated to development. All HPCC users have access to this partition, with assigned resources of one computational node with 16 cores, 64 GB of memory and 2 GPUs (K20m). This partition has a time limit of 4 hours.&lt;br /&gt;
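&lt;br /&gt;
To make the partition and QOS mechanics concrete, the following minimal Python sketch submits a hypothetical job via SLURM; the partition name and time limit come from the table above, while the QOS key, resource counts and script name are placeholders that must be replaced with the values granted to a particular account.&lt;br /&gt;
 # Hypothetical illustration of submitting a job via SLURM (values are placeholders).&lt;br /&gt;
 import subprocess&lt;br /&gt;
 cmd = [&lt;br /&gt;
     &#039;sbatch&#039;,&lt;br /&gt;
     &#039;--partition=partnsf&#039;,   # main public partition (240-hour limit, see table)&lt;br /&gt;
     &#039;--qos=YOUR_QOS_KEY&#039;,    # QOS key granted with your account (placeholder)&lt;br /&gt;
     &#039;--ntasks=4&#039;,            # 4 CPU cores&lt;br /&gt;
     &#039;--gres=gpu:1&#039;,          # 1 GPU; recall each GPU is billed together with 4 CPU cores&lt;br /&gt;
     &#039;--time=24:00:00&#039;,&lt;br /&gt;
     &#039;my_job.sh&#039;,             # your batch script (placeholder)&lt;br /&gt;
 ]&lt;br /&gt;
 subprocess.run(cmd, check=True)&lt;br /&gt;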
&lt;br /&gt;
== Hours of Operation ==&lt;br /&gt;
In order to maximize the use of resources, HPCC applies a “rolling” maintenance scheme across all systems. When downtime is needed, HPCC will notify all users a week or more in advance (unless an emergency occurs). Typically, the fourth Tuesday morning of the month, from 8:00 AM to 12 PM, is reserved (but not always used) for scheduled maintenance. Please plan accordingly. Unplanned maintenance to remedy system-related problems may be scheduled as needed outside the above-mentioned days. Reasonable attempts will be made to inform users running on those systems when such needs arise. Note that users are strongly encouraged to use checkpoints in their jobs.&lt;br /&gt;
&lt;br /&gt;
== User Support ==&lt;br /&gt;
Users are strongly encouraged to read this Wiki carefully before submitting ticket(s) for help. In particular, the sections on compiling and running parallel programs, and the section on the SLURM batch queueing system will give you the essential knowledge needed to use the CUNY HPCC systems.  We have strived to maintain the most uniform user applications environment possible across the Center&#039;s systems to ease the transfer of applications&lt;br /&gt;
and run scripts among them.  &lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY&lt;br /&gt;
community in parallel programming techniques, HPC computing architecture, and the essentials of using our&lt;br /&gt;
systems. Please follow our mailings on the subject and feel free to inquire about such courses. We regularly schedule training visits and classes at the various CUNY campuses; please let us know if such a training visit is of interest. In the past, topics have included an overview of parallel programming, GPU programming and architecture, using the evolutionary biology software at the HPC Center, the SLURM queueing system at the CUNY HPC Center, mixed GPU-MPI and OpenMP programming, etc. Staff have also presented guest lectures in formal classes throughout the CUNY campuses.&lt;br /&gt;
&lt;br /&gt;
If you have problems accessing your account and cannot login to the ticketing service, please send an email to:&lt;br /&gt;
&lt;br /&gt;
  [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu] &lt;br /&gt;
&lt;br /&gt;
== Warnings and modes of operation ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. hpchelp@csi.cuny.edu is for questions and account-help communication &#039;&#039;&#039;only&#039;&#039;&#039; and does not accept tickets unless the ticketing system is not operational. For tickets, please use the ticketing system mentioned above. This ensures that the staff member with the most appropriate skill set and job-related responsibility will respond to your questions. During the business week you should expect a response within 48 hours, quite often even the same day. During the weekend you may not get any response. &lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;&#039;E-mails to hpchelp@csi.cuny.edu must have a valid CUNY e-mail as the reply address.&#039;&#039;&#039; Messages originating from public mailers (Gmail, Hotmail, etc.) are filtered out.&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;&#039;Do not send questions to individual CUNY HPC Center staff members directly.&#039;&#039;&#039;  These will be returned to the sender with a polite request to submit a ticket or email the Helpline.  This applies to replies to initial questions as well.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff members are focused on providing high quality support to its user community, but compared&lt;br /&gt;
to other HPC Centers of similar size &#039;&#039;&#039;our staff is extremely lean&#039;&#039;&#039;.  Please make full use of the tools that we have provided (especially&lt;br /&gt;
the Wiki), and feel free to offer suggestions for improved service.  We hope and expect your experience in using&lt;br /&gt;
our systems will be predictably good and productive.&lt;br /&gt;
&lt;br /&gt;
== User Manual ==&lt;br /&gt;
&lt;br /&gt;
The old version of the user manual provides PBS, not SLURM, batch scripts as examples. CUNY-HPCC currently uses the SLURM scheduler, so users must rely only on the updated brief SLURM manual distributed with new accounts, or ask CUNY-HPCC for a copy of it.&lt;/div&gt;</summary>
		<author><name>Alex</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=977</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=977"/>
		<updated>2026-03-14T14:58:15Z</updated>

		<summary type="html">&lt;p&gt;Alex: /* Compute public resources */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
[[Image:hpcc-panorama3.png]]&lt;br /&gt;
&lt;br /&gt;
== Mission of CUNY-HPCC ==&lt;br /&gt;
The City University of New York (CUNY) High Performance Computing Center (HPCC) is a core research and education facility for the University. It is located on the campus of the College of Staten Island, 2800 Victory Boulevard, Staten Island, New York 10314. The core mission of CUNY-HPCC is to advance education and boost scientific research and discovery at the University by managing state-of-the-art computing infrastructure and by providing comprehensive research support services, including domain-specific expertise in various aspects of computationally intensive research. CUNY is a member of the Empire AI (EAI) consortium, so CUNY-HPCC is a stepping stone for CUNY researchers toward the EAI advanced facilities.&lt;br /&gt;
&lt;br /&gt;
== Empire AI and CUNY-HPCC ==&lt;br /&gt;
The Empire AI consortium includes the &#039;&#039;&#039;CUNY Graduate Center&#039;&#039;&#039;, Columbia University, Cornell University, Icahn School of Medicine, New York University, Rochester Institute of Technology, Rensselaer Polytechnic Institute, the State University of New York, University at Buffalo, and University of Rochester. &#039;&#039;&#039;CUNY-HPCC provides support and maintains tickets for all CUNY users with an allocation on EAI. In addition, CUNY-HPCC is a stepping stone for CUNY researchers since it operates (on a smaller scale) architectures similar to EAI (including nodes with Hopper), such as the extended &amp;quot;Alpha&amp;quot; server as well as the new &amp;quot;Beta&amp;quot; computer.&#039;&#039;&#039; The latter will consist of 288 B200 GPUs and recently added RTX 6000 Pro nodes. The expected cost for &#039;&#039;&#039;EAI is $0.50/unit (SU),&#039;&#039;&#039; which will provide CUNY PIs with a rate well below a typical AWS rate. One SU corresponds to one hour of H100 compute, and one hour of B200 compute corresponds to two SU. In comparison, the CUNY-HPCC recovery costs for public servers are &#039;&#039;&#039;$0.015 per CPU hour (1 unit) and $0.09 per GPU hour (6 units).&#039;&#039;&#039; See the section on HPCC access plans for further details.&lt;br /&gt;
&lt;br /&gt;
== CUNY-HPCC services ==&lt;br /&gt;
CUNY-HPCC provides professionally maintained, modern computational environment and architectures, advanced storage and fast interconnects. CUNY-HPCC:     &lt;br /&gt;
&lt;br /&gt;
:*Supports research computing at CUNY - for faculty, their collaborators at other universities, and their public and private sector partners, and CUNY students and research staff.&lt;br /&gt;
:*Provides state-of-the-art computing resources and comprehensive research support services including expertise and full support for users with allocation on EMPIRE-AI. &lt;br /&gt;
:*Creates opportunities for the CUNY research community to develop new partnerships with the government and private sectors.&lt;br /&gt;
:*Leverage the HPCC capabilities to acquire additional research resources for its faculty and graduate students in existing and major new programs.&lt;br /&gt;
:*Maintains tickets for all CUNY users with allocation on EAI.&lt;br /&gt;
&lt;br /&gt;
==Organization of systems and data storage (architecture)==&lt;br /&gt;
&lt;br /&gt;
All user data and project data are kept on the Parallel File System Storage (PFSS), which is mounted on the login node(s) of all servers. It holds both user directories and a specific partition called &#039;&#039;&#039;/scratch (see below).&#039;&#039;&#039; The main features of the &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition are&#039;&#039;&#039;: 1.&#039;&#039;&#039; it is mounted on all computational nodes and on all login nodes, &#039;&#039;&#039;2.&#039;&#039;&#039; it is fast, and &#039;&#039;&#039;3.&#039;&#039;&#039; the &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition is temporary space; it is &#039;&#039;&#039;not a home directory&#039;&#039;&#039; for accounts and cannot be used for long-term data preservation. Users must use the &amp;quot;staging&amp;quot; procedure described below to ensure preservation of their data, codes and parameter files. The figure below is a schematic of the environment.&lt;br /&gt;
&lt;br /&gt;
Upon  registering with HPCC every user will get 2 directories:&lt;br /&gt;
&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – this is temporary workspace on the HPC systems. Currently scratch resides on the same file system as global/u.&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – space for “home directory”, i.e., storage space on the DSMS for program, scripts, and data&lt;br /&gt;
&lt;br /&gt;
:•	In some instances a user will also have use of disk space on the DSMS in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;projectid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; (IRods). &lt;br /&gt;
[[File:HPCC_structure.png|center|frameless|900x900px]]&lt;br /&gt;
The &#039;&#039;&#039;/global/u/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory has a quota (see below for details), while &#039;&#039;&#039;/scratch/&amp;lt;userid&amp;gt;&#039;&#039;&#039; does not. However, the &#039;&#039;&#039;/scratch&#039;&#039;&#039; space is cleaned up following the rules described below. There are no guarantees of any kind that files in &#039;&#039;&#039;/scratch&#039;&#039;&#039; will be preserved during hardware crashes or clean-up. Access to all HPCC resources is provided by the bastion host called &#039;&amp;lt;nowiki/&amp;gt;&#039;&#039;&#039;&#039;&#039;chizen&#039;&#039;&#039;&#039;&#039;. The Data Transfer Node called &#039;&#039;&#039;Cea&#039;&#039;&#039; allows file transfer from/to remote sites directly to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; or to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
==HPC systems==&lt;br /&gt;
&lt;br /&gt;
The HPC Center operates a variety of architectures in order to support complex and demanding workflows. All computational resources of different types are united into a single hybrid cluster called Arrow, which deploys symmetric multiprocessor (SMP) nodes with and without GPUs, a distributed shared memory (NUMA) node, fat (large-memory) nodes, and advanced SMP nodes with multiple GPUs. The number of GPUs per node varies between 2 and 8, as do the GPU interface and GPU family. Thus the basic GPU nodes hold two Tesla K20m GPUs (attached through the PCIe interface), while the most advanced ones support eight Ampere A100 GPUs connected via the SXM interface.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Overview of Computational architectures&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SMP&#039;&#039;&#039; servers have several processors (working under a single operating system) which &amp;quot;share everything&amp;quot;: all CPU cores access a common memory block via a shared bus or data path. SMP servers support all combinations of memory vs. CPU (up to the limits of the particular computer). SMP servers are commonly used to run sequential or thread-parallel (e.g. OpenMP) jobs, and they may or may not have GPUs.&lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;cluster&#039;&#039;&#039; is defined as a single system comprising a set of servers interconnected with a high-performance network. Specific software coordinates programs on and/or across them in order to perform computationally intensive tasks. The most common cluster type consists of several identical SMP servers connected via a fast interconnect. Each SMP member of the cluster is called a &#039;&#039;&#039;node&#039;&#039;&#039;. All nodes run independent copies of the same operating system (OS). Some or all of the nodes may incorporate GPUs.&lt;br /&gt;
&lt;br /&gt;
Hybrid clusters combine nodes of different architectures. For instance, the main CUNY-HPCC machine is a hybrid cluster called &#039;&#039;&#039;Arrow&#039;&#039;&#039;. Sixty-two (62) of its nodes are identical GPU-enabled SMP servers, each with 2 K20m GPUs; 3 are SMP servers with extended memory (fat nodes); one node is a distributed shared memory node (NUMA, see below); and 2 are fat SMP servers specially designed to support 8 NVIDIA GPUs per node, connected via the SXM interface. In addition, HPCC operates the cluster &#039;&#039;&#039;Herbert&#039;&#039;&#039;, dedicated only to education.&lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;distributed shared memory&#039;&#039;&#039; computer is a tightly coupled server in which the memory is physically distributed but logically unified as a single block. The system resembles an SMP, but the possible number of CPU cores and amount of memory are far beyond the limitations of an SMP. Because the memory is distributed, access times across the address space are non-uniform; hence this architecture is called Non-Uniform Memory Access (NUMA). Similarly to SMP, &#039;&#039;&#039;NUMA&#039;&#039;&#039; systems are typically used for applications such as data mining and decision support systems, in which processing can be parceled out to a number of processors that collectively work on common data. HPCC operates the &#039;&#039;&#039;NUMA&#039;&#039;&#039; node of Arrow, named &#039;&#039;&#039;Appel&#039;&#039;&#039;. This node does not have GPUs.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039; Infrastructure systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
o	Master Head Node (&#039;&#039;&#039;MHN/Arrow)&#039;&#039;&#039; is a redundant login node from which all jobs on all servers start. This server is not directly accessible from outside CSI campus. Note that name of main server and its login nodes are the same Arrow. Thus users can access the Arrow login nodes using name Arrow or MHN.  &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Chizen&#039;&#039;&#039; is a redundant gateway server which provides access to protected HPCC domain.&lt;br /&gt;
&lt;br /&gt;
o	 &#039;&#039;&#039;Cea&#039;&#039;&#039; is a file transfer node allowing transfer of files between users’ computers and the /scratch space or /global/u/&amp;lt;userid&amp;gt;. &#039;&#039;&#039;Cea&#039;&#039;&#039; is accessible directly (not only via &#039;&#039;&#039;Chizen&#039;&#039;&#039;), but allows only a limited set of shell commands.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Table 1&#039;&#039;&#039; below provides a quick summary of the attributes of each of the sub-clusters of the main HPC Center cluster, called Arrow.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Master Head Node&lt;br /&gt;
!Sub System&lt;br /&gt;
!Tier&lt;br /&gt;
!Type&lt;br /&gt;
!Type of Jobs&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores&lt;br /&gt;
!GPUs&lt;br /&gt;
!Mem/node&lt;br /&gt;
!Mem/core&lt;br /&gt;
!Chip Type&lt;br /&gt;
!GPU Type and Interface&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;17&amp;quot; |&#039;&#039;&#039;&amp;lt;big&amp;gt;Arrow&amp;lt;/big&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Penzias&lt;br /&gt;
| rowspan=&amp;quot;10&amp;quot; |Advanced&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Hybrid Cluster&lt;br /&gt;
|Sequential &amp;amp; Parallel jobs w/wo GPU&lt;br /&gt;
|66&lt;br /&gt;
|16 &lt;br /&gt;
|2&lt;br /&gt;
|64 GB&lt;br /&gt;
|4 GB&lt;br /&gt;
|SB, EP 2.20 GHz&lt;br /&gt;
|K20m GPU, PCIe v2&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Sequential &amp;amp; Parallel jobs&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |1&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|1500 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|36&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|32 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Appel&lt;br /&gt;
|NUMA&lt;br /&gt;
|Massive Parallel, sequential, OpenMP&lt;br /&gt;
|1&lt;br /&gt;
|384&lt;br /&gt;
| -&lt;br /&gt;
|11 TB&lt;br /&gt;
|28 GB&lt;br /&gt;
|IB, 3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Cryo&lt;br /&gt;
|SMP&lt;br /&gt;
|Sequential and Parallel jobs, with GPU&lt;br /&gt;
|1&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|1500 GB&lt;br /&gt;
|37 GB&lt;br /&gt;
|SL, 2.40 GHz&lt;br /&gt;
|V100 (32GB) GPU, SXM&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Blue Moon&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Hybrid Cluster&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Sequential and Parallel jobs w/wo GPU&lt;br /&gt;
|24&lt;br /&gt;
|32&lt;br /&gt;
| -&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |192 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |6 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SL, 2.10 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|32 &lt;br /&gt;
|2&lt;br /&gt;
|V100(16GB) GPU, PCIe&lt;br /&gt;
|-&lt;br /&gt;
|Karle&lt;br /&gt;
|SMP&lt;br /&gt;
|Visualization, MATLAB/Mathematica&lt;br /&gt;
|1&lt;br /&gt;
|36*&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Chizen&lt;br /&gt;
|Gateway&lt;br /&gt;
|No jobs allowed&lt;br /&gt;
| colspan=&amp;quot;7&amp;quot; | -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CFD&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot; |Parallel, Seq, OpenMP &lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|768 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 4.8 GHz&lt;br /&gt;
|A40, PCIe, v4 &lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |PHYS&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|640 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4 GHz&lt;br /&gt;
|L40, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CHEM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.0 GHz&lt;br /&gt;
|A100/40, SXM&lt;br /&gt;
|-&lt;br /&gt;
|ASRC&lt;br /&gt;
|Condo&lt;br /&gt;
|SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|}&lt;br /&gt;
Note: SB = Intel(R) Sandy Bridge, HL = Intel(R) Haswell, IB = Intel(R) Ivy Bridge, SL = Intel(R) Xeon(R) Gold, ER = AMD(R) EPYC Rome, EM = AMD(R) EPYC Milan, EG = AMD(R) EPYC Genoa&lt;br /&gt;
&lt;br /&gt;
== Recovery of  operational costs ==&lt;br /&gt;
&lt;br /&gt;
=== Compute on public resources ===&lt;br /&gt;
CUNY-HPCC is a not-for-profit core research facility at CUNY. Our mission is to support all types of research that require advanced computational resources. CUNY-HPCC operations are not for profit and are NOT directly or indirectly sponsored by CUNY or the College of Staten Island (CSI). Consequently, CUNY-HPCC applies a cost recovery model that recaptures only &#039;&#039;&#039;&amp;lt;u&amp;gt;operational costs, with no profit for HPCC&amp;lt;/u&amp;gt;&#039;&#039;&#039;. The recovered costs are calculated from actual documented operational expenses and are break-even for all CUNY users. The methodology follows the CUNY-RF-approved methodology used in other CUNY research facilities. The costs are reviewed and updated twice a year. The cost recovery charging scheme is based on the &#039;&#039;&#039;&amp;lt;u&amp;gt;unit-hour&amp;lt;/u&amp;gt;&#039;&#039;&#039;. A unit is either a CPU unit or a GPU unit; the definitions are given in the table below:&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Definitions of unit-hour&lt;br /&gt;
!Type of resource&lt;br /&gt;
!Unit-hour &lt;br /&gt;
!For V100, A30, A40 or L40&lt;br /&gt;
!For A100&lt;br /&gt;
|-&lt;br /&gt;
|CPU unit &lt;br /&gt;
|1 cpu core/hour&lt;br /&gt;
| --&lt;br /&gt;
| --&lt;br /&gt;
|-&lt;br /&gt;
|GPU unit &lt;br /&gt;
|(4 cpu cores + 1 GPU thread )/hour&lt;br /&gt;
|4 cpu cores + 1 GPU&lt;br /&gt;
|4 cpu cores and 1/7 A100 &lt;br /&gt;
|}&lt;br /&gt;
The cost recovery model for public (non-condominium) servers offers the following options:&lt;br /&gt;
&lt;br /&gt;
# Minimal Access Plan (MAP)&lt;br /&gt;
# Compute on Demand (CODP)&lt;br /&gt;
# Lease a node(s)&lt;br /&gt;
&lt;br /&gt;
Colleges may participate in any of the Minimal Access Plan tiers. The tiered pricing structure is as follows:&lt;br /&gt;
&lt;br /&gt;
- Tier A: $5,000 per year&lt;br /&gt;
&lt;br /&gt;
- Tier B: $10,000 per year&lt;br /&gt;
&lt;br /&gt;
- Tier C: $25,000 per year&lt;br /&gt;
&lt;br /&gt;
Within each tier, the cost is &#039;&#039;&#039;$0.015 per CPU hour and $0.09 per GPU hour.&#039;&#039;&#039; It is important to note that the Minimum Access Plan (MAP) tiers A, B, and C are &#039;&#039;&#039;not all-inclusive.&#039;&#039;&#039; Therefore, even if a college pays for a higher tier, it does not guarantee unlimited use by all of the college&#039;s employees, faculty, and students throughout the year.&lt;br /&gt;
&lt;br /&gt;
# The MAP plan is indirectly linked to the number of hours used. Consequently, the definition of “up to 12 users for B tier” should not be interpreted as “all users up to 12 per college receive unlimited access to HPCC resources.”&lt;br /&gt;
# The number of users per tier is determined by statistical analysis of resource usage and statistics for the average duration of a job across all CUNY institutions. This means that if the number of users from a college exceeds the number of hours encoded in the MAP fee, the additional hours will be charged at the preferred rate of $0.015 per CPU hour and $0.09 per GPU thread hour.&lt;br /&gt;
# Furthermore, the &amp;lt;u&amp;gt;MAP fee may fully cover the expenses for individuals from a given college for a year,&amp;lt;/u&amp;gt; but it may also not. This depends on the actual usage and type of resources, as well as the number of additional individuals from the same college who appear during the year. It is crucial to understand that allocation on HPCC is not restricted; our focus is solely on the completion of tasks. &lt;br /&gt;
# The free 11,520 CPU hours and 1,440 GPU hours are allocated per PI account and project and are available only for MAP-B and MAP-C (see below). These free hours are intended to facilitate project establishment, so the PI can ask one or more group members to use the time to explore new research opportunities. For instance, a PI receives the free hours upon creating an account; if the PI then hires graduate students, they may share these free hours provided they work on the same project.&lt;br /&gt;
# It is important to note that each GPU requires at least four CPU cores to operate. Therefore, a request for one GPU equates to one GPU plus four CPU cores, i.e. $0.09 + 4 × $0.015 = $0.15 per unit-hour for that unit (units are explained above). Note that not all GPUs support virtualization, so a unit may include the whole GPU, depending on the GPU type.&lt;br /&gt;
&lt;br /&gt;
=== Compute on demand (CODP) ===&lt;br /&gt;
Users from colleges that do not participate in MAP (A,B,C) are charged &#039;&#039;&#039;$0.018 per CPU hour and $0.11 per GPU hour.&#039;&#039;&#039; There is no free time associated with CODP.  &lt;br /&gt;
&lt;br /&gt;
=== Lease a public node(s) ===&lt;br /&gt;
MAP users are charged between $172 and $950 per month (see below), depending on the type of node. Non-MAP users are charged between $249.58 and $1,399 per month, depending on the type of node.&lt;br /&gt;
&lt;br /&gt;
=== Condo resources ===&lt;br /&gt;
Condo users only pay for infrastructure support.  The annual fee depends on the type of node and ranges from $1,540 to $4,520 per year. Please see below for details. &lt;br /&gt;
&lt;br /&gt;
=== Storage ===&lt;br /&gt;
Storage costs are $60 per TB per year, backup costs are $45 per TB per year, and archive costs are $35 per TB per year. The first 50 GB of scratch storage are free. Prices are calculated at the end of each month.&lt;br /&gt;
&lt;br /&gt;
# Additionally, note that &#039;&#039;&#039;file transfers from/to HPCC are free&#039;&#039;&#039;, but it is important to consider the CUNY network speed, which is significantly slower than modern standards. This affects the time required to download large data sets. For large data sets, HPCC uses and recommends secure parallel transfer via Globus.&lt;br /&gt;
# All services associated with data and storage provided by HPCC are free for CUNY users.&lt;br /&gt;
&lt;br /&gt;
=== HPCC access plans  ===&lt;br /&gt;
a.     &#039;&#039;&#039;Minimum access (MAP):&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Minimum access is designed to provide wide support for research activities in any college, to promote collaboration between colleges, to help establish a new research project, and/or to serve as a testbed for new studies. MAP accounts operate under a strict fair-share policy, so the actual waiting time for a job in a queue depends on the resources used by that account in previous cycles. In addition, all jobs have strict time limits, so long jobs must use checkpoints.&lt;br /&gt;
&lt;br /&gt;
The MAP has 3 tiers:&lt;br /&gt;
&lt;br /&gt;
·     A: Basic tier fee is $5000 per year. It is designed to provide support for users from colleges with low level of research activities. The fee covers infrastructure expenses associated with 1-2 users from these colleges. &lt;br /&gt;
&lt;br /&gt;
·     B:  Medium tier fee is $15,000 per year. The fee covers infrastructure expenses of up to 12 users from these colleges. In addition, every account under medium tier gets free 11520 CPU hours and free 1440 GPU hours upon opening. &lt;br /&gt;
&lt;br /&gt;
·      C: Advanced tier is $25,000 per year. The fee covers infrastructure expenses for all users from these colleges. In addition every new account from this tier gets free 11520 CPU hours and free 1440 GPU hours upon opening.  &lt;br /&gt;
&lt;br /&gt;
MAP users are charged per CPU/GPU hour at the low rate of &#039;&#039;&#039;&amp;lt;u&amp;gt;$0.015 per CPU hour and $0.09 per GPU hour&amp;lt;/u&amp;gt;.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for MAP users&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.015/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.24/hour&lt;br /&gt;
|-&lt;br /&gt;
| 4 cores + 1 GPU &lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.15/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.33/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2  &lt;br /&gt;
|$0.42/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2  &lt;br /&gt;
|$0.66/hour&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$1.32/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
b.     &#039;&#039;&#039;Computing on demand (CODP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The computing on demand plan (CODP) is open to users from all CUNY colleges that do not participate in the MAP plan but want to use HPCC resources. CODP accounts operate under a strict fair-share policy, so the actual waiting time for a job in a queue depends on the resources previously used. In addition, all jobs have time limits, so long jobs must use checkpoints. CODP users are charged for the time (CPU and GPU) per hour. The current rates are &#039;&#039;&#039;$0.018 per CPU hour and $0.11 per GPU hour.&#039;&#039;&#039; Unlike MAP, new CODP accounts do not come with free time. Invoices are generated and sent to users (PI only) at the end of each month. The examples in the following table explain the fee structure:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for CODP plan&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.018/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.288/hour&lt;br /&gt;
|-&lt;br /&gt;
|4 cores + 1 GPU&lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.293/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.334/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 1 GPU&lt;br /&gt;
|32&lt;br /&gt;
|1  &lt;br /&gt;
|$0.666/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU &lt;br /&gt;
|32&lt;br /&gt;
|2&lt;br /&gt;
|$0.756/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
c.  &#039;&#039;&#039;Leasing public node(s) (LNP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The leasing node plan allows users to lease node(s) for the duration of a project. The minimum lease time is 30 days (one month), but leases of any length are possible. A 10% discount is given to users whose lease is longer than 90 days. Discounts cannot be combined. Unlike MAP and CODP, LNP users do not compete for resources and have full access to the leased resources 24/7.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for lease node(s) for MAP users&lt;br /&gt;
|Job (MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/30 days&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$172.80&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$264.96&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2&lt;br /&gt;
|$302.40&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$475.20&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$760.0&lt;br /&gt;
|-&lt;br /&gt;
|64 cores + 8 GPU&lt;br /&gt;
|64&lt;br /&gt;
|8&lt;br /&gt;
|$950.40&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for lease a node(s) for &amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; non &amp;lt;/span&amp;gt;-MAP users&lt;br /&gt;
|Job (non-MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/month&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$249.82&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$497.64&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2 &lt;br /&gt;
|$443.23&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$886.64&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40 &lt;br /&gt;
|8&lt;br /&gt;
|$1399.68&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
d.     &#039;&#039;&#039;Condo Ownership (COP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Condo describes a model in which user(s) own a node/server managed by HPCC. Only full-time faculty can own a condo node. Condo nodes are fully integrated into the HPCC infrastructure. The owners pay only HPCC’s infrastructure support operational fee, which covers only a proportional part of the licenses and materials needed for day-to-day operations. The fees are reviewed twice a year and are currently $0.003 per CPU hour and $0.02 per GPU hour. Condo owners can “borrow” (upon agreement), free of charge, any node(s) from the condo stack and can also lease (for a higher fee – see below) their own nodes to non-condo users. The minimum lease time is 30 days. The fees collected from non-condo users offset the owner’s payments.&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Condo owners costs per year&lt;br /&gt;
|Type of condo node&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/year&lt;br /&gt;
|-&lt;br /&gt;
|Large hybrid SXM&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|$4518.92 &lt;br /&gt;
|-&lt;br /&gt;
|Small hybrid&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|$1540.54&lt;br /&gt;
|-&lt;br /&gt;
|Medium compute&lt;br /&gt;
|96&lt;br /&gt;
|0&lt;br /&gt;
|$2464.86&lt;br /&gt;
|-&lt;br /&gt;
|Large compute&lt;br /&gt;
|128&lt;br /&gt;
|0&lt;br /&gt;
|$3286.49&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Condo owners can rent their node(s) to other non-condo users. The rental period is unlimited, with a minimum length of 30 days. The table below shows the payments that non-condo users make to the condo owners. These fees accumulate in the owner’s account(s) and offset the owner’s fees. A 10% discount is applied to leases longer than 90 days.&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Type of nodes and monthly lease fees for condo nodes&lt;br /&gt;
!Type of node&lt;br /&gt;
!Renters cost/month&lt;br /&gt;
!Long term (90+ days) rent cost/month&lt;br /&gt;
!CPU/node&lt;br /&gt;
!CPU type&lt;br /&gt;
!GPU/node &lt;br /&gt;
!GPU type&lt;br /&gt;
!GPU interface&lt;br /&gt;
|-&lt;br /&gt;
|Large Hybrid&lt;br /&gt;
|$602.52&lt;br /&gt;
|$564.86&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.2 GHz&lt;br /&gt;
|8&lt;br /&gt;
|A100/80&lt;br /&gt;
|SXM&lt;br /&gt;
|-&lt;br /&gt;
|Small Hybrid&lt;br /&gt;
|$205.41&lt;br /&gt;
|$192.57&lt;br /&gt;
|48&lt;br /&gt;
|EPYC, 2.8 GHz&lt;br /&gt;
|2&lt;br /&gt;
|A40, A30, L40&lt;br /&gt;
|PCIe v4&lt;br /&gt;
|-&lt;br /&gt;
|Medium Non GPU&lt;br /&gt;
|$328.65&lt;br /&gt;
|$308.11&lt;br /&gt;
|96&lt;br /&gt;
|EPYC, 4.11 GHz&lt;br /&gt;
|0&lt;br /&gt;
|None&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|Large Non GPU&lt;br /&gt;
|$438.20&lt;br /&gt;
|$410.81&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.0 GHz&lt;br /&gt;
|0&lt;br /&gt;
|None &lt;br /&gt;
|NA&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Free time ===&lt;br /&gt;
Any new project &#039;&#039;&#039;from CUNY colleges/centers that participate in MAP-B or MAP-C&#039;&#039;&#039; is entitled to &#039;&#039;&#039;11520 free CPU hours and 1440 free GPU hours.&#039;&#039;&#039; Users under &#039;&#039;&#039;MAP-A&#039;&#039;&#039; are not entitled to free time. The free compute hours are intended to help establish a project and are therefore shared by all members of the project; they can be used either by the PI or by any number of the project&#039;s members. It is important to note that &amp;lt;u&amp;gt;free time is per project, not per user account, so any project can receive free time only once. External collaborators outside CUNY are not normally eligible for free time.&amp;lt;/u&amp;gt; Hours beyond the free allocation are charged at the MAP plan rates. &#039;&#039;&#039;&amp;lt;u&amp;gt;Please contact the CUNY-HPCC director for further details.&amp;lt;/u&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Support for research grants ==&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;u&amp;gt;All proposals dated Jan 1st, 2026 (&amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; 01/01/26 &amp;lt;/span&amp;gt;) and later&amp;lt;/u&amp;gt;&#039;&#039;&#039; that require computational resources &#039;&#039;&#039;&amp;lt;u&amp;gt;must include a budget for cost recovery fees at CUNY-HPCC.&amp;lt;/u&amp;gt;&#039;&#039;&#039;  For a project the PI can choose among: &lt;br /&gt;
&lt;br /&gt;
* Lease node(s). This is a useful option for well-defined projects and for those with a heavy computational component that requires 100% availability of the computational resource. &lt;br /&gt;
* Use &amp;quot;on-demand&amp;quot; resources. This is a flexible option suited to experimental projects or to exploring new areas of study. The drawback is that resources are shared among all users under the fair share policy, so immediate access to a resource cannot be guaranteed. &lt;br /&gt;
* Participate in the CONDO tier. This is the most beneficial option in terms of availability of resources and level of support. It best fits the focused research of a group or groups (e.g. materials science). &lt;br /&gt;
&lt;br /&gt;
In all cases the PI can use the appropriate rates listed above to establish a correct budget for the proposal. The PI should &#039;&#039;&#039;&amp;lt;u&amp;gt;contact the Director of CUNY-HPCC, Dr. Alexander Tzanov&amp;lt;/u&amp;gt;&#039;&#039;&#039; (alexander.tzanov@csi.cuny.edu) to discuss the project&#039;s computational requirements, including optimal and most economical computational workflows, suitable hardware, shared or owned resources, CUNY-HPCC support options, and any other matter concerning a correct and optimal computational budget for the proposal.&lt;br /&gt;
&lt;br /&gt;
== Partitions and jobs ==&lt;br /&gt;
The only way to submit job(s) to HPCC servers is through the SLURM batch system. Any job, regardless of its type (interactive, batch, serial, parallel, etc.), must be submitted via SLURM. The latter allocates the requested resources on the proper server and starts the job(s) according to a predefined strict fair share policy. Computational resources (cpu-cores, memory, GPU) are organized in &#039;&#039;&#039;partitions&#039;&#039;&#039;. Users are granted permission to use one or another partition and the corresponding QOS key. The table below shows the partitions and their limitations (in progress).&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Partition&lt;br /&gt;
!Max cores/job&lt;br /&gt;
!Max jobs/user&lt;br /&gt;
!Total cores/group&lt;br /&gt;
!Time limits&lt;br /&gt;
!Tier&lt;br /&gt;
!&lt;br /&gt;
!GPU types&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|partnsf&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|K20m, V100/16, A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partchem&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A100/80, A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partcfd&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partsym&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partasrc&lt;br /&gt;
|48&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabD&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|V100/16,A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabN&lt;br /&gt;
|384&lt;br /&gt;
|50&lt;br /&gt;
|384&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|None&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partphys&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|L40&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;partnsf&#039;&#039;&#039; is the main partition with assigned resources across all sub-servers. Users may submit sequential, thread parallel or distributed parallel jobs with or without GPU.&lt;br /&gt;
* &#039;&#039;&#039;partchem&#039;&#039;&#039;  is CONDO partition.  &lt;br /&gt;
* &#039;&#039;&#039;partphys&#039;&#039;&#039;   is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partsym&#039;&#039;&#039;    is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partasrc&#039;&#039;&#039;   is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partmatlabD&#039;&#039;&#039; partition allows running MATLAB&#039;s Distributed Parallel Server across the main cluster. &lt;br /&gt;
* &#039;&#039;&#039;partmatlabN&#039;&#039;&#039; partition provides access to the large MATLAB node with 384 cores and 11 TB of shared memory. It is useful for running parallel MATLAB jobs with the Parallel Toolbox.&lt;br /&gt;
* &#039;&#039;&#039;partdev&#039;&#039;&#039; is dedicated to development. All HPCC users have access to this partition, with assigned resources of one computational node with 16 cores, 64 GB of memory and 2 GPUs (K20m). This partition has a time limit of 4 hours (a sample batch script sketch follows this list).&lt;br /&gt;
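&lt;br /&gt;
Below is a minimal, illustrative sketch of a SLURM batch script for the &#039;&#039;&#039;partdev&#039;&#039;&#039; development partition. The job name, QOS key, user id and executable are placeholders only; the actual QOS key and any required module loads are issued with your account, so please consult the brief SLURM manual before adapting it.&lt;br /&gt;
  #!/bin/bash&lt;br /&gt;
  #SBATCH --job-name=myjob            # placeholder job name&lt;br /&gt;
  #SBATCH --partition=partdev         # development partition (16 cores, 64 GB, 2 x K20m, 4 hour limit)&lt;br /&gt;
  #SBATCH --qos=your_qos_key          # placeholder; use the QOS key granted with your account&lt;br /&gt;
  #SBATCH --ntasks=4                  # number of tasks (cpu cores)&lt;br /&gt;
  #SBATCH --gres=gpu:1                # request one GPU; omit this line for CPU-only jobs&lt;br /&gt;
  #SBATCH --time=01:00:00             # wall time, within the 4 hour partition limit&lt;br /&gt;
  #SBATCH --output=myjob_%j.out       # stdout/stderr file (%j expands to the job id)&lt;br /&gt;
  cd /scratch/&amp;lt;userid&amp;gt;/mySLURM_Job   # jobs must be staged to and started from /scratch&lt;br /&gt;
  srun ./a.out                        # run the (placeholder) executable&lt;br /&gt;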
&lt;br /&gt;
== Hours of Operation ==&lt;br /&gt;
In order to maximize the use of resources, HPCC applies a “rolling” maintenance scheme across all systems. When downtime is needed, HPCC will notify all users a week or more in advance (unless an emergency situation occurs). The fourth Tuesday morning of each month, from 8:00 AM to 12 PM, is normally reserved (but not always used) for scheduled maintenance. Please plan accordingly. Unplanned maintenance to remedy system-related problems may be scheduled as needed outside the above-mentioned window. Reasonable attempts will be made to inform users running on the affected systems when these needs arise. Note that users are strongly encouraged to use checkpoints in their jobs.&lt;br /&gt;
&lt;br /&gt;
== User Support ==&lt;br /&gt;
Users are strongly encouraged to read this Wiki carefully before submitting ticket(s) for help. In particular, the sections on compiling and running parallel programs, and the section on the SLURM batch queueing system will give you the essential knowledge needed to use the CUNY HPCC systems.  We have strived to maintain the most uniform user applications environment possible across the Center&#039;s systems to ease the transfer of applications&lt;br /&gt;
and run scripts among them.  &lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY&lt;br /&gt;
community in parallel programming techniques, HPC computing architecture, and the essentials of using our&lt;br /&gt;
systems. Please follow our mailings on the subject and feel free to inquire about such courses.  We regularly schedule training visits and classes at the various CUNY campuses.  Please let us know if such a training visit is of interest.  In the past, topics have included an overview of parallel programming, GPU programming and architecture, using the evolutionary biology software at the HPC Center, the SLURM queueing system at the CUNY HPC Center, mixed GPU-MPI and OpenMP programming, etc.  Staff have also presented guest lectures in formal classes throughout the CUNY campuses.    &lt;br /&gt;
&lt;br /&gt;
If you have problems accessing your account and cannot login to the ticketing service, please send an email to:&lt;br /&gt;
&lt;br /&gt;
  [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu] &lt;br /&gt;
&lt;br /&gt;
== Warnings and modes of operation ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. hpchelp@csi.cuny.edu is for questions and account-help communication &#039;&#039;&#039;only&#039;&#039;&#039; and does not accept tickets unless the ticketing system is not operational. For tickets please use the ticketing system mentioned above. This ensures that the staff member with the most appropriate skill set and job-related responsibility will respond to your questions. During the business week you should expect a response within 48 hours, quite often even a same-day response. During the weekend you may not get any response. &lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;&#039;E-mails to hpchelp@csi.cuny.edu must have a valid CUNY e-mail as the reply address.&#039;&#039;&#039; Messages originating from public mail providers (Gmail, Hotmail, etc.) are filtered out.&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;&#039;Do not send questions to individual CUNY HPC Center staff members directly.&#039;&#039;&#039;  These will be returned to the sender with a polite request to submit a ticket or email the Helpline.  This applies to replies to initial questions as well.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff members are focused on providing high quality support to its user community, but compared&lt;br /&gt;
to other HPC Centers of similar size &#039;&#039;&#039;our staff is extremely lean&#039;&#039;&#039;.  Please make full use of the tools that we have provided (especially&lt;br /&gt;
the Wiki), and feel free to offer suggestions for improved service.  We hope and expect your experience in using&lt;br /&gt;
our systems will be predictably good and productive.&lt;br /&gt;
&lt;br /&gt;
== User Manual ==&lt;br /&gt;
&lt;br /&gt;
The old version of the user manual provides PBS, not SLURM, batch scripts as examples. CUNY-HPCC currently uses the SLURM scheduler, so users must rely only on the updated brief SLURM manual distributed with new accounts, or ask CUNY-HPCC for a copy of it.&lt;/div&gt;</summary>
		<author><name>Alex</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=976</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=976"/>
		<updated>2026-03-14T14:57:45Z</updated>

		<summary type="html">&lt;p&gt;Alex: /* Recovery of  operational costs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
[[Image:hpcc-panorama3.png]]&lt;br /&gt;
&lt;br /&gt;
== Mission of CUNY-HPCC ==&lt;br /&gt;
The City University of New York (CUNY) High Performance Computing Center (HPCC) is a core research and education facility for the University. It is located on the campus of the College of Staten Island, 2800 Victory Boulevard, Staten Island, New York 10314. The core mission of CUNY-HPCC is to advance education and boost scientific research and discovery at the University by managing state-of-the-art computing infrastructure and by providing comprehensive research support services, including domain-specific expertise in various aspects of computationally intensive research. CUNY is a member of the Empire AI (EAI) consortium, so CUNY-HPCC is a stepping stone for CUNY researchers toward EAI advanced facilities.  &lt;br /&gt;
&lt;br /&gt;
== Empire AI and CUNY-HPCC ==&lt;br /&gt;
The Empire AI consortium includes the &#039;&#039;&#039;CUNY Graduate Center&#039;&#039;&#039;, Columbia University, Cornell University, Icahn School of Medicine, New York University, Rochester Institute of Technology, Rensselaer Polytechnic Institute, the State University of New York, University at Buffalo, and University of Rochester.  &#039;&#039;&#039;CUNY-HPCC provides support and maintains tickets for all CUNY users with an allocation on EAI.  In addition, CUNY-HPCC is a stepping stone for CUNY researchers since it operates (on a smaller scale) architectures similar to EAI (including nodes with Hopper), such as the extended &amp;quot;Alpha&amp;quot; server as well as the new &amp;quot;Beta&amp;quot; computer.&#039;&#039;&#039; The latter will consist of 288 B200 GPUs and recently added RTX 6000 Pro nodes. The expected cost for &#039;&#039;&#039;EAI is $0.50/unit (SU),&#039;&#039;&#039; which will provide CUNY PIs with a rate that is well below a typical AWS rate.  One SU corresponds to one hour of H100 compute, and one hour of B200 compute corresponds to two SU.  In comparison, the CUNY-HPCC recovery costs for public servers are &#039;&#039;&#039;$0.015 per CPU hour (1 unit) and $0.09 per GPU hour (6 units).&#039;&#039;&#039;  See the section HPCC access plans for further details.    &lt;br /&gt;
&lt;br /&gt;
== CUNY-HPCC services ==&lt;br /&gt;
CUNY-HPCC provides a professionally maintained, modern computational environment and architectures, advanced storage and fast interconnects. CUNY-HPCC:     &lt;br /&gt;
&lt;br /&gt;
:*Supports research computing at CUNY - for faculty, their collaborators at other universities, and their public and private sector partners, and CUNY students and research staff.&lt;br /&gt;
:*Provides state-of-the-art computing resources and comprehensive research support services including expertise and full support for users with allocation on EMPIRE-AI. &lt;br /&gt;
:*Creates opportunities for the CUNY research community to develop new partnerships with the government and private sectors.&lt;br /&gt;
:*Leverages HPCC capabilities to acquire additional research resources for its faculty and graduate students in existing and major new programs.&lt;br /&gt;
:*Maintains tickets for all CUNY users with allocation on EAI.&lt;br /&gt;
&lt;br /&gt;
==Organization of systems and data storage (architecture)==&lt;br /&gt;
&lt;br /&gt;
All user data and project data are kept on the Parallel File System Storage (PFSS), which is mounted only on the login node(s) of all servers. It holds both user directories and a specific partition called &#039;&#039;&#039;/scratch&#039;&#039;&#039; (see below). The main features of the &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition are: &#039;&#039;&#039;1.&#039;&#039;&#039; it is mounted on all computational nodes and on all login nodes; &#039;&#039;&#039;2.&#039;&#039;&#039; it is fast; &#039;&#039;&#039;3.&#039;&#039;&#039; the &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition is temporary space and is &#039;&#039;&#039;not a home directory&#039;&#039;&#039; for accounts, nor can it be used for long-term data preservation. Users must use the &amp;quot;staging&amp;quot; procedure described below to ensure preservation of their data, codes and parameter files. The figure below is a schematic of the environment.   &lt;br /&gt;
&lt;br /&gt;
Upon  registering with HPCC every user will get 2 directories:&lt;br /&gt;
&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – this is temporary workspace on the HPC systems. Currently scratch resides on the same file system as global/u.&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – space for “home directory”, i.e., storage space on the DSMS for program, scripts, and data&lt;br /&gt;
&lt;br /&gt;
:•	In some instances a user will also have use of disk space on the DSMS in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;projectid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; (IRods). &lt;br /&gt;
[[File:HPCC_structure.png|center|frameless|900x900px]]&lt;br /&gt;
The &#039;&#039;&#039;/global/u/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory has a quota (see below for details) while &#039;&#039;&#039;/scratch/&amp;lt;userid&amp;gt;&#039;&#039;&#039; does not. However, the &#039;&#039;&#039;/scratch&#039;&#039;&#039; space is cleaned up following the rules described below. There are no guarantees of any kind that files in &#039;&#039;&#039;/scratch&#039;&#039;&#039; will be preserved during hardware crashes or cleanup. Access to all HPCC resources is provided by a bastion host called &#039;&#039;&#039;chizen&#039;&#039;&#039;. The Data Transfer Node called &#039;&#039;&#039;Cea&#039;&#039;&#039; allows file transfer from/to remote sites directly to/from &#039;&#039;&#039;/global/u/&amp;lt;userid&amp;gt;&#039;&#039;&#039; or to/from &#039;&#039;&#039;/scratch/&amp;lt;userid&amp;gt;&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
==HPC systems==&lt;br /&gt;
&lt;br /&gt;
The HPC Center operates a variety of architectures in order to support complex and demanding workflows. All computational resources of different types are united into a single hybrid cluster called Arrow. The latter deploys symmetric multiprocessor (also referred to as SMP) nodes with and without GPUs, a distributed shared memory (NUMA) node, fat (large memory) nodes and advanced SMP nodes with multiple GPUs. The number of GPUs per node varies between 2 and 8, as do the GPU interface and GPU family. Thus the basic GPU nodes hold two Tesla K20m GPUs (attached through the PCIe interface), while the most advanced ones support eight Ampere A100 GPUs connected via the SXM interface.   &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Overview of Computational architectures&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SMP&#039;&#039;&#039; servers have several processors (working under a single operating system) which &amp;quot;share everything&amp;quot;: all cpu-cores access a common memory block via a shared bus or data path. SMP servers support all combinations of memory vs. cpu (up to the limits of the particular computer). SMP servers are commonly used to run sequential or thread-parallel (e.g. OpenMP) jobs, and they may or may not have GPUs.  &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;cluster&#039;&#039;&#039; is defined as a single system comprising a set of servers interconnected with a high performance network. Specific software coordinates programs on and/or across these servers in order to perform computationally intensive tasks. The most common cluster type consists of several identical SMP servers connected via a fast interconnect. Each SMP member of the cluster is called a &#039;&#039;&#039;node&#039;&#039;&#039;. All nodes run independent copies of the same operating system (OS). Some or all of the nodes may incorporate GPUs.  &lt;br /&gt;
&lt;br /&gt;
Hybrid clusters combine nodes of different architectures. For instance, the main CUNY-HPCC machine is a hybrid cluster called &#039;&#039;&#039;Arrow&#039;&#039;&#039;. Sixty-two (62) of its nodes are identical GPU-enabled SMP servers, each with 2 K20m GPUs; 3 are SMP nodes with extended memory (fat nodes); one node is a distributed shared memory node (NUMA, see below); and 2 are fat SMP servers especially designed to support 8 NVIDIA GPUs per node, connected via the SXM interface. In addition, HPCC operates the cluster &#039;&#039;&#039;Herbert&#039;&#039;&#039;, dedicated only to education. &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;distributed shared memory&#039;&#039;&#039; computer is a tightly coupled server in which the memory is physically distributed but logically unified as a single block. The system resembles an SMP, but the possible number of cpu cores and amount of memory are far beyond the limitations of SMP. Because the memory is distributed, access times across the address space are non-uniform; thus, this architecture is called Non-Uniform Memory Access (NUMA) architecture. Similarly to SMP, &#039;&#039;&#039;NUMA&#039;&#039;&#039; systems are typically used for applications such as data mining and decision support systems, in which processing can be parceled out to a number of processors that collectively work on common data. HPCC operates the &#039;&#039;&#039;NUMA&#039;&#039;&#039; node in Arrow named &#039;&#039;&#039;Appel&#039;&#039;&#039;. This node does not have a GPU. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039; Infrastructure systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
o	The Master Head Node (&#039;&#039;&#039;MHN/Arrow&#039;&#039;&#039;) is a redundant login node from which all jobs on all servers start. This server is not directly accessible from outside the CSI campus. Note that the main server and its login nodes share the same name, Arrow; thus users can access the Arrow login nodes using the name Arrow or MHN.  &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Chizen&#039;&#039;&#039; is a redundant gateway server which provides access to protected HPCC domain.&lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Cea&#039;&#039;&#039; is a file transfer node allowing transfer of files between users’ computers and the /scratch space or /global/u/&amp;lt;userid&amp;gt;. &#039;&#039;&#039;Cea&#039;&#039;&#039; is accessible directly (not only via &#039;&#039;&#039;Chizen&#039;&#039;&#039;), but allows only a limited set of shell commands (see the illustrative sketch below).  &lt;br /&gt;
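&lt;br /&gt;
A purely illustrative sketch of staging a file through &#039;&#039;&#039;Cea&#039;&#039;&#039; with standard scp; the host name cea.csi.cuny.edu is an assumed placeholder, so please obtain the actual address of &#039;&#039;&#039;Cea&#039;&#039;&#039; from CUNY-HPCC before use:&lt;br /&gt;
  # copy a file from a local machine into the user&#039;s home directory on the HPCC side (placeholder host name)&lt;br /&gt;
  scp mydata.tar.gz &amp;lt;userid&amp;gt;@cea.csi.cuny.edu:/global/u/&amp;lt;userid&amp;gt;/&lt;br /&gt;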
&lt;br /&gt;
&#039;&#039;&#039;Table 1&#039;&#039;&#039; below provides a quick summary of the attributes of each of the sub-clusters of the main HPC Center machine called Arrow.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Master Head Node&lt;br /&gt;
!Sub System&lt;br /&gt;
!Tier&lt;br /&gt;
!Type&lt;br /&gt;
!Type of Jobs&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores&lt;br /&gt;
!GPUs&lt;br /&gt;
!Mem/node&lt;br /&gt;
!Mem/core&lt;br /&gt;
!Chip Type&lt;br /&gt;
!GPU Type and Interface&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;17&amp;quot; |&#039;&#039;&#039;&amp;lt;big&amp;gt;Arrow&amp;lt;/big&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Penzias&lt;br /&gt;
| rowspan=&amp;quot;10&amp;quot; |Advanced&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Hybrid Cluster&lt;br /&gt;
|Sequential &amp;amp; Parallel jobs w/wo GPU&lt;br /&gt;
|66&lt;br /&gt;
|16 &lt;br /&gt;
|2&lt;br /&gt;
|64 GB&lt;br /&gt;
|4 GB&lt;br /&gt;
|SB, EP 2.20 GHz&lt;br /&gt;
|K20m GPU, PCIe v2&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Sequential &amp;amp; Parallel jobs&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |1&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|1500 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|36&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|32 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Appel&lt;br /&gt;
|NUMA&lt;br /&gt;
|Massive Parallel, sequential, OpenMP&lt;br /&gt;
|1&lt;br /&gt;
|384&lt;br /&gt;
| -&lt;br /&gt;
|11 TB&lt;br /&gt;
|28 GB&lt;br /&gt;
|IB, 3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Cryo&lt;br /&gt;
|SMP&lt;br /&gt;
|Sequential and Parallel jobs, with GPU&lt;br /&gt;
|1&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|1500 GB&lt;br /&gt;
|37 GB&lt;br /&gt;
|SL, 2.40 GHz&lt;br /&gt;
|V100 (32GB) GPU, SXM&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Blue Moon&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Hybrid Cluster&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Sequential and Parallel jobs w/wo GPU&lt;br /&gt;
|24&lt;br /&gt;
|32&lt;br /&gt;
| -&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |192 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |6 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SL, 2.10 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|32 &lt;br /&gt;
|2&lt;br /&gt;
|V100(16GB) GPU, PCIe&lt;br /&gt;
|-&lt;br /&gt;
|Karle&lt;br /&gt;
|SMP&lt;br /&gt;
|Visualization, MATLAB/Mathematica&lt;br /&gt;
|1&lt;br /&gt;
|36*&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Chizen&lt;br /&gt;
|Gateway&lt;br /&gt;
|No jobs allowed&lt;br /&gt;
| colspan=&amp;quot;7&amp;quot; | -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CFD&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot; |Parallel, Seq, OpenMP &lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|768 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 4.8 GHz&lt;br /&gt;
|A40, PCIe, v4 &lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |PHYS&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|640 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4 GHz&lt;br /&gt;
|L40, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CHEM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.0 GHz&lt;br /&gt;
|A100/40, SXM&lt;br /&gt;
|-&lt;br /&gt;
|ASRC&lt;br /&gt;
|Condo&lt;br /&gt;
|SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|}&lt;br /&gt;
Note: SB = Intel(R) Sandy Bridge, HL = Intel(R) Haswell, IB = Intel(R) Ivy Bridge, SL = Intel(R) Xeon(R) Gold, ER = AMD(R) EPYC ROME, EM = AMD(R) EPYC MILAN, EG = AMD(R) EPYC GENOA   &lt;br /&gt;
&lt;br /&gt;
== Recovery of  operational costs ==&lt;br /&gt;
&lt;br /&gt;
=== Compute public resources ===&lt;br /&gt;
CUNY-HPCC is a not-for-profit core research facility at CUNY. Our mission is to support all types of research that require advanced computational resources. CUNY-HPCC operations are not for profit and are NOT directly or indirectly sponsored by CUNY or the College of Staten Island (CSI). Consequently, CUNY-HPCC applies a cost recovery model recapturing only &#039;&#039;&#039;&amp;lt;u&amp;gt;operational costs, with no profit for HPCC&amp;lt;/u&amp;gt;&#039;&#039;&#039;. The recovered costs are calculated from actual documented operational expenses and are break-even for all CUNY users. The methodology follows the CUNY-RF-approved methodology used in other CUNY research facilities. The costs are reviewed and updated twice a year. The cost recovery charging scheme is based on the &#039;&#039;&#039;&amp;lt;u&amp;gt;unit-hour&amp;lt;/u&amp;gt;&#039;&#039;&#039;. A unit can be either a CPU unit or a GPU unit; the definitions are given in the table below:&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Definitions of unit-hour&lt;br /&gt;
!Type of resource&lt;br /&gt;
!Unit-hour &lt;br /&gt;
!For V100, A30, A40 or L40&lt;br /&gt;
!For A100&lt;br /&gt;
|-&lt;br /&gt;
|CPU unit &lt;br /&gt;
|1 cpu core/hour&lt;br /&gt;
| --&lt;br /&gt;
| --&lt;br /&gt;
|-&lt;br /&gt;
|GPU unit &lt;br /&gt;
|(4 cpu cores + 1 GPU thread )/hour&lt;br /&gt;
|4 cpu cores + 1 GPU&lt;br /&gt;
|4 cpu cores and 1/7 A100 &lt;br /&gt;
|}&lt;br /&gt;
The cost recovery model for public (non-condominium) servers offers the following options:&lt;br /&gt;
&lt;br /&gt;
# Minimal Access Plan (MAP)&lt;br /&gt;
# Compute on Demand (CODP)&lt;br /&gt;
# Lease a node(s)&lt;br /&gt;
&lt;br /&gt;
Colleges may participate in any of the Minimal Access Plan tiers. The tiered pricing structure is as follows:&lt;br /&gt;
&lt;br /&gt;
- Tier A: $5,000 per year&lt;br /&gt;
&lt;br /&gt;
- Tier B: $10,000 per year&lt;br /&gt;
&lt;br /&gt;
- Tier C: $25,000 per year&lt;br /&gt;
&lt;br /&gt;
Within each tier, the cost is &#039;&#039;&#039;$0.015 per CPU hour and $0.09 per GPU hour.&#039;&#039;&#039; It is important to note that the Minimum Access Plan (MAP) tiers A, B, and C are &#039;&#039;&#039;not all-inclusive.&#039;&#039;&#039; Therefore, even if a college pays for a higher tier, it does not guarantee unlimited use by all college employees, faculty, and students throughout the year.&lt;br /&gt;
&lt;br /&gt;
# The MAP plan is indirectly linked to the number of hours used. Consequently, the definition of “up to 12 users for B tier” should not be interpreted as “all users up to 12 per college receive unlimited access to HPCC resources.”&lt;br /&gt;
# The number of users per tier is determined by statistical analysis of resource usage and statistics for the average duration of a job across all CUNY institutions. This means that if the number of users from a college exceeds the number of hours encoded in the MAP fee, the additional hours will be charged at the preferred rate of $0.015 per CPU hour and $0.09 per GPU thread hour.&lt;br /&gt;
# Furthermore, the &amp;lt;u&amp;gt;MAP fee may fully cover the expenses for individuals from a given college for a year,&amp;lt;/u&amp;gt; but it may also not. This depends on the actual usage and type of resources, as well as the number of additional individuals from the same college who appear during the year. It is crucial to understand that allocation on HPCC is not restricted; our focus is solely on the completion of tasks. &lt;br /&gt;
# The free 11,520 CPU hours and 1,440 GPU hours are allocated per PI account and project and are available only for MAP-B and MAP-C (see below). These free hours are intended to facilitate project establishment. Consequently, the PI can request that a member or members of the group use the time to explore new research opportunities. For instance, upon creating an account the PI will receive the free hours; if the PI then hires graduate students, they may share these free hours provided they work on the same project.&lt;br /&gt;
# It is important to note that each GPU requires at least four CPU cores to operate. Therefore, if a user requests one GPU thread, this equates to one GPU thread plus four CPU threads, which is equivalent to $0.15 per unit-hour for that unit (units are explained above; a worked example follows this list). Note that not all GPUs support virtualization, so a unit may include the whole GPU, depending on the GPU type used.  &lt;br /&gt;
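&lt;br /&gt;
The following worked example is illustrative only (actual invoices are generated from recorded usage) and uses the MAP rates quoted above:&lt;br /&gt;
  4 cpu cores + 1 GPU for 24 hours:   24 x $0.15 (one GPU unit-hour)       = $3.60&lt;br /&gt;
  16 cpu cores, no GPU, for 10 hours: 16 x 10 x $0.015 (CPU unit-hours)    = $2.40&lt;br /&gt;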
&lt;br /&gt;
=== Compute on demand (CODP) ===&lt;br /&gt;
Users from colleges that do not participate in MAP (A,B,C) are charged &#039;&#039;&#039;$0.018 per CPU hour and $0.11 per GPU hour.&#039;&#039;&#039; There is no free time associated with CODP.  &lt;br /&gt;
&lt;br /&gt;
=== Lease a public node(s) ===&lt;br /&gt;
MAP users are charged between $172.80 and $950.40 per month (see below), depending on the type of node. Non-MAP users are charged between $249.82 and $1,399.68 per month, depending on the type of node.&lt;br /&gt;
&lt;br /&gt;
=== Condo resources ===&lt;br /&gt;
Condo users only pay for infrastructure support.  The annual fee depends on the type of node and ranges from $1,540 to $4,520 per year. Please see below for details. &lt;br /&gt;
&lt;br /&gt;
=== Storage ===&lt;br /&gt;
Storage costs are $60 per TB per year, backup costs are $45 per TB per year, and archive costs are $35 per TB per year. The first 50 GB of scratch storage are free. Prices are calculated at the end of each month.&lt;br /&gt;
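&lt;br /&gt;
As an illustrative example only: keeping 2 TB of data stored and backed up for one year would cost 2 x ($60 + $45) = $210 at the rates above.&lt;br /&gt;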
&lt;br /&gt;
# Additionally, note that &#039;&#039;&#039;file transfers from/to HPCC are free&#039;&#039;&#039;, but it is important to consider the CUNY network speed, which is significantly slower than modern standards. This will impact the time required to download large data sets. For large data sets HPCC uses, and recommends using, secure parallel transfer via Globus.&lt;br /&gt;
# All services associated with data and storage provided by HPCC are free for CUNY users.&lt;br /&gt;
&lt;br /&gt;
=== HPCC access plans  ===&lt;br /&gt;
a.     &#039;&#039;&#039;Minimum access (MAP):&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Minimum access is designed to provide wide support for research activities in any college, to promote collaboration between colleges, to help establish new research projects, and/or to serve as a testbed for new studies. MAP accounts operate under a strict fair share policy, so the actual waiting time for a job in a queue depends on the resources used by that account in previous cycles. In addition, all jobs have strict time limitations; therefore long jobs must use checkpoints.&lt;br /&gt;
&lt;br /&gt;
The MAP has 3 tiers:&lt;br /&gt;
&lt;br /&gt;
·     A: The basic tier fee is $5,000 per year. It is designed to provide support for users from colleges with a low level of research activity. The fee covers infrastructure expenses associated with 1-2 users from these colleges. &lt;br /&gt;
&lt;br /&gt;
·     B: The medium tier fee is $15,000 per year. The fee covers infrastructure expenses for up to 12 users from these colleges. In addition, every account under the medium tier gets 11520 free CPU hours and 1440 free GPU hours upon opening. &lt;br /&gt;
&lt;br /&gt;
·     C: The advanced tier fee is $25,000 per year. The fee covers infrastructure expenses for all users from these colleges. In addition, every new account from this tier gets 11520 free CPU hours and 1440 free GPU hours upon opening.  &lt;br /&gt;
&lt;br /&gt;
The MAP users get charged per CPU/GPU hour at low rate of &#039;&#039;&#039;&amp;lt;u&amp;gt;$0.015 per cpu hour and $0.09 per GPU hour&amp;lt;/u&amp;gt;.&#039;&#039;&#039;   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for MAP users&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.015/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.24/hour&lt;br /&gt;
|-&lt;br /&gt;
| 4 cores + 1 GPU &lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.15/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.33/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2  &lt;br /&gt;
|$0.42/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2  &lt;br /&gt;
|$0.66/hour&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$1.32/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
b.     &#039;&#039;&#039;Computing on demand (CODP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The Computing on Demand plan (CODP) is open to all users from CUNY colleges that do not participate in the MAP plan but want to use HPCC resources. CODP accounts operate under a strict fair share policy, so the actual waiting time for a job in a queue depends on the resources previously used. In addition, all jobs have time limitations, so long jobs must use checkpoints. Users in CODP are charged for the time (CPU and GPU) per hour. The current rates are &#039;&#039;&#039;$0.018 per cpu hour and $0.11 per GPU hour.&#039;&#039;&#039; Unlike MAP, new CODP accounts do not come with free time. Invoices are generated and sent to users (PI only) at the end of each month. The examples in the following table explain the fee structure:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for CODP plan&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.018/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.288/hour&lt;br /&gt;
|-&lt;br /&gt;
|4 cores + 1 GPU&lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.293/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.334/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 1 GPU&lt;br /&gt;
|32&lt;br /&gt;
|1  &lt;br /&gt;
|$0.666/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU &lt;br /&gt;
|32&lt;br /&gt;
|2&lt;br /&gt;
|$0.756/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
c.  &#039;&#039;&#039;Leasing public node(s) (LNP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The leasing node plan allows users to lease node(s) for the duration of a project. The minimum lease time is 30 days (one month), but leases of any length are possible. A discount of 10% is given to users whose lease is longer than 90 days. Discounts cannot be combined. Unlike MAP and CODP, LNP users do not compete for resources and have full access to the leased resources 24/7 (a worked example of the long-term discount is shown below).   &lt;br /&gt;
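&lt;br /&gt;
For example (illustrative only, using the MAP lease table below): a 16-core node leased by a MAP user for 120 days qualifies for the 10% long-term discount, i.e. $172.80 x 0.9 = $155.52 per 30-day period instead of $172.80.&lt;br /&gt;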
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees to lease node(s) for MAP users&lt;br /&gt;
|Job (MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/30 days&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$172.80&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$264.96&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2&lt;br /&gt;
|$302.40&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$475.20&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$760.0&lt;br /&gt;
|-&lt;br /&gt;
|64 cores + 8 GPU&lt;br /&gt;
|64&lt;br /&gt;
|8&lt;br /&gt;
|$950.40&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees to lease node(s) for &amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; non &amp;lt;/span&amp;gt;-MAP users&lt;br /&gt;
|Job (non-MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/month&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$249.82&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$497.64&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2 &lt;br /&gt;
|$443.23&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$886.64&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40 &lt;br /&gt;
|8&lt;br /&gt;
|$1399.68&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
d.     &#039;&#039;&#039;Condo Ownership (COP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Condo describes a model in which a user (or users) owns a node/server managed by HPCC. Only full-time faculty can own a condo node. Condo nodes are fully integrated into the HPCC infrastructure. The owners pay only HPCC’s infrastructure support operational fee, which covers only a proportional part of the licenses and materials needed for day-to-day operations. The fees are reviewed twice a year and are currently $0.003 per CPU hour and $0.02 per GPU hour. Condo owners can “borrow” (upon agreement), free of charge, any node(s) from the condo stack, and can also lease (for a higher fee – see below) their own nodes to non-condo users. The minimum lease time is 30 days. The fees collected from non-condo users offset the payments of the owner.&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Condo owners costs per year&lt;br /&gt;
|Type of condo node&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/year&lt;br /&gt;
|-&lt;br /&gt;
|Large hybrid SXM&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|$4518.92 &lt;br /&gt;
|-&lt;br /&gt;
|Small hybrid&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|$1540.54&lt;br /&gt;
|-&lt;br /&gt;
|Medium compute&lt;br /&gt;
|96&lt;br /&gt;
|0&lt;br /&gt;
|$2464.86&lt;br /&gt;
|-&lt;br /&gt;
|Large compute&lt;br /&gt;
|128&lt;br /&gt;
|0&lt;br /&gt;
|$3286.49&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Condo owners can lease their node(s) to other non-condo users. The renting period is unlimited, with a minimum length of 30 days. The table below shows the payments that non-condo users make to the condo owners. These fees are accumulated in the owner’s account(s) and offset the owner’s obligations. A discount of 10% is applied to leases longer than 90 days.&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Type of nodes and monthly lease fees for condo nodes&lt;br /&gt;
!Type of node&lt;br /&gt;
!Renters cost/month&lt;br /&gt;
!Long term (90+ days) rent cost/month&lt;br /&gt;
!CPU/node&lt;br /&gt;
!CPU type&lt;br /&gt;
!GPU/node &lt;br /&gt;
!GPU type&lt;br /&gt;
!GPU interface&lt;br /&gt;
|-&lt;br /&gt;
|Large Hybrid&lt;br /&gt;
|$602.52&lt;br /&gt;
|$564.86&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.2 GHz&lt;br /&gt;
|8&lt;br /&gt;
|A100/80&lt;br /&gt;
|SXM&lt;br /&gt;
|-&lt;br /&gt;
|Small Hybrid&lt;br /&gt;
|$205.41&lt;br /&gt;
|$192.57&lt;br /&gt;
|48&lt;br /&gt;
|EPYC, 2.8 GHz&lt;br /&gt;
|2&lt;br /&gt;
|A40, A30, L40&lt;br /&gt;
|PCIe v4&lt;br /&gt;
|-&lt;br /&gt;
|Medium Non GPU&lt;br /&gt;
|$328.65&lt;br /&gt;
|$308.11&lt;br /&gt;
|96&lt;br /&gt;
|EPYC, 4.11 GHz&lt;br /&gt;
|0&lt;br /&gt;
|None&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|Large Non GPU&lt;br /&gt;
|$438.20&lt;br /&gt;
|$410.81&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.0 GHz&lt;br /&gt;
|0&lt;br /&gt;
|None &lt;br /&gt;
|NA&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Free time ===&lt;br /&gt;
Any new project &#039;&#039;&#039;from CUNY colleges/centers that participate in MAP-B or MAP-C&#039;&#039;&#039; is entitled to &#039;&#039;&#039;11520 free CPU hours and 1440 free GPU hours.&#039;&#039;&#039; Users under &#039;&#039;&#039;MAP-A&#039;&#039;&#039; are not entitled to free time. The free compute hours are intended to help establish a project and are therefore shared by all members of the project; they can be used either by the PI or by any number of the project&#039;s members. It is important to note that &amp;lt;u&amp;gt;free time is per project, not per user account, so any project can receive free time only once. External collaborators outside CUNY are not normally eligible for free time.&amp;lt;/u&amp;gt; Hours beyond the free allocation are charged at the MAP plan rates. &#039;&#039;&#039;&amp;lt;u&amp;gt;Please contact the CUNY-HPCC director for further details.&amp;lt;/u&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Support for research grants ==&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;u&amp;gt;All proposals dated Jan 1st, 2026 (&amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; 01/01/26 &amp;lt;/span&amp;gt;) and later&amp;lt;/u&amp;gt;&#039;&#039;&#039; that require computational resources &#039;&#039;&#039;&amp;lt;u&amp;gt;must include a budget for cost recovery fees at CUNY-HPCC.&amp;lt;/u&amp;gt;&#039;&#039;&#039;  For a project the PI can choose among: &lt;br /&gt;
&lt;br /&gt;
* Lease node(s). This is a useful option for well-defined projects and for those with a heavy computational component that requires 100% availability of the computational resource. &lt;br /&gt;
* Use &amp;quot;on-demand&amp;quot; resources. This is a flexible option suited to experimental projects or to exploring new areas of study. The drawback is that resources are shared among all users under the fair share policy, so immediate access to a resource cannot be guaranteed. &lt;br /&gt;
* Participate in the CONDO tier. This is the most beneficial option in terms of availability of resources and level of support. It best fits the focused research of a group or groups (e.g. materials science). &lt;br /&gt;
&lt;br /&gt;
In all cases the PI can use the appropriate rates listed above to establish a correct budget for the proposal. The PI should &#039;&#039;&#039;&amp;lt;u&amp;gt;contact the Director of CUNY-HPCC, Dr. Alexander Tzanov&amp;lt;/u&amp;gt;&#039;&#039;&#039; (alexander.tzanov@csi.cuny.edu) to discuss the project&#039;s computational requirements, including optimal and most economical computational workflows, suitable hardware, shared or owned resources, CUNY-HPCC support options, and any other matter concerning a correct and optimal computational budget for the proposal.&lt;br /&gt;
&lt;br /&gt;
== Partitions and jobs ==&lt;br /&gt;
The only way to submit job(s) to HPCC servers is through the SLURM batch system. Any job, regardless of its type (interactive, batch, serial, parallel, etc.), must be submitted via SLURM. The latter allocates the requested resources on the proper server and starts the job(s) according to a predefined strict fair share policy. Computational resources (cpu-cores, memory, GPU) are organized in &#039;&#039;&#039;partitions&#039;&#039;&#039;. Users are granted permission to use one or another partition and the corresponding QOS key. The table below shows the partitions and their limitations (in progress).&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Partition&lt;br /&gt;
!Max cores/job&lt;br /&gt;
!Max jobs/user&lt;br /&gt;
!Total cores/group&lt;br /&gt;
!Time limits&lt;br /&gt;
!Tier&lt;br /&gt;
!&lt;br /&gt;
!GPU types&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|partnsf&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|K20m, V100/16, A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partchem&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A100/80, A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partcfd&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partsym&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partasrc&lt;br /&gt;
|48&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabD&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|V100/16,A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabN&lt;br /&gt;
|384&lt;br /&gt;
|50&lt;br /&gt;
|384&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|None&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partphys&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|L40&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;partnsf&#039;&#039;&#039; is the main partition with assigned resources across all sub-servers. Users may submit sequential, thread parallel or distributed parallel jobs with or without GPU.&lt;br /&gt;
* &#039;&#039;&#039;partchem&#039;&#039;&#039;  is CONDO partition.  &lt;br /&gt;
* &#039;&#039;&#039;partphys&#039;&#039;&#039;   is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partsym&#039;&#039;&#039;    is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partasrc&#039;&#039;&#039;   is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partmatlabD&#039;&#039;&#039; partition allows running MATLAB&#039;s Distributed Parallel Server across the main cluster. &lt;br /&gt;
* &#039;&#039;&#039;partmatlabN&#039;&#039;&#039; partition provides access to the large MATLAB node with 384 cores and 11 TB of shared memory. It is useful for running parallel MATLAB jobs with the Parallel Toolbox.&lt;br /&gt;
* &#039;&#039;&#039;partdev&#039;&#039;&#039; is dedicated to development. All HPCC users have access to this partition, with assigned resources of one computational node with 16 cores, 64 GB of memory and 2 GPUs (K20m). This partition has a time limit of 4 hours.&lt;br /&gt;
&lt;br /&gt;
== Hours of Operation ==&lt;br /&gt;
In order to maximize the use of resources, HPCC applies a “rolling” maintenance scheme across all systems. When downtime is needed, HPCC will notify all users a week or more in advance (unless an emergency situation occurs). The fourth Tuesday morning of each month, from 8:00 AM to 12 PM, is normally reserved (but not always used) for scheduled maintenance. Please plan accordingly. Unplanned maintenance to remedy system-related problems may be scheduled as needed outside the above-mentioned window. Reasonable attempts will be made to inform users running on the affected systems when these needs arise. Note that users are strongly encouraged to use checkpoints in their jobs.&lt;br /&gt;
&lt;br /&gt;
== User Support ==&lt;br /&gt;
Users are strongly encouraged to read this Wiki carefully before submitting ticket(s) for help. In particular, the sections on compiling and running parallel programs, and the section on the SLURM batch queueing system will give you the essential knowledge needed to use the CUNY HPCC systems.  We have strived to maintain the most uniform user applications environment possible across the Center&#039;s systems to ease the transfer of applications&lt;br /&gt;
and run scripts among them.  &lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY&lt;br /&gt;
community in parallel programming techniques, HPC computing architecture, and the essentials of using our&lt;br /&gt;
systems. Please follow our mailings on the subject and feel free to inquire about such courses.  We regularly schedule training visits and classes at the various CUNY campuses.  Please let us know if such a training visit is of interest.  In the past, topics have included an overview of parallel programming, GPU programming and architecture, using the evolutionary biology software at the HPC Center, the SLURM queueing system at the CUNY HPC Center, mixed GPU-MPI and OpenMP programming, etc.  Staff have also presented guest lectures in formal classes throughout the CUNY campuses.    &lt;br /&gt;
&lt;br /&gt;
If you have problems accessing your account and cannot login to the ticketing service, please send an email to:&lt;br /&gt;
&lt;br /&gt;
  [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu] &lt;br /&gt;
&lt;br /&gt;
== Warnings and modes of operation ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. hpchelp@csi.cuny.edu is for questions and account-help communication &#039;&#039;&#039;only&#039;&#039;&#039; and does not accept tickets unless the ticketing system is not operational. For tickets please use the ticketing system mentioned above. This ensures that the staff member with the most appropriate skill set and job-related responsibility will respond to your questions. During the business week you should expect a response within 48 hours, quite often even a same-day response. During the weekend you may not get any response. &lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;&#039;E-mails to hpchelp@csi.cuny.edu must have a valid CUNY e-mail as the reply address.&#039;&#039;&#039; Messages originating from public mail providers (Gmail, Hotmail, etc.) are filtered out.&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;&#039;Do not send questions to individual CUNY HPC Center staff members directly.&#039;&#039;&#039;  These will be returned to the sender with a polite request to submit a ticket or email the Helpline.  This applies to replies to initial questions as well.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff members are focused on providing high quality support to its user community, but compared&lt;br /&gt;
to other HPC Centers of similar size &#039;&#039;&#039;our staff is extremely lean&#039;&#039;&#039;.  Please make full use of the tools that we have provided (especially&lt;br /&gt;
the Wiki), and feel free to offer suggestions for improved service.  We hope and expect your experience in using&lt;br /&gt;
our systems will be predictably good and productive.&lt;br /&gt;
&lt;br /&gt;
== User Manual ==&lt;br /&gt;
&lt;br /&gt;
The old version of the user manual provides PBS, not SLURM, batch scripts as examples. CUNY-HPCC currently uses the SLURM scheduler, so users must rely only on the updated brief SLURM manual distributed with new accounts, or ask CUNY-HPCC for a copy of it.&lt;/div&gt;</summary>
		<author><name>Alex</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Data_Storage_and_Management_System&amp;diff=975</id>
		<title>Data Storage and Management System</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Data_Storage_and_Management_System&amp;diff=975"/>
		<updated>2026-03-13T00:46:13Z</updated>

		<summary type="html">&lt;p&gt;Alex: /* /scratch */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This file system resides on &#039;&#039;&#039;Hybrid Parallel File System (HPFS).&#039;&#039;&#039;  &lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
==&amp;quot;Home&amp;quot; directories are on &amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u&amp;lt;/font&amp;gt;==&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u&amp;lt;/font&amp;gt;&#039;&#039;&#039; is a partition in a parallel high performance Linux file system based on the HPE parallel file system (HPFS). It holds the home directories of &#039;&#039;&#039;all&#039;&#039;&#039; individual users. When users request and are granted an allocation of HPC resources, they are assigned a &#039;&#039;&#039;&amp;lt;userid&amp;gt;&#039;&#039;&#039; and a 100 GB allocation of disk space for a home directory on &#039;&#039;&#039;/global/u/&amp;lt;userid&amp;gt;&#039;&#039;&#039;. These &#039;&#039;&#039;home&#039;&#039;&#039; directories are on the global file system, which is mounted only on login nodes. There is no local storage on compute nodes. That means that 1. data can be accessed only from the HPFS on the login node(s), and 2. no local storage on nodes is available. Codes that write intermediate results to disk are typically slow and should be run only on condo nodes that have local disk storage. All home directories are backed up on a weekly basis.&lt;br /&gt;
&lt;br /&gt;
==&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch&amp;lt;/font&amp;gt;==&lt;br /&gt;
/scratch is a fast file system from which all jobs start. There is no quota on scratch, but &#039;&#039;&#039;scratch&#039;&#039;&#039; files are temporary and are &#039;&#039;&#039;&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;not backed up&amp;lt;/font&amp;gt;&#039;&#039;&#039;. A single user cannot, however, take the whole space with his/her data. This means users can run data sets that exceed their home space, but they cannot use /scratch for storage. It is important to understand that &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch&amp;lt;/font&amp;gt;&#039;&#039;&#039; &amp;lt;u&amp;gt;must be used only for submitting jobs.&amp;lt;/u&amp;gt; Output from jobs may ONLY temporarily be stored on &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch&amp;lt;/font&amp;gt;&#039;&#039;&#039; (up to 10 days). Consequently, in order to submit a job for execution, a user must &#039;&#039;&#039;stage&#039;&#039;&#039; or &#039;&#039;&#039;mount&#039;&#039;&#039; the files required by the job to &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch&amp;lt;/font&amp;gt;&#039;&#039;&#039; from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u&amp;lt;/font&amp;gt;&#039;&#039;&#039; using UNIX commands and/or from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;SR1&amp;lt;/font&amp;gt;&#039;&#039;&#039; using &#039;&#039;&#039;iRODS&#039;&#039;&#039; commands. The latter mounts data directly from the project space. Note that SR1 is a slower file system, so large files are better staged from the home directories. Files in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch&amp;lt;/font&amp;gt;&#039;&#039;&#039; are &#039;&#039;&#039;automatically relocated to the local archive&#039;&#039;&#039; when their inactive residence on scratch exceeds 90 days. The local archive has limited capacity and serves as a data buffer, so &#039;&#039;&#039;strict policies for cleaning up the temporary archive are in place&#039;&#039;&#039;. Upon relocation the user will get a warning (via e-mail) and must either move the files to his/her home directory or to SR1. Note that files left in the temporary archive will be purged after 30 days. Users must not &#039;&#039;&#039;store valuable data or compiled codes on scratch&#039;&#039;&#039;, since these are static types of files. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==“Project” directories==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;“Project”&#039;&#039;&#039; directories are managed through &#039;&#039;&#039;iRODS&#039;&#039;&#039; and accessible through iRODS commands, not standard UNIX commands.   In iRODS terminology, a “collection” is the equivalent of “directory”. &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;“Project”&#039;&#039;&#039; is an activity that usually involves multiple users and/or many individual data files.  A &#039;&#039;&#039;“Project”&#039;&#039;&#039; is normally led by a “Principal Investigator” (PI), who is a faculty member or a research scientist.   The PI is the individual responsible to the University or a granting agency for the “Project”.  The PI has overall responsibility for “Project” data and “Project” data management. To establish a Project, the PI completes and submits the online “Project Application Form”. Project data are stored in project space on the main file system. Valuable parts of the projects must be curated by PIs and stored in the HPCC long-term archive.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Typical Workflow==&lt;br /&gt;
Typical workflows for Penzias, Appel, and Karle are described below:&lt;br /&gt;
&lt;br /&gt;
1. Copying files from a user’s home directory or from &#039;&#039;&#039;SR1&#039;&#039;&#039; to &#039;&#039;&#039;SCRATCH&#039;&#039;&#039;.&amp;lt;br /&amp;gt;&lt;br /&gt;
If working with &#039;&#039;&#039;HOME&#039;&#039;&#039;:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&lt;br /&gt;
   cd /scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&lt;br /&gt;
   mkdir &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;mySLURM_Job&amp;lt;/font color&amp;gt; &amp;amp;&amp;amp; cd &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;mySLURM_Job&amp;lt;/font color&amp;gt;&lt;br /&gt;
   cp /global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font&amp;gt;/a.out ./&lt;br /&gt;
   cp /global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font&amp;gt;/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;mydatafile &amp;lt;/font&amp;gt;./&lt;br /&gt;
&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If working with &#039;&#039;&#039;SR1 (storage repository)&#039;&#039;&#039;:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&lt;br /&gt;
   cd /scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&lt;br /&gt;
   mkdir &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;mySLURM_Job&amp;lt;/font color&amp;gt; &amp;amp;&amp;amp; cd &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;mySLURM_Job&amp;lt;/font color&amp;gt;&lt;br /&gt;
   iget &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font color&amp;gt;/a.out &lt;br /&gt;
   iget &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font color&amp;gt;/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;mydatafile&amp;lt;/font color&amp;gt;&lt;br /&gt;
&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. Prepare a SLURM job script. A typical SLURM script is similar to the following:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&lt;br /&gt;
   #!/bin/bash &lt;br /&gt;
   #SBATCH --partition production &lt;br /&gt;
   #SBATCH -J test &lt;br /&gt;
   #SBATCH --nodes 1 &lt;br /&gt;
   #SBATCH --ntasks 8 &lt;br /&gt;
   #SBATCH --mem 4000&lt;br /&gt;
   echo &amp;quot;Starting…&amp;quot; &lt;br /&gt;
&lt;br /&gt;
   cd $SLURM_SUBMIT_DIR&lt;br /&gt;
   mpirun -np $SLURM_NTASKS ./a.out ./&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;mydatafile&amp;lt;/font color&amp;gt; &amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myoutputs&amp;lt;/font color&amp;gt;&lt;br /&gt;
   echo &amp;quot;Done…&amp;quot;&lt;br /&gt;
&amp;lt;/font&amp;gt;&lt;br /&gt;
Your SLURM script may differ depending on your needs. See the Submitting Jobs section for reference.&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. Run the job &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&lt;br /&gt;
   sbatch ./&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;mySLURM_script&amp;lt;/font color&amp;gt;&lt;br /&gt;
&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
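After submission, the job can be monitored with the standard SLURM client commands (an illustrative sketch; use the job ID printed by &#039;&#039;&#039;sbatch&#039;&#039;&#039;):&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
squeue -u &amp;lt;userid&amp;gt;        # list your pending and running jobs&lt;br /&gt;
scontrol show job &amp;lt;jobid&amp;gt;  # detailed state of a single job&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;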
4. Once the job is finished, clean up &#039;&#039;&#039;SCRATCH&#039;&#039;&#039; and store outputs in your user home directory or in &#039;&#039;&#039;SR1&#039;&#039;&#039;.&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If working with &#039;&#039;&#039;HOME&#039;&#039;&#039;:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&lt;br /&gt;
   mv ./myoutputs /global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font color&amp;gt;/.&lt;br /&gt;
   cd ../&lt;br /&gt;
   rm -rf &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;mySLURM_Job&amp;lt;/font color&amp;gt; &amp;lt;/font&amp;gt;&lt;br /&gt;
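&lt;br /&gt;
If working with &#039;&#039;&#039;SR1&#039;&#039;&#039;, the outputs can instead be placed back into the project collection with iRODS commands (a minimal sketch, assuming the same &#039;&#039;&#039;myProject&#039;&#039;&#039; collection used in step 1):&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&lt;br /&gt;
   iput ./&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myoutputs&amp;lt;/font color&amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font color&amp;gt;&lt;br /&gt;
   cd ../&lt;br /&gt;
   rm -rf &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;mySLURM_Job&amp;lt;/font color&amp;gt;&lt;br /&gt;
&amp;lt;/font&amp;gt;&lt;br /&gt;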
== iRODS (The iRODS Section is in REVIEW and may not be CURRENT) ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;iRODS&#039;&#039;&#039; is the integrated Rule-Oriented Data-management System, a&lt;br /&gt;
community-driven, open source, data grid software solution. &#039;&#039;&#039;iRODS&#039;&#039;&#039; is&lt;br /&gt;
designed to abstract data services from data storage hardware and&lt;br /&gt;
provide users with hardware-agnostic way to manipulate data. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;iRODS&#039;&#039;&#039; is a primary tool that is used by the CUNY HPCC users to&lt;br /&gt;
seamlessly access 1PB storage resource (further referenced as &#039;&#039;&#039;SR1&#039;&#039;&#039;&lt;br /&gt;
here) from any of the HPCC&#039;s computational systems.&lt;br /&gt;
&lt;br /&gt;
Access to &#039;&#039;&#039;SR1&#039;&#039;&#039; is provided via so-called &#039;&#039;&#039;i-commands&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;iinit&lt;br /&gt;
ils&lt;br /&gt;
imv&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A comprehensive list of i-commands with detailed descriptions can be&lt;br /&gt;
obtained at [https://wiki.irods.org/index.php/icommands iRODS wiki].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To obtain quick help on any of the commands while being logged into&lt;br /&gt;
any of the HPCC&#039;s machines, type &amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&#039;&#039;&#039;i-command -h&#039;&#039;&#039;&amp;lt;/font&amp;gt;. For example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ils -h&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
  &lt;br /&gt;
Following is the list of some of the most relevant &#039;&#039;&#039;i-commands&#039;&#039;&#039;:&lt;br /&gt;
  &lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;iinit&amp;lt;/font&amp;gt;&#039;&#039;&#039; -- Initialize session and store your password in a scrambled form for automatic use by other icommands.&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;iput&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Store a file&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;iget&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Get a file&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;imkdir&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Like mkdir, make an iRODS collection (similar to a directory or Windows folder)&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;ichmod&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Like chmod, allow (or later restrict) access to your data objects by other users.&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;icp&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Like cp or rcp, copy an iRODS data object&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;irm&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Like rm, remove an iRODS data object&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;ils&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Like ls, list iRODS data objects (files) and collections (directories)&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;ipwd&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Like pwd, print the iRODS current working directory&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;icd&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Like cd, change the iRODS current working directory&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;ichksum&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Checksum one or more data-object or collection from iRODS space.&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;imv&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Moves/renames an irods data-object or collection.&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;irmtrash&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Remove one or more data-objects or collections from the iRODS trash bin.&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;imeta&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Add, remove, list, or query user-defined Attribute-Value-Unit triplets metadata&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;iquest&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Query (pose a question to) the ICAT, via a SQL-like interface&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Before using any of the i-commands, users need to identify themselves to the iRODS server by running the command&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# iinit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and providing their HPCC password. &lt;br /&gt;
&lt;br /&gt;
Typical workflow that involves operations on files stored in SR1&lt;br /&gt;
include storing/getting data to and from SR1, tagging data with &lt;br /&gt;
metadata, searching for data, sharing (setting permissions). &lt;br /&gt;
&lt;br /&gt;
==== Storing data to SR1 ====&lt;br /&gt;
 &lt;br /&gt;
1. Create &#039;&#039;&#039;iRODS&#039;&#039;&#039; directory (aka &#039;collection&#039;):&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&lt;br /&gt;
   # imkdir &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font color&amp;gt;&lt;br /&gt;
&amp;lt;/font&amp;gt;&lt;br /&gt;
2. Store all files &amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&#039;&#039;&#039;myfile*&#039;&#039;&#039;&amp;lt;/font&amp;gt; into this directory (collection):&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&lt;br /&gt;
   # iput &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myfile* myProject&amp;lt;/font color&amp;gt;/.&lt;br /&gt;
&amp;lt;/font&amp;gt;&lt;br /&gt;
3. Verify that files are stored:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&lt;br /&gt;
   # ils&lt;br /&gt;
   /cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;:&lt;br /&gt;
   C- /cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font color&amp;gt;&lt;br /&gt;
   # ils &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font color&amp;gt;&lt;br /&gt;
   /cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font color&amp;gt;:&lt;br /&gt;
      myfile1&lt;br /&gt;
      myfile2&lt;br /&gt;
      myfile3&lt;br /&gt;
&amp;lt;/font&amp;gt;&lt;br /&gt;
   &lt;br /&gt;
The symbol &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&#039;C-&#039;&amp;lt;/font&amp;gt;&#039;&#039;&#039; at the beginning of the &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&#039;ils&#039;&amp;lt;/font&amp;gt;&#039;&#039;&#039; output shows that the listed item is a collection.&lt;br /&gt;
&lt;br /&gt;
4. By combining &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&#039;ils&#039;, &#039;imkdir&#039;, &#039;iput&#039;, &#039;icp&#039;, &#039;ipwd&#039;, &#039;imv&#039;&#039;&#039;&#039;&amp;lt;/font&amp;gt; a user can create iRODS directories and store files in them, similarly to what is normally done with the UNIX commands &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&#039;ls&#039;, &#039;mkdir&#039;, &#039;cp&#039;, &#039;pwd&#039;, &#039;mv&#039;&#039;&#039;&#039;&amp;lt;/font&amp;gt;, etc.&lt;br /&gt;
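For example, a results sub-collection can be created and populated entirely with i-commands (an illustrative sketch using the collection from the previous steps):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
imkdir myProject/results       # create a sub-collection&lt;br /&gt;
icp myProject/myfile1 myProject/results/myfile1   # copy a data object into it&lt;br /&gt;
ils myProject/results          # list the new sub-collection&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;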
&lt;br /&gt;
==== Getting data from SR1 ====&lt;br /&gt;
&lt;br /&gt;
1. To copy a file from SR1 to the current working directory, run:&lt;br /&gt;
   # iget &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font color&amp;gt;/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myfile1&amp;lt;/font color&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. Now listing the current working directory should reveal &#039;&#039;&#039;myfile1&#039;&#039;&#039;:&lt;br /&gt;
   # ls&lt;br /&gt;
   &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myfile1&amp;lt;/font color&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. Instead of individual files, the whole directory (with sub-directories) can be copied with the &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;-r&amp;lt;/font&amp;gt;&#039;&#039;&#039; flag (which stands for &#039;recursive&#039;):&lt;br /&gt;
   # iget -r &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font color&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
NOTE: wildcards are not supported, therefore the command below &amp;lt;u&amp;gt;will not work&amp;lt;/u&amp;gt;&lt;br /&gt;
   # iget &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font color&amp;gt;/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myfile&amp;lt;/font color&amp;gt;*&lt;br /&gt;
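If only a few named files are needed, a plain shell loop on the login node can stand in for the missing wildcard support (an illustrative sketch):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# fetch a known set of files one by one&lt;br /&gt;
for f in myfile1 myfile2 myfile3; do iget myProject/$f; done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Otherwise, use &#039;&#039;&#039;iget -r&#039;&#039;&#039; on the whole collection as shown above.&lt;br /&gt;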
&lt;br /&gt;
=== Tagging data with metadata ===&lt;br /&gt;
   &lt;br /&gt;
iRODS provides users with an extremely powerful mechanism for managing data with metadata. While working with large datasets it is easy to forget what is stored in which file. Metadata tags help organize data in an easy and reliable manner.&lt;br /&gt;
&lt;br /&gt;
Let&#039;s tag files from previous example with some metadata:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# imeta add -d myProject/myfile1 zvalue 15 meters&lt;br /&gt;
AVU added to 1 data-objects&lt;br /&gt;
# imeta add -d myProject/myfile1 colorLabel RED&lt;br /&gt;
AVU added to 1 data-objects&lt;br /&gt;
# imeta add -d myProject/myfile1 comment &amp;quot;This is file number 1&amp;quot;&lt;br /&gt;
AVU added to 1 data-objects&lt;br /&gt;
# imeta add -d myProject/myfile2 zvalue 10 meters&lt;br /&gt;
AVU added to 1 data-objects&lt;br /&gt;
# imeta add -d myProject/myfile2 colorLabel RED&lt;br /&gt;
AVU added to 1 data-objects&lt;br /&gt;
# imeta add -d myProject/myfile2 comment &amp;quot;This is file number 2&amp;quot;&lt;br /&gt;
AVU added to 1 data-objects&lt;br /&gt;
# imeta add -d myProject/myfile3 zvalue 15 meters&lt;br /&gt;
AVU added to 1 data-objects&lt;br /&gt;
# imeta add -d myProject/myfile3 colorLabel BLUE&lt;br /&gt;
AVU added to 1 data-objects&lt;br /&gt;
# imeta add -d myProject/myfile3 comment &amp;quot;This is file number 3&amp;quot;&lt;br /&gt;
AVU added to 1 data-objects&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Here we&#039;ve tagged myfile1 with 3 metadata labels:&lt;br /&gt;
&lt;br /&gt;
- zvalue 15 meters&lt;br /&gt;
&lt;br /&gt;
- colorLabel RED&lt;br /&gt;
&lt;br /&gt;
- comment &amp;quot;This is file number 1&amp;quot;&lt;br /&gt;
  &lt;br /&gt;
Similar tags were added to &#039;myfile2&#039; and &#039;myfile3&#039;&lt;br /&gt;
&lt;br /&gt;
Metadata come in the form of an AVU -- Attribute|Value|Unit. As seen from the above examples, the Unit is optional. &lt;br /&gt;
&lt;br /&gt;
Let&#039;s list all metadata assigned to file &#039;myfile1&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# imeta ls -d myProject/myfile1&lt;br /&gt;
AVUs defined for dataObj myProject/myfile1:&lt;br /&gt;
attribute: zvalue&lt;br /&gt;
value: 15&lt;br /&gt;
units: meters&lt;br /&gt;
----&lt;br /&gt;
attribute: colorLabel&lt;br /&gt;
value: RED&lt;br /&gt;
units:&lt;br /&gt;
----&lt;br /&gt;
attribute: comment&lt;br /&gt;
value: This is file number 1&lt;br /&gt;
units:&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
To remove an AVU assigned to a file run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# imeta rm -d myProject/myfile1 zvalue 15 meters&lt;br /&gt;
# imeta ls -d myProject/myfile1&lt;br /&gt;
AVUs defined for dataObj myProject/myfile1:&lt;br /&gt;
attribute: colorLabel&lt;br /&gt;
value: RED&lt;br /&gt;
units:&lt;br /&gt;
----&lt;br /&gt;
attribute: comment&lt;br /&gt;
value: This is file number 1&lt;br /&gt;
units:&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
# imeta add -d myProject/myfile1 zvalue 15 meters&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Metadata may be assigned to directories as well:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# imeta add -C myProject simulationsPool 1&lt;br /&gt;
# imeta ls -C myProject&lt;br /&gt;
AVUs defined for collection myProject:&lt;br /&gt;
attribute: simulationsPool&lt;br /&gt;
value: 1&lt;br /&gt;
units:&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note the &#039;-C&#039; flag, used for collections, instead of &#039;-d&#039;, which is used for data objects.&lt;br /&gt;
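A collection-level AVU is removed the same way it was added, again with &#039;-C&#039; (an illustrative sketch):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# remove the attribute added above&lt;br /&gt;
imeta rm -C myProject simulationsPool 1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;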
&lt;br /&gt;
&lt;br /&gt;
=== Searching for data ===&lt;br /&gt;
&lt;br /&gt;
The power of metadata becomes obvious when data needs to be found in large collections. Here is an illustration of how easily this task is done with iRODS via imeta queries:&lt;br /&gt;
&lt;br /&gt;
 # imeta qu -d zvalue = 15&lt;br /&gt;
 collection: /cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;/myProject&lt;br /&gt;
 dataObj: myfile1&lt;br /&gt;
 ----&lt;br /&gt;
 collection: /cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;/myProject&lt;br /&gt;
 dataObj: myfile3&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We see both files that were tagged with the label &#039;zvalue 15 meters&#039;.&lt;br /&gt;
Here is a different query:&lt;br /&gt;
 &lt;br /&gt;
 # imeta qu -d colorLabel = RED&lt;br /&gt;
 collection: /cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;/myProject&lt;br /&gt;
 dataObj: myfile1&lt;br /&gt;
 ----&lt;br /&gt;
 collection: /cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;/myProject&lt;br /&gt;
 dataObj: myfile2&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another powerful mechanism to query data is provided by &#039;iquest&#039;. &lt;br /&gt;
The following examples show some of the &#039;iquest&#039; capabilities:&lt;br /&gt;
 &lt;br /&gt;
 iquest &amp;quot;SELECT DATA_NAME, DATA_SIZE WHERE DATA_RESC_NAME like &#039;cuny%&#039;&amp;quot;&lt;br /&gt;
 iquest &amp;quot;For %-12.12s size is %s&amp;quot; &amp;quot;SELECT DATA_NAME ,  DATA_SIZE  WHERE COLL_NAME = &#039;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&#039;&amp;quot;&lt;br /&gt;
 iquest &amp;quot;SELECT COLL_NAME WHERE COLL_NAME like &#039;/cunyZone/home/%&#039; AND USER_NAME like &#039;&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&#039;&amp;quot;&lt;br /&gt;
 iquest &amp;quot;User %-6.6s has %-5.5s access to file %s&amp;quot; &amp;quot;SELECT USER_NAME,  DATA_ACCESS_NAME, DATA_NAME WHERE COLL_NAME = &#039;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&#039;&amp;quot;&lt;br /&gt;
 iquest &amp;quot; %-5.5s access has been given to user %-6.6s for the file %s&amp;quot; &amp;quot;SELECT DATA_ACCESS_NAME, USER_NAME, DATA_NAME WHERE COLL_NAME = &#039;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&#039;&amp;quot;&lt;br /&gt;
 iquest no-distinct &amp;quot;select META_DATA_ATTR_NAME&amp;quot;&lt;br /&gt;
 iquest  &amp;quot;select COLL_NAME, DATA_NAME WHERE DATA_NAME like &#039;myfile%&#039;&amp;quot;&lt;br /&gt;
 iquest &amp;quot;User %-9.9s uses %14.14s bytes in %8.8s files in &#039;%s&#039;&amp;quot; &amp;quot;SELECT USER_NAME, sum(DATA_SIZE),count(DATA_NAME),RESC_NAME&amp;quot;&lt;br /&gt;
 iquest &amp;quot;select sum(DATA_SIZE) where COLL_NAME = &#039;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&#039;&amp;quot;&lt;br /&gt;
 iquest &amp;quot;select sum(DATA_SIZE) where COLL_NAME like &#039;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;%&#039;&amp;quot;&lt;br /&gt;
 iquest &amp;quot;select sum(DATA_SIZE), RESC_NAME where COLL_NAME like &#039;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;%&#039;&amp;quot;&lt;br /&gt;
 iquest &amp;quot;select order_desc(DATA_ID) where COLL_NAME like &#039;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;%&#039;&amp;quot;&lt;br /&gt;
 iquest &amp;quot;select count(DATA_ID) where COLL_NAME like &#039;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;%&#039;&amp;quot;&lt;br /&gt;
 iquest &amp;quot;select RESC_NAME where RESC_CLASS_NAME IN (&#039;bundle&#039;,&#039;archive&#039;)&amp;quot;&lt;br /&gt;
 iquest &amp;quot;select DATA_NAME,DATA_SIZE where DATA_SIZE BETWEEN &#039;100000&#039; &#039;100200&#039;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
=== Sharing data ===&lt;br /&gt;
&lt;br /&gt;
Access to the data can be controlled via the &#039;ichmod&#039; command. Its&lt;br /&gt;
behavior is similar to the UNIX &#039;chmod&#039; command. For example, if there is a&lt;br /&gt;
need to provide user &amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&#039;&#039;&#039;&amp;lt;userid1&amp;gt;&#039;&#039;&#039;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt; with read access to the file&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;myProject/myfile1&amp;lt;/font&amp;gt;&#039;&#039;&#039;, execute the following command:&lt;br /&gt;
   ichmod read &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid1&amp;gt;&amp;lt;/font color&amp;gt; myProject/myfile1&lt;br /&gt;
&lt;br /&gt;
To see who has access to a file/directory use:&lt;br /&gt;
   # ils -A myProject/myfile1&lt;br /&gt;
   /cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;/myProject/myfile1&lt;br /&gt;
   ACL - &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid1&amp;gt;&amp;lt;/font color&amp;gt;&lt;br /&gt;
   #cunyZone:read object   &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;#cunyZone:own&lt;br /&gt;
&lt;br /&gt;
In the above example user &amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&#039;&#039;&#039;&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid1&amp;gt;&amp;lt;/font color&amp;gt;&#039;&#039;&#039;&amp;lt;/font&amp;gt; has read access to the file and&lt;br /&gt;
user &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; is an owner of the file. &lt;br /&gt;
&lt;br /&gt;
Possible levels of access to a data object are null/read/write/own.&lt;br /&gt;
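For example, the read access granted above can later be revoked by setting the level back to &#039;null&#039;, and a whole collection can be shared recursively with &#039;-r&#039; (an illustrative sketch):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# revoke the read access granted earlier&lt;br /&gt;
ichmod null &amp;lt;userid1&amp;gt; myProject/myfile1&lt;br /&gt;
# grant read access to an entire collection, recursively&lt;br /&gt;
ichmod -r read &amp;lt;userid1&amp;gt; myProject&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;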
==Backups (IN REVIEW)==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Backups.&#039;&#039;&#039; &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u&amp;lt;/font&amp;gt;&#039;&#039;&#039; user directories and Project files are backed up automatically to a remote tape silo system over a fiber optic network.  Backups are performed daily. &lt;br /&gt;
&lt;br /&gt;
If the user deletes a file from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u&amp;lt;/font&amp;gt;&#039;&#039;&#039;, it will remain on the tape silo system for 30 days, after which it will be deleted and cannot be recovered.   If a user, within the 30-day window, finds it necessary to recover a file, the user must expeditiously submit a request to [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu].&lt;br /&gt;
&lt;br /&gt;
Less frequently accessed files are automatically transferred to the HPC Center robotic tape system, freeing up space in the disk storage pool and making it available for more actively used files. The selection criteria for the migration are age and size of a file. If a file is not accessed for 90 days, it may be moved to a tape in the tape library – in fact to two tapes, for backup. This is fully transparent to the user. When a file is needed, the system will copy the file back to the appropriate disk directory. No user action is required.&lt;br /&gt;
&lt;br /&gt;
==Data retention and account expiration policy (IN REVIEW)==&lt;br /&gt;
&lt;br /&gt;
Project directories on SR1 are retained as long as the project is active.  The HPC Center will coordinate with the Principal Investigator of the project before deleting a project directory.  If the PI is no longer with CUNY, the HPC Center will coordinate with the PI’s departmental chair or Research Dean, whichever is appropriate.&lt;br /&gt;
&lt;br /&gt;
For user accounts, current user directories under /global/u are retained as long as the account is active.  If a user account is inactive for one year, the HPC Center will attempt to contact the user and request that the data be removed from the system.  If there is no response from the user within three months of the initial notice, or if the user cannot be reached, the user directory will be purged. &lt;br /&gt;
&lt;br /&gt;
==DSMS Technical Summary (IN REVIEW)==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!File Space&lt;br /&gt;
!Purpose&lt;br /&gt;
!Accessibility&lt;br /&gt;
!Quota&lt;br /&gt;
!Backups&lt;br /&gt;
!Purges&lt;br /&gt;
|-&lt;br /&gt;
|Scratch:&lt;br /&gt;
/scratch/&amp;lt;userid&amp;gt;&lt;br /&gt;
on *PENZIAS, ANDY, SALK, BOB*&lt;br /&gt;
|High Performance Parallel scratch filesystems. Work area for jobs, datasets, restart files, files to be pre-/post processed. Temporary space for data that will be removed within a short amount of time.&lt;br /&gt;
|Not globally accessible.&lt;br /&gt;
Separate /scratch/&amp;lt;userid&amp;gt; exists on each system. Visible on login and compute nodes of each system and on the data transfer nodes.&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|Files older than 2 weeks are automatically deleted &lt;br /&gt;
OR&lt;br /&gt;
when scratch filesystem reaches 70% utilization&lt;br /&gt;
|-&lt;br /&gt;
|Home:&lt;br /&gt;
/global/u/&amp;lt;userid&amp;gt;&lt;br /&gt;
|User home filespace. Essential data should be stored here, such as user&#039;s source code, documents, and data structures.&lt;br /&gt;
|Globally accessible on the login and on the data transfer nodes through native GPFS or NFS mounts&lt;br /&gt;
|Nominally 50 GB&lt;br /&gt;
|Yes, backed up nightly to tape. If the active copy is deleted, the most recent backup is stored for 30 days.&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Not purged&lt;br /&gt;
|-&lt;br /&gt;
|Project:&lt;br /&gt;
/SR1/&amp;lt;PID&amp;gt;&lt;br /&gt;
|Project space allocations&lt;br /&gt;
|Accessible on the login and on the data transfer nodes. Accessible outside CUNY HPC Center through iRODS.&lt;br /&gt;
|Allocated according to project needs&lt;br /&gt;
|Yes, backed up nightly to tape. If the active copy is deleted, the most recent backup is stored for 30 days and retrievable on request, but the iRODS metadata may be lost.&lt;br /&gt;
|}&lt;br /&gt;
•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;SR1&amp;lt;/font&amp;gt;&#039;&#039;&#039; is tuned for high bandwidth, redundancy, and resilience.  It is not optimal for handling large quantities of small files. If you need to archive more than a thousand files on &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;SR1&amp;lt;/font&amp;gt;&#039;&#039;&#039;, please create a single archive using &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;tar&amp;lt;/font&amp;gt;&#039;&#039;&#039; (see the sketch after this list).&lt;br /&gt;
&lt;br /&gt;
•	A separate &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; exists on each system.  On PENZIAS, SALK, KARLE, and ANDY, this is a Lustre parallel file system; on HERBERT it is NFS. These &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch&amp;lt;/font&amp;gt;&#039;&#039;&#039; directories are visible on the login and compute nodes of the system only and on the data transfer nodes, but are not shared across HPC systems.&lt;br /&gt;
&lt;br /&gt;
•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; is used as a high performance parallel scratch filesystem, for example, temporary files (e.g. restart files) should be stored here.&lt;br /&gt;
&lt;br /&gt;
•	There are no quotas on &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;, however any files older than 2 weeks are automatically deleted.  Also, a cleanup script is scheduled to run every two weeks or whenever the /scratch disk space utilization exceeds 70%.  Dot-files are generally left intact from these cleanup jobs.&lt;br /&gt;
&lt;br /&gt;
•	/scratch space is available to all users. If the scratch space is exhausted, jobs will not be able to run. Purge any files in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; that are no longer needed, even before the automatic deletion kicks in.&lt;br /&gt;
&lt;br /&gt;
•	The &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; directory may be empty when you log in; you will need to copy any files required for submitting your jobs (submission scripts, data sets) from &amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&#039;&#039;&#039;/global/u&#039;&#039;&#039;&amp;lt;/font&amp;gt; or from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;SR1&amp;lt;/font&amp;gt;&#039;&#039;&#039;.  Once your jobs complete, copy any files you need to keep back to &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u&amp;lt;/font&amp;gt;&#039;&#039;&#039; or &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;SR1&amp;lt;/font&amp;gt;&#039;&#039;&#039; and remove all files from /scratch.&lt;br /&gt;
&lt;br /&gt;
•	Do not use &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/tmp&amp;lt;/font&amp;gt;&#039;&#039;&#039; for storing temporary files. The file system where &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/tmp&amp;lt;/font&amp;gt;&#039;&#039;&#039; resides is in memory and is very small and slow. Files stored there will be regularly deleted by automatic procedures.&lt;br /&gt;
&lt;br /&gt;
•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; is not backed up and there is no provision for retaining data stored in these directories.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
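A minimal sketch of archiving a directory of many small files before storing it on &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;SR1&amp;lt;/font&amp;gt;&#039;&#039;&#039;, as recommended above (the directory and collection names are illustrative):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# pack the small files into a single archive&lt;br /&gt;
tar -cf small_files.tar ./small_files_dir&lt;br /&gt;
# store the single archive in the project collection on SR1&lt;br /&gt;
iput small_files.tar myProject&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;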
&lt;br /&gt;
==Data Handling Practices ==&lt;br /&gt;
===HPFS, i.e., /global/u  ===&lt;br /&gt;
&lt;br /&gt;
•	The HPFS is not an archive for non-HPC users. It is an archive for users who are processing data at the HPC Center.  “Parking” files on the &#039;&#039;&#039;HPFS&#039;&#039;&#039; as a back-up to local data stores is prohibited.  &lt;br /&gt;
&lt;br /&gt;
•	Do not store more than 1,000 files in a single directory. Store collections of small files into an archive (for example, tar). Note that for every file, a stub of about 4MB is kept on disk even if the rest of the file is migrated to tape, meaning that even migrated files take up some disk space. It also means that files smaller than the stub size are never migrated to tape because that would not make sense.  Storing a large number of small files in a single directory degrades the file system performance. &lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===/scratch===&lt;br /&gt;
&lt;br /&gt;
•	Please regularly remove unwanted files and directories and avoid keeping duplicate copies in multiple locations. File transfer among the HPC Center systems is very fast. It is forbidden to use &amp;quot;touch jobs&amp;quot; to prevent the cleaning policy from automatically deleting your files from the &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch&amp;lt;/font&amp;gt;&#039;&#039;&#039; directories. Use &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;tar -xmvf&amp;lt;/font&amp;gt;&#039;&#039;&#039;, not &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;tar -xvf&amp;lt;/font&amp;gt;&#039;&#039;&#039;, to unpack files.   &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;tar -xmvf&amp;lt;/font&amp;gt;&#039;&#039;&#039; updates the time stamp on the unpacked files.  The &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;tar -xvf&amp;lt;/font&amp;gt;&#039;&#039;&#039; command preserves the time stamp from the original file, not the time when the archive was unpacked. Consequently, the automatic deletion mechanism may remove files unpacked by &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;tar -xvf&amp;lt;/font&amp;gt;&#039;&#039;&#039;, even when they are only a few days old.&lt;/div&gt;</summary>
		<author><name>Alex</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Data_Storage_and_Management_System&amp;diff=974</id>
		<title>Data Storage and Management System</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Data_Storage_and_Management_System&amp;diff=974"/>
		<updated>2026-03-13T00:25:45Z</updated>

		<summary type="html">&lt;p&gt;Alex: /* &amp;quot;Home&amp;quot; directories are on /global/u */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This file system resides on the Hybrid Parallel File System (HPFS).  &lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
==&amp;quot;Home&amp;quot; directories are on &amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u&amp;lt;/font&amp;gt;==&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u&amp;lt;/font&amp;gt;&#039;&#039;&#039; is a standard parallel high-performance Linux file system based on the HPE parallel file system (HPFS). It holds the home directories of individual users. When users request and are granted an allocation of HPC resources, they are assigned a &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; and a 100 GB allocation of disk space for their home directory on &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;. These &#039;&#039;&#039;home&#039;&#039;&#039; directories are on the global file system, not on local disks on the nodes. That means that 1. data can be accessed only from HPFS, and 2. no local storage on the nodes is available.  The exception is a few nodes in the CONDO tier that are designed to have local &amp;quot;buffer&amp;quot; space. All home directories are backed up on a weekly basis.&lt;br /&gt;
&lt;br /&gt;
==&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch&amp;lt;/font&amp;gt;==&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch&amp;lt;/font&amp;gt;&#039;&#039;&#039; is a fast file system from which all jobs start. There is no quota on scratch, but &#039;&#039;&#039;scratch&#039;&#039;&#039; files are temporary and are &#039;&#039;&#039;&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;not backed up&amp;lt;/font color&amp;gt;&#039;&#039;&#039;.  &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch&amp;lt;/font&amp;gt;&#039;&#039;&#039; is used by jobs queued for or in execution.  Output from jobs may ONLY temporarily be located in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch&amp;lt;/font&amp;gt;&#039;&#039;&#039; (up to 30 days).   &lt;br /&gt;
&lt;br /&gt;
In order to submit a job for execution, a user must &#039;&#039;&#039;stage&#039;&#039;&#039; or &#039;&#039;&#039;mount&#039;&#039;&#039; the files required by the job to &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch&amp;lt;/font&amp;gt;&#039;&#039;&#039; from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u&amp;lt;/font&amp;gt;&#039;&#039;&#039; using UNIX commands and/or from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;SR1&amp;lt;/font&amp;gt;&#039;&#039;&#039; using &#039;&#039;&#039;iRODS&#039;&#039;&#039; commands. The latter mounts data directly from project space. &lt;br /&gt;
&lt;br /&gt;
Files in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch&amp;lt;/font&amp;gt;&#039;&#039;&#039; on a system are &#039;&#039;&#039;automatically relocated to a local archive&#039;&#039;&#039; when their residence on scratch exceeds 90 days. Active files are not relocated. Thus users must not store valuable data or compiled codes on scratch, since the latter are static types of files. Upon the move the user gets a warning from HPCC and must move the files either to his/her home directory or to SR1. Files left in the temporary archive for 45 days are removed. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==“Project” directories==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;“Project”&#039;&#039;&#039; directories are managed through &#039;&#039;&#039;iRODS&#039;&#039;&#039; and accessible through iRODS commands, not standard UNIX commands.   In iRODS terminology, a “collection” is the equivalent of “directory”. &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;“Project”&#039;&#039;&#039; is an activity that usually involves multiple users and/or many individual data files.  A &#039;&#039;&#039;“Project”&#039;&#039;&#039; is normally led by a “Principal Investigator” (PI), who is a faculty member or a research scientist.   The PI is the individual responsible to the University or a granting agency for the “Project”.  The PI has overall responsibility for “Project” data and “Project” data management. To establish a Project, the PI completes and submits the online “Project Application Form”. Project data are stored on project space on main file system. Valuable parts of the projects must be curated by PI&#039;s and stored in HPCC archive.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Typical Workflow==&lt;br /&gt;
Typical workflows for Penzias, Appel, and Karle are described below:&lt;br /&gt;
&lt;br /&gt;
1. Copying files from a user’s home directory or from &#039;&#039;&#039;SR1&#039;&#039;&#039; to &#039;&#039;&#039;SCRATCH&#039;&#039;&#039;.&amp;lt;br /&amp;gt;&lt;br /&gt;
If working with &#039;&#039;&#039;HOME&#039;&#039;&#039;:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&lt;br /&gt;
   cd /scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&lt;br /&gt;
   mkdir &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;mySLURM_Job&amp;lt;/font color&amp;gt; &amp;amp;&amp;amp; cd &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;mySLURM_Job&amp;lt;/font color&amp;gt;&lt;br /&gt;
   cp /global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font&amp;gt;/a.out ./&lt;br /&gt;
   cp /global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font&amp;gt;/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;mydatafile &amp;lt;/font&amp;gt;./&lt;br /&gt;
&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If working with &#039;&#039;&#039;SR1 (storage repository)&#039;&#039;&#039;:&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
   cd /scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&lt;br /&gt;
   mkdir &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;mySLURM_Job&amp;lt;/font color&amp;gt; &amp;amp;&amp;amp; cd &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;mySLURM_Job&amp;lt;/font color&amp;gt;&lt;br /&gt;
   iget &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font color&amp;gt;/a.out &lt;br /&gt;
   iget &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font color&amp;gt;/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;mydatafile&amp;lt;/font color&amp;gt;&lt;br /&gt;
&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. Prepare a SLURM job script. A typical SLURM script is similar to the following:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&lt;br /&gt;
   #!/bin/bash &lt;br /&gt;
   #SBATCH --partition production &lt;br /&gt;
   #SBATCH -J test &lt;br /&gt;
   #SBATCH --nodes 1 &lt;br /&gt;
   #SBATCH --ntasks 8 &lt;br /&gt;
   #SBATCH --mem 4000&lt;br /&gt;
   echo &amp;quot;Starting…&amp;quot; &lt;br /&gt;
&lt;br /&gt;
   cd $SLURM_SUBMIT_DIR&lt;br /&gt;
   mpirun -np 4 ./a.out ./&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;mydatafile&amp;lt;/font color&amp;gt; &amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myoutputs&amp;lt;/font color&amp;gt;&lt;br /&gt;
   echo &amp;quot;Done…&amp;quot;&lt;br /&gt;
&amp;lt;/font&amp;gt;&lt;br /&gt;
Your SLURM may be different depending on your needs. Read section Submitting Jobs for a reference.&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. Run the job &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&lt;br /&gt;
   sbatch ./&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;mySLURM_script&amp;lt;/font color&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. Once job is finished, clean up &#039;&#039;&#039;SCRATCH&#039;&#039;&#039; and store outputs in your user home directory or in &#039;&#039;&#039;SR1&#039;&#039;&#039;.&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If working with &#039;&#039;&#039;HOME&#039;&#039;&#039;:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&lt;br /&gt;
   mv ./myoutputs /global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font color&amp;gt;/.&lt;br /&gt;
   cd ../&lt;br /&gt;
   rm -rf &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;mySLURM_Job&amp;lt;/font color&amp;gt; &amp;lt;/font&amp;gt;&lt;br /&gt;
== iRODS (The iRODS Section is in REVIEW and may not be CURRENT) ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;iRODS&#039;&#039;&#039; is the integrated Rule-Oriented Data-management System, a&lt;br /&gt;
community-driven, open source, data grid software solution. &#039;&#039;&#039;iRODS&#039;&#039;&#039; is&lt;br /&gt;
designed to abstract data services from data storage hardware and&lt;br /&gt;
provide users with hardware-agnostic way to manipulate data. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;iRODS&#039;&#039;&#039; is a primary tool that is used by the CUNY HPCC users to&lt;br /&gt;
seamlessly access 1PB storage resource (further referenced as &#039;&#039;&#039;SR1&#039;&#039;&#039;&lt;br /&gt;
here) from any of the HPCC&#039;s computational systems.&lt;br /&gt;
&lt;br /&gt;
Access to &#039;&#039;&#039;SR1&#039;&#039;&#039; is provided via so-called &#039;&#039;&#039;i-commands&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;iinit&lt;br /&gt;
ils&lt;br /&gt;
imv&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A comprehensive list of i-commands with detailed descriptions can be&lt;br /&gt;
obtained at [https://wiki.irods.org/index.php/icommands iRODS wiki].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To obtain quick help on any of the commands while being logged into&lt;br /&gt;
any of the HPCC&#039;s machines type &amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&#039;&#039;&#039;i-command -h&#039;&#039;&#039;&amp;lt;/font&amp;gt;. For example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ils -h&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
  &lt;br /&gt;
Following is the list of some of the most relevant &#039;&#039;&#039;i-commands&#039;&#039;&#039;:&lt;br /&gt;
  &lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;iinit&amp;lt;/font&amp;gt;&#039;&#039;&#039; -- Initialize session and store your password in a scrambled form for automatic use by other icommands.&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;iput&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Store a file&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;iget&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Get a file&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;imkdir&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Like mkdir, make an iRODS collection (similar to a directory or Windows folder)&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;ichmod&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Like chmod, allow (or later restrict) access to your data objects by other users.&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;icp&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Like cp or rcp, copy an iRODS data object&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;irm&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Like rm, remove an iRODS data object&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;ils&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Like ls, list iRODS data objects (files) and collections (directories)&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;ipwd&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Like pwd, print the iRODS current working directory&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;icd&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Like cd, change the iRODS current working directory&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;ichksum&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Checksum one or more data-object or collection from iRODS space.&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;imv&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Moves/renames an irods data-object or collection.&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;irmtrash&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Remove one or more data-object or collection from a RODS trash bin.&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;imeta&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Add, remove, list, or query user-defined Attribute-Value-Unit triplets metadata&lt;br /&gt;
:&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;iquest&amp;lt;/font&amp;gt;&#039;&#039;&#039; --  Query (pose a question to) the ICAT, via a SQL-like interface&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Before using any of the i-commands, users need to identify themselves to the iRODS server by running the command&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# iinit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and providing their HPCC password. &lt;br /&gt;
&lt;br /&gt;
Typical workflow that involves operations on files stored in SR1&lt;br /&gt;
include storing/getting data to and from SR1, tagging data with &lt;br /&gt;
metadata, searching for data, sharing (setting permissions). &lt;br /&gt;
&lt;br /&gt;
==== Storing data to SR1 ====&lt;br /&gt;
 &lt;br /&gt;
1. Create &#039;&#039;&#039;iRODS&#039;&#039;&#039; directory (aka &#039;collection&#039;):&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&lt;br /&gt;
   # imkdir &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font color&amp;gt;&lt;br /&gt;
&amp;lt;/font&amp;gt;&lt;br /&gt;
2. Store all files &amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&#039;&#039;&#039;&#039;myfile*&#039;&#039;&#039;&#039;&amp;lt;/font face&amp;gt; into this directory (collection):&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&lt;br /&gt;
   # iput &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myfile* myProject&amp;lt;/font color&amp;gt;/.&lt;br /&gt;
&amp;lt;/font&amp;gt;&lt;br /&gt;
3. Verify that files are stored:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&lt;br /&gt;
   # ils&lt;br /&gt;
   /cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;:&lt;br /&gt;
   C- /cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font color&amp;gt;&lt;br /&gt;
   # ils &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font color&amp;gt;&lt;br /&gt;
   /cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font color&amp;gt;:&lt;br /&gt;
      myfile1&lt;br /&gt;
      myfile2&lt;br /&gt;
      myfile3&lt;br /&gt;
&amp;lt;/font&amp;gt;&lt;br /&gt;
   &lt;br /&gt;
Symbol &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&#039;C-&#039;&amp;lt;/font&amp;gt;&#039;&#039;&#039; in the beginning of output of &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&#039;ils&#039;&amp;lt;/font&amp;gt;&#039;&#039;&#039; shows that listed item is a collection.&lt;br /&gt;
&lt;br /&gt;
4. Combining &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&#039;ils&#039;, &#039;imkdir&#039;, &#039;iput&#039;, &#039;icp&#039;, &#039;ipwd&#039;, &#039;imv&#039;&#039;&#039;&#039;&amp;lt;/font&amp;gt; user can create iRODS directories and store files in them similarly to what is normally done with UNIX commands &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&#039;ls&#039;, &#039;mkdir&#039;, &#039;cp&#039;, &#039;pwd&#039;, &#039;mv&#039;&#039;&#039;&#039;&amp;lt;/font&amp;gt; etc...&lt;br /&gt;
&lt;br /&gt;
==== Getting data from SR1 ====&lt;br /&gt;
&lt;br /&gt;
1. To copy file from SR1 to current working directory run&lt;br /&gt;
   # iget &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font color&amp;gt;/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myfile1&amp;lt;/font color&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. Now listing current working directory should reveal &#039;&#039;&#039;myfile1&#039;&#039;&#039;:&lt;br /&gt;
   # ls&lt;br /&gt;
   &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myfile1&amp;lt;/font color&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. Instead of individual files the whole directory (with&lt;br /&gt;
sub-directories) can be copied with &#039;&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;-r&amp;lt;/font&amp;gt;&#039;&#039;&#039;&#039; flag (stands for&lt;br /&gt;
&#039;recursive&#039;)&lt;br /&gt;
   # iget -r &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font color&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
NOTE: wildcards are not supported, therefore the command below &amp;lt;u&amp;gt;will not work&amp;lt;/u&amp;gt;&lt;br /&gt;
   # iget &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myProject&amp;lt;/font color&amp;gt;/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;myfile&amp;lt;/font color&amp;gt;*&lt;br /&gt;
&lt;br /&gt;
=== Tagging data with metadata ===&lt;br /&gt;
   &lt;br /&gt;
iRODS provides users with an extremely powerful mechanism for managing data with metadata. While working with large datasets it is easy to forget what is stored in which file. Metadata tags help organize data in an easy and reliable manner.&lt;br /&gt;
&lt;br /&gt;
Let&#039;s tag files from previous example with some metadata:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# imeta add -d myProject/myfile1 zvalue 15 meters&lt;br /&gt;
AVU added to 1 data-objects&lt;br /&gt;
# imeta add -d myProject/myfile1 colorLabel RED&lt;br /&gt;
AVU added to 1 data-objects&lt;br /&gt;
# imeta add -d myProject/myfile1 comment &amp;quot;This is file number 1&amp;quot;&lt;br /&gt;
AVU added to 1 data-objects&lt;br /&gt;
# imeta add -d myProject/myfile2 zvalue 10 meters&lt;br /&gt;
AVU added to 1 data-objects&lt;br /&gt;
# imeta add -d myProject/myfile2 colorLabel RED&lt;br /&gt;
AVU added to 1 data-objects&lt;br /&gt;
# imeta add -d myProject/myfile2 comment &amp;quot;This is file number 2&amp;quot;&lt;br /&gt;
AVU added to 1 data-objects&lt;br /&gt;
# imeta add -d myProject/myfile3 zvalue 15 meters&lt;br /&gt;
AVU added to 1 data-objects&lt;br /&gt;
# imeta add -d myProject/myfile3 colorLabel BLUE&lt;br /&gt;
AVU added to 1 data-objects&lt;br /&gt;
# imeta add -d myProject/myfile3 comment &amp;quot;This is file number 3&amp;quot;&lt;br /&gt;
AVU added to 1 data-objects&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Here we&#039;ve tagged myfile1 with 3 metadata labels:&lt;br /&gt;
&lt;br /&gt;
- zvalue 15 meters&lt;br /&gt;
&lt;br /&gt;
- colorLabel RED&lt;br /&gt;
&lt;br /&gt;
- comment &amp;quot;This is file number 1&amp;quot;&lt;br /&gt;
  &lt;br /&gt;
Similar tags were added to &#039;myfile2&#039; and &#039;myfile3&#039;&lt;br /&gt;
&lt;br /&gt;
Metadata come in the form of an AVU -- Attribute|Value|Unit. As seen from the above examples, the Unit is optional. &lt;br /&gt;
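For instance, an attribute-value pair can be added without any unit at all (an illustrative tag, not part of the example above):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# add an AVU with no unit&lt;br /&gt;
imeta add -d myProject/myfile2 runID 42&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;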
&lt;br /&gt;
Let&#039;s list all metadata assigned to file &#039;myfile1&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# imeta ls -d myProject/myfile1&lt;br /&gt;
AVUs defined for dataObj myProject/myfile1:&lt;br /&gt;
attribute: zvalue&lt;br /&gt;
value: 15&lt;br /&gt;
units: meters&lt;br /&gt;
----&lt;br /&gt;
attribute: colorLabel&lt;br /&gt;
value: RED&lt;br /&gt;
units:&lt;br /&gt;
----&lt;br /&gt;
attribute: comment&lt;br /&gt;
value: This is file number 1&lt;br /&gt;
units:&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
To remove an AVU assigned to a file run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# imeta rm -d myProject/myfile1 zvalue 15 meters&lt;br /&gt;
# imeta ls -d myProject/myfile1&lt;br /&gt;
AVUs defined for dataObj myProject/myfile1:&lt;br /&gt;
attribute: colorLabel&lt;br /&gt;
value: RED&lt;br /&gt;
units:&lt;br /&gt;
----&lt;br /&gt;
attribute: comment&lt;br /&gt;
value: This is file number 1&lt;br /&gt;
units:&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
# imeta add -d myProject/myfile1 zvalue 15 meters&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Metadata may be assigned to directories as well:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# imeta add -C myProject simulationsPool 1&lt;br /&gt;
# imeta ls -C myProject&lt;br /&gt;
AVUs defined for collection myProject:&lt;br /&gt;
attribute: simulationsPool&lt;br /&gt;
value: 1&lt;br /&gt;
units:&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note the &#039;-C&#039; key that is used instead of &#039;-d&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Searching for data ===&lt;br /&gt;
&lt;br /&gt;
The power of metadata becomes obvious when data needs to be found in&lt;br /&gt;
large collections. Here is an illustration of how easily this is&lt;br /&gt;
done with iRODS via &#039;imeta&#039; queries:&lt;br /&gt;
&lt;br /&gt;
 # imeta qu -d zvalue = 15&lt;br /&gt;
 collection: /cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;/myProject&lt;br /&gt;
 dataObj: myfile1&lt;br /&gt;
 ----&lt;br /&gt;
 collection: /cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;/myProject&lt;br /&gt;
 dataObj: myfile3&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We see both files that were tagged with label &#039;zvalue 15 meters&#039;.&lt;br /&gt;
Here is a different query:&lt;br /&gt;
 &lt;br /&gt;
 # imeta qu -d colorLabel = RED&lt;br /&gt;
 collection: /cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;/myProject&lt;br /&gt;
 dataObj: myfile1&lt;br /&gt;
 ----&lt;br /&gt;
 collection: /cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;/myProject&lt;br /&gt;
 dataObj: myfile2&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another powerful mechanism to query data is provided by &#039;iquest&#039;. &lt;br /&gt;
The following examples show some of the &#039;iquest&#039; capabilities:&lt;br /&gt;
 &lt;br /&gt;
 iquest &amp;quot;SELECT DATA_NAME, DATA_SIZE WHERE DATA_RESC_NAME like &#039;cuny%&#039;&amp;quot;&lt;br /&gt;
 iquest &amp;quot;For %-12.12s size is %s&amp;quot; &amp;quot;SELECT DATA_NAME ,  DATA_SIZE  WHERE COLL_NAME = &#039;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&#039;&amp;quot;&lt;br /&gt;
 iquest &amp;quot;SELECT COLL_NAME WHERE COLL_NAME like &#039;/cunyZone/home/%&#039; AND USER_NAME like &#039;&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&#039;&amp;quot;&lt;br /&gt;
 iquest &amp;quot;User %-6.6s has %-5.5s access to file %s&amp;quot; &amp;quot;SELECT USER_NAME,  DATA_ACCESS_NAME, DATA_NAME WHERE COLL_NAME = &#039;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&#039;&amp;quot;&lt;br /&gt;
 iquest &amp;quot; %-5.5s access has been given to user %-6.6s for the file %s&amp;quot; &amp;quot;SELECT DATA_ACCESS_NAME, USER_NAME, DATA_NAME WHERE COLL_NAME = &#039;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&#039;&amp;quot;&lt;br /&gt;
 iquest no-distinct &amp;quot;select META_DATA_ATTR_NAME&amp;quot;&lt;br /&gt;
 iquest  &amp;quot;select COLL_NAME, DATA_NAME WHERE DATA_NAME like &#039;myfile%&#039;&amp;quot;&lt;br /&gt;
 iquest &amp;quot;User %-9.9s uses %14.14s bytes in %8.8s files in &#039;%s&#039;&amp;quot; &amp;quot;SELECT USER_NAME, sum(DATA_SIZE),count(DATA_NAME),RESC_NAME&amp;quot;&lt;br /&gt;
 iquest &amp;quot;select sum(DATA_SIZE) where COLL_NAME = &#039;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&#039;&amp;quot;&lt;br /&gt;
 iquest &amp;quot;select sum(DATA_SIZE) where COLL_NAME like &#039;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;%&#039;&amp;quot;&lt;br /&gt;
 iquest &amp;quot;select sum(DATA_SIZE), RESC_NAME where COLL_NAME like &#039;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;%&#039;&amp;quot;&lt;br /&gt;
 iquest &amp;quot;select order_desc(DATA_ID) where COLL_NAME like &#039;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;%&#039;&amp;quot;&lt;br /&gt;
 iquest &amp;quot;select count(DATA_ID) where COLL_NAME like &#039;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;%&#039;&amp;quot;&lt;br /&gt;
 iquest &amp;quot;select RESC_NAME where RESC_CLASS_NAME IN (&#039;bundle&#039;,&#039;archive&#039;)&amp;quot;&lt;br /&gt;
 iquest &amp;quot;select DATA_NAME,DATA_SIZE where DATA_SIZE BETWEEN &#039;100000&#039; &#039;100200&#039;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
=== Sharing data ===&lt;br /&gt;
&lt;br /&gt;
Access to data can be controlled via the &#039;ichmod&#039; command. Its&lt;br /&gt;
behavior is similar to the UNIX &#039;chmod&#039; command. For example, if there is a&lt;br /&gt;
need to provide user &amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&#039;&#039;&#039;&amp;lt;userid1&amp;gt;&#039;&#039;&#039;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt; with read access to file&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;myProject/myfile1&amp;lt;/font&amp;gt;&#039;&#039;&#039;, execute the following command:&lt;br /&gt;
   ichmod read &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid1&amp;gt;&amp;lt;/font color&amp;gt; myProject/myfile1&lt;br /&gt;
&lt;br /&gt;
To see who has access to a file/directory use:&lt;br /&gt;
   # ils -A myProject/myfile1&lt;br /&gt;
   /cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;/myProject/myfile1&lt;br /&gt;
   ACL - &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid1&amp;gt;&amp;lt;/font color&amp;gt;&lt;br /&gt;
   #cunyZone:read object   &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;#cunyZone:own&lt;br /&gt;
&lt;br /&gt;
In the above example user &amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&#039;&#039;&#039;&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid1&amp;gt;&amp;lt;/font color&amp;gt;&#039;&#039;&#039;&amp;lt;/font&amp;gt; has read access to the file and&lt;br /&gt;
user &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; is the owner of the file. &lt;br /&gt;
&lt;br /&gt;
Possible levels of access to a data object are null/read/write/own.&lt;br /&gt;
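&lt;br /&gt;
Two related sketches (same placeholder &amp;lt;userid1&amp;gt; as above): access can be granted to a whole collection recursively with the &#039;-r&#039; flag, and a previously granted permission can be removed by setting it to &#039;null&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# ichmod -r read &amp;lt;userid1&amp;gt; myProject&lt;br /&gt;
# ichmod null &amp;lt;userid1&amp;gt; myProject/myfile1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;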
==Backups (IN REVIEW)==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Backups.&#039;&#039;&#039;	User directories under &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u&amp;lt;/font&amp;gt;&#039;&#039;&#039; and project files are backed up automatically to a remote tape silo system over a fiber optic network. Backups are performed daily. &lt;br /&gt;
&lt;br /&gt;
If the user deletes a file from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u&amp;lt;/font&amp;gt;&#039;&#039;&#039;, it will remain on the tape silo system for 30 days, after which it will be deleted and cannot be recovered. If a user finds it necessary to recover a file within the 30-day window, the user must promptly submit a request to [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu].&lt;br /&gt;
&lt;br /&gt;
Less frequently accessed files are automatically transferred to the HPC Center robotic tape system, freeing up space in the disk storage pool and making it available for more actively used files. The selection criteria for the migration are age and size of a file. If a file is not accessed for 90 days, it may be moved to a tape in the tape library – in fact to two tapes, for backup. This is fully transparent to the user. When a file is needed, the system will copy the file back to the appropriate disk directory. No user action is required.&lt;br /&gt;
&lt;br /&gt;
==Data retention and account expiration policy (IN REVIEW)==&lt;br /&gt;
&lt;br /&gt;
Project directories on SR1 are retained as long as the project is active.  The HPC Center will coordinate with the Principal Investigator of the project before deleting a project directory.  If the PI is no longer with CUNY, the HPC Center will coordinate with the PI’s departmental chair or Research Dean, whichever is appropriate.&lt;br /&gt;
&lt;br /&gt;
For user accounts, current user directories under /global/u are retained as long as the account is active.  If a user account is inactive for one year, the HPC Center will attempt to contact the user and request that the data be removed from the system.  If there is no response from the user within three months of the initial notice, or if the user cannot be reached, the user directory will be purged. &lt;br /&gt;
&lt;br /&gt;
==DSMS Technical Summary (IN REVIEW)==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!File Space&lt;br /&gt;
!Purpose&lt;br /&gt;
!Accessibility&lt;br /&gt;
!Quota&lt;br /&gt;
!Backups&lt;br /&gt;
!Purges&lt;br /&gt;
|-&lt;br /&gt;
|Scratch:&lt;br /&gt;
/scratch/&amp;lt;userid&amp;gt;&lt;br /&gt;
on &#039;&#039;&#039;PENZIAS, ANDY, SALK, BOB&#039;&#039;&#039;&lt;br /&gt;
|High Performance Parallel scratch filesystems. Work area for jobs, datasets, restart files, files to be pre-/post processed. Temporary space for data that will be removed within a short amount of time.&lt;br /&gt;
|Not globally accessible.&lt;br /&gt;
Separate /scratch/&amp;lt;userid&amp;gt; exists on each system. Visible on login and compute nodes of each system and on the data transfer nodes.&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|Files older than 2 weeks are automatically deleted &lt;br /&gt;
OR&lt;br /&gt;
when scratch filesystem reaches 70% utilization&lt;br /&gt;
|-&lt;br /&gt;
|Home:&lt;br /&gt;
/global/u/&amp;lt;userid&amp;gt;&lt;br /&gt;
|User home filespace. Essential data should be stored here, such as user&#039;s source code, documents, and data structures.&lt;br /&gt;
|Globally accessible on the login and on the data transfer nodes through native GPFS or NFS mounts&lt;br /&gt;
|Nominally 50 GB&lt;br /&gt;
|Yes, backed up nightly to tape. If the active copy is deleted, the most recent backup is stored for 30 days.&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Not purged&lt;br /&gt;
|-&lt;br /&gt;
|Project:&lt;br /&gt;
/SR1/&amp;lt;PID&amp;gt;&lt;br /&gt;
|Project space allocations&lt;br /&gt;
|Accessible on the login and on the data transfer nodes. Accessible outside CUNY HPC Center through iRODS.&lt;br /&gt;
|Allocated according to project needs&lt;br /&gt;
|Yes, backed up nightly to tape. If the active copy is deleted, the most recent backup is stored for 30 days and retrievable on request, but the iRODS metadata may be lost.&lt;br /&gt;
|}&lt;br /&gt;
•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;SR1&amp;lt;/font&amp;gt;&#039;&#039;&#039; is tuned for high bandwidth, redundancy, and resilience. It is not optimal for handling large quantities of small files. If you need to archive more than a thousand files on &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;SR1&amp;lt;/font&amp;gt;&#039;&#039;&#039;, please create a single archive using &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;tar&amp;lt;/font&amp;gt;&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
•	A separate &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; exists on each system. On PENZIAS, SALK, KARLE, and ANDY this is a Lustre parallel file system; on HERBERT it is NFS. These &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch&amp;lt;/font&amp;gt;&#039;&#039;&#039; directories are visible on the login and compute nodes of that system and on the data transfer nodes, but are not shared across HPC systems.&lt;br /&gt;
&lt;br /&gt;
•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; is used as a high performance parallel scratch filesystem, for example, temporary files (e.g. restart files) should be stored here.&lt;br /&gt;
&lt;br /&gt;
•	There are no quotas on &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;, however any files older than 2 weeks are automatically deleted.  Also, a cleanup script is scheduled to run every two weeks or whenever the /scratch disk space utilization exceeds 70%.  Dot-files are generally left intact from these cleanup jobs.&lt;br /&gt;
&lt;br /&gt;
•	/scratch space is available to all users. If the scratch space is exhausted, jobs will not be able to run. Purge any files in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; that are no longer needed, even before the automatic deletion kicks in.&lt;br /&gt;
&lt;br /&gt;
•	The &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; directory may be empty when you log in; you will need to copy any files required for submitting your jobs (submission scripts, data sets) from &amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;&#039;&#039;&#039;/global/u&#039;&#039;&#039;&amp;lt;/font&amp;gt; or from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;SR1&amp;lt;/font&amp;gt;&#039;&#039;&#039;. Once your jobs complete, copy any files you need to keep back to &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u&amp;lt;/font&amp;gt;&#039;&#039;&#039; or &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;SR1&amp;lt;/font&amp;gt;&#039;&#039;&#039; and remove all files from /scratch (see the example after this list).&lt;br /&gt;
&lt;br /&gt;
•	Do not use &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/tmp&amp;lt;/font&amp;gt;&#039;&#039;&#039; for storing temporary files. The file system backing &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/tmp&amp;lt;/font&amp;gt;&#039;&#039;&#039; is very small and slow, and files there are regularly deleted by automatic procedures.&lt;br /&gt;
&lt;br /&gt;
•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; is not backed up and there is no provision for retaining data stored in these directories.&lt;br /&gt;
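&lt;br /&gt;
A minimal sketch of this staging workflow (the directory names &#039;myjob&#039; and &#039;results&#039; are placeholders):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# copy input files from the home space to scratch before submitting the job&lt;br /&gt;
cp -r /global/u/&amp;lt;userid&amp;gt;/myjob /scratch/&amp;lt;userid&amp;gt;/&lt;br /&gt;
cd /scratch/&amp;lt;userid&amp;gt;/myjob&lt;br /&gt;
# ... submit and run the job from here ...&lt;br /&gt;
# when the job has finished, copy the results back and clean up scratch&lt;br /&gt;
cp -r /scratch/&amp;lt;userid&amp;gt;/myjob/results /global/u/&amp;lt;userid&amp;gt;/myjob/&lt;br /&gt;
rm -rf /scratch/&amp;lt;userid&amp;gt;/myjob&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;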
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Data Handling Practices ==&lt;br /&gt;
===HPFS, i.e., /global/u  ===&lt;br /&gt;
&lt;br /&gt;
•	The HPFS is not an archive for non-HPC users. It is an archive for users who are processing data at the HPC Center.  “Parking” files on the &#039;&#039;&#039;HPFS&#039;&#039;&#039; as a back-up to local data stores is prohibited.  &lt;br /&gt;
&lt;br /&gt;
•	Do not store more than 1,000 files in a single directory. Store collections of small files in an archive (for example, tar). Note that for every file a stub of about 4 MB is kept on disk even if the rest of the file is migrated to tape, so even migrated files take up some disk space. Files smaller than the stub size are therefore never migrated to tape, since migration would save no space. Storing a large number of small files in a single directory also degrades file system performance. &lt;br /&gt;
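&lt;br /&gt;
For instance, a directory of small files (here called &#039;small_files&#039;, a placeholder name) can be packed into a single archive before it is stored, and unpacked again when needed:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# tar -cvf small_files.tar small_files/&lt;br /&gt;
# tar -xmvf small_files.tar&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The &#039;-m&#039; flag on extraction refreshes the time stamps of the unpacked files, which matters on /scratch (see the notes below).&lt;br /&gt;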
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===/scratch===&lt;br /&gt;
&lt;br /&gt;
•	Please regularly remove unwanted files and directories and avoid keeping duplicate copies in multiple locations. File transfer among the HPC Center systems is very fast. It is forbidden to use &amp;quot;touch jobs&amp;quot; to prevent the cleaning policy from automatically deleting your files from the &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch&amp;lt;/font&amp;gt;&#039;&#039;&#039; directories. Use &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;tar -xmvf&amp;lt;/font&amp;gt;&#039;&#039;&#039;, not &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;tar -xvf&amp;lt;/font&amp;gt;&#039;&#039;&#039;, to unpack files.   &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;tar -xmvf&amp;lt;/font&amp;gt;&#039;&#039;&#039; updates the time stamp on the unpacked files.  The &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;tar -xvf&amp;lt;/font&amp;gt;&#039;&#039;&#039; command preserves the time stamp of the original file rather than the time when the archive was unpacked. Consequently, the automatic deletion mechanism may remove files unpacked by &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;tar -xvf&amp;lt;/font&amp;gt;&#039;&#039;&#039; even if they are only a few days old.&lt;/div&gt;</summary>
		<author><name>Alex</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=973</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=973"/>
		<updated>2026-03-09T17:16:48Z</updated>

		<summary type="html">&lt;p&gt;Alex: /* Empire AI and CUNY-HPCC */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
[[Image:hpcc-panorama3.png]]&lt;br /&gt;
&lt;br /&gt;
== Mission of CUNY-HPCC ==&lt;br /&gt;
The City University of New York (CUNY) High Performance Computing Center (HPCC) is a core research and education facility for the University. It is located on the campus of the College of Staten Island, 2800 Victory Boulevard, Staten Island, New York 10314. The core mission of CUNY-HPCC is to advance education and boost scientific research and discovery at the University by managing state-of-the-art computing infrastructure and by providing comprehensive research support services including domain-specific expertise in various aspects of computationally intensive research. CUNY is a member of the Empire AI (EAI) consortium, so CUNY-HPCC is a stepping stone for CUNY researchers toward EAI advanced facilities.  &lt;br /&gt;
&lt;br /&gt;
== Empire AI and CUNY-HPCC ==&lt;br /&gt;
The Empire AI consortium includes the &#039;&#039;&#039;CUNY Graduate Center&#039;&#039;&#039;, Columbia University, Cornell University, Icahn School of Medicine, New York University, Rochester Institute of Technology, Rensselaer Polytechnic Institute, the State University of New York, University at Buffalo, and University of Rochester.  &#039;&#039;&#039;CUNY-HPCC provides support and maintains tickets for all CUNY users with allocation on EAI.  In addition, CUNY-HPCC is a stepping stone for CUNY researchers since it operates (on a smaller scale) architectures (including nodes with Hopper) similar to EAI, including extended &amp;quot;Alpha&amp;quot; servers as well as the new &amp;quot;Beta&amp;quot; computer.&#039;&#039;&#039; The latter will consist of 288 B200 GPUs and recently added RTX 6000 Pro nodes. The expected cost for &#039;&#039;&#039;EAI is $0.50/unit (SU),&#039;&#039;&#039; which will provide CUNY PIs with a rate that is well below a typical AWS rate.  One SU corresponds to one hour of H100 compute, and one hour of B200 compute corresponds to two SU.  In comparison, the CUNY-HPCC recovery costs for public servers are &#039;&#039;&#039;$0.015 per CPU hour (1 unit) and $0.09 per GPU hour (6 units).&#039;&#039;&#039;  See the section on HPCC access plans for further details.    &lt;br /&gt;
&lt;br /&gt;
== CUNY-HPCC services ==&lt;br /&gt;
CUNY-HPCC provides a professionally maintained, modern computational environment and architectures, advanced storage and fast interconnects. CUNY-HPCC:     &lt;br /&gt;
&lt;br /&gt;
:*Supports research computing at CUNY - for faculty, their collaborators at other universities, and their public and private sector partners, and CUNY students and research staff.&lt;br /&gt;
:*Provides state-of-the-art computing resources and comprehensive research support services including expertise and full support for users with allocation on EMPIRE-AI. &lt;br /&gt;
:*Creates opportunities for the CUNY research community to develop new partnerships with the government and private sectors.&lt;br /&gt;
:*Leverages HPCC capabilities to acquire additional research resources for its faculty and graduate students in existing and major new programs.&lt;br /&gt;
:*Maintains tickets for all CUNY users with allocation on EAI.&lt;br /&gt;
&lt;br /&gt;
==Organization of systems and data storage (architecture)==&lt;br /&gt;
&lt;br /&gt;
All user data and project data are kept on the Parallel File System Storage (PFSS). It holds both the user directories, which are mounted only on the login node(s) of all servers, and a specific partition called &#039;&#039;&#039;/scratch&#039;&#039;&#039; (see below). The main features of the &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition are: &#039;&#039;&#039;1.&#039;&#039;&#039; it is mounted on all computational nodes and on all login nodes; &#039;&#039;&#039;2.&#039;&#039;&#039; it is fast; &#039;&#039;&#039;3.&#039;&#039;&#039; it is temporary space -- &#039;&#039;&#039;not a home directory&#039;&#039;&#039; for accounts, nor a place for long-term data preservation. Users must use the &amp;quot;staging&amp;quot; procedure described below to ensure preservation of their data, codes and parameter files. The figure below is a schematic of the environment.   &lt;br /&gt;
&lt;br /&gt;
Upon  registering with HPCC every user will get 2 directories:&lt;br /&gt;
&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – this is temporary workspace on the HPC systems. Currently scratch resides on the same file system as global/u.&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – space for “home directory”, i.e., storage space on the DSMS for program, scripts, and data&lt;br /&gt;
&lt;br /&gt;
:•	In some instances a user will also have use of disk space on the DSMS in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;projectid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; (IRods). &lt;br /&gt;
[[File:HPCC_structure.png|center|frameless|900x900px]]&lt;br /&gt;
The &#039;&#039;&#039;/global/u/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory has a quota (see below for details) while &#039;&#039;&#039;/scratch/&amp;lt;userid&amp;gt;&#039;&#039;&#039; does not. However, the &#039;&#039;&#039;/scratch&#039;&#039;&#039; space is cleaned up following the rules described below. There are no guarantees of any kind that files in &#039;&#039;&#039;/scratch&#039;&#039;&#039; will be preserved through hardware crashes or clean-ups.  Access to all HPCC resources is provided by a bastion host called &#039;&#039;&#039;Chizen&#039;&#039;&#039;.  The Data Transfer Node called &#039;&#039;&#039;Cea&#039;&#039;&#039; allows file transfer from/to remote sites directly to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; or to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;.         &lt;br /&gt;
&lt;br /&gt;
==HPC systems==&lt;br /&gt;
&lt;br /&gt;
The HPC Center operates a variety of architectures in order to support complex and demanding workflows.  All computational resources of different types are united into a single hybrid cluster called Arrow. The latter deploys symmetric multiprocessor (SMP) nodes with and without GPUs, a distributed shared memory (NUMA) node, fat (large memory) nodes and advanced SMP nodes with multiple GPUs. The number of GPUs per node varies between 2 and 8, as do the GPU interface and GPU family. Thus the basic GPU nodes hold two Tesla K20m cards (attached through the PCIe interface), while the most advanced ones support eight Ampere A100 GPUs connected via the SXM interface.   &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Overview of Computational architectures&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SMP&#039;&#039;&#039; servers have several processors (working under a single operating system) which &amp;quot;share everything&amp;quot;. Thus all cpu-cores address a common memory block via a shared bus or data path. SMP servers support all combinations of memory vs. cpu (up to the limits of the particular computer). SMP servers are commonly used to run sequential or thread-parallel (e.g. OpenMP) jobs, and they may or may not have GPUs.  &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;cluster&#039;&#039;&#039; is defined as a single system comprising a set of servers interconnected with a high performance network. Specific software coordinates programs on and/or across those servers in order to perform computationally intensive tasks. The most common cluster type consists of several identical SMP servers connected via a fast interconnect.  Each SMP member of the cluster is called a &#039;&#039;&#039;node&#039;&#039;&#039;. All nodes run independent copies of the same operating system (OS). Some or all of the nodes may incorporate GPUs.  &lt;br /&gt;
&lt;br /&gt;
Hybrid clusters combine nodes of different architectures. For instance, the main CUNY-HPCC machine is a hybrid cluster called &#039;&#039;&#039;Arrow&#039;&#039;&#039;.  Sixty-two (62) of its nodes are identical GPU-enabled SMP servers, each with 2 x K20m GPUs; 3 are SMP nodes with extended memory (fat nodes); one node is a distributed shared memory node (NUMA, see below); and 2 are fat SMP servers especially designed to support 8 NVIDIA GPUs per node, connected via the SXM interface. In addition, HPCC operates the cluster &#039;&#039;&#039;Herbert&#039;&#039;&#039;, dedicated only to education. &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;distributed shared memory&#039;&#039;&#039; computer is a tightly coupled server in which the memory is physically distributed but logically unified as a single block. The system resembles an SMP, but the number of cpu cores and the amount of memory possible are far beyond the limitations of an SMP.  Because the memory is distributed, access times across the address space are non-uniform; hence this architecture is called Non-Uniform Memory Access (NUMA).  Similarly to SMP, &#039;&#039;&#039;NUMA&#039;&#039;&#039; systems are typically used for applications such as data mining and decision support systems, in which processing can be parceled out to a number of processors that collectively work on common data. HPCC operates the &#039;&#039;&#039;NUMA&#039;&#039;&#039; node of Arrow named &#039;&#039;&#039;Appel&#039;&#039;&#039;.  This node does not have GPUs. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039; Infrastructure systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
o	The Master Head Node (&#039;&#039;&#039;MHN/Arrow&#039;&#039;&#039;) is a redundant login node from which all jobs on all servers start. This server is not directly accessible from outside the CSI campus. Note that the name of the main server and of its login nodes is the same, Arrow, so users can access the Arrow login nodes using the name Arrow or MHN.  &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Chizen&#039;&#039;&#039; is a redundant gateway server which provides access to the protected HPCC domain.&lt;br /&gt;
&lt;br /&gt;
o	 &#039;&#039;&#039;Cea&#039;&#039;&#039; is a file transfer node allowing transfer of files between users’ computers and /scratch space or /global/u/&amp;lt;userid&amp;gt;. &#039;&#039;&#039;Cea&#039;&#039;&#039; is accessible directly (not only via &#039;&#039;&#039;Chizen&#039;&#039;&#039;), but allows only a limited set of shell commands.  &lt;br /&gt;
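&lt;br /&gt;
For example, a data file can be copied from a local workstation to scratch space through &#039;&#039;&#039;Cea&#039;&#039;&#039; with a standard &#039;&#039;&#039;scp&#039;&#039;&#039; command (the host name below is only illustrative; use the address supplied with your account information):&lt;br /&gt;
   scp mydata.tar &amp;lt;userid&amp;gt;@cea.csi.cuny.edu:/scratch/&amp;lt;userid&amp;gt;/&lt;br /&gt;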
&lt;br /&gt;
&#039;&#039;&#039;Table 1&#039;&#039;&#039; below provides a quick summary of the attributes of each of the sub-clusters of the main HPC Center cluster, called Arrow.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Master Head Node&lt;br /&gt;
!Sub System&lt;br /&gt;
!Tier&lt;br /&gt;
!Type&lt;br /&gt;
!Type of Jobs&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores&lt;br /&gt;
!GPUs&lt;br /&gt;
!Mem/node&lt;br /&gt;
!Mem/core&lt;br /&gt;
!Chip Type&lt;br /&gt;
!GPU Type and Interface&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;17&amp;quot; |&#039;&#039;&#039;&amp;lt;big&amp;gt;Arrow&amp;lt;/big&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Penzias&lt;br /&gt;
| rowspan=&amp;quot;10&amp;quot; |Advanced&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Hybrid Cluster&lt;br /&gt;
|Sequential &amp;amp; Parallel jobs w/wo GPU&lt;br /&gt;
|66&lt;br /&gt;
|16 &lt;br /&gt;
|2&lt;br /&gt;
|64 GB&lt;br /&gt;
|4 GB&lt;br /&gt;
|SB, EP 2.20 GHz&lt;br /&gt;
|K20m GPU, PCIe v2&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Sequential &amp;amp; Parallel jobs&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |1&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|1500 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|36&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|32 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Appel&lt;br /&gt;
|NUMA&lt;br /&gt;
|Massive Parallel, sequential, OpenMP&lt;br /&gt;
|1&lt;br /&gt;
|384&lt;br /&gt;
| -&lt;br /&gt;
|11 TB&lt;br /&gt;
|28 GB&lt;br /&gt;
|IB, 3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Cryo&lt;br /&gt;
|SMP&lt;br /&gt;
|Sequential and Parallel jobs, with GPU&lt;br /&gt;
|1&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|1500 GB&lt;br /&gt;
|37 GB&lt;br /&gt;
|SL, 2.40 GHz&lt;br /&gt;
|V100 (32GB) GPU, SXM&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Blue Moon&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Hybrid Cluster&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Sequential and Parallel jobs w/wo GPU&lt;br /&gt;
|24&lt;br /&gt;
|32&lt;br /&gt;
| -&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |192 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |6 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SL, 2.10 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|32 &lt;br /&gt;
|2&lt;br /&gt;
|V100(16GB) GPU, PCIe&lt;br /&gt;
|-&lt;br /&gt;
|Karle&lt;br /&gt;
|SMP&lt;br /&gt;
|Visualization, MATLAB/Mathematica&lt;br /&gt;
|1&lt;br /&gt;
|36*&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Chizen&lt;br /&gt;
|Gateway&lt;br /&gt;
|No jobs allowed&lt;br /&gt;
| colspan=&amp;quot;7&amp;quot; | -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CFD&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot; |Parallel, Seq, OpenMP &lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|768 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 4.8 GHz&lt;br /&gt;
|A40, PCIe, v4 &lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |PHYS&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|640 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4 GHz&lt;br /&gt;
|L40, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CHEM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.0 GHz&lt;br /&gt;
|A100/40, SXM&lt;br /&gt;
|-&lt;br /&gt;
|ASRC&lt;br /&gt;
|Condo&lt;br /&gt;
|SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|}&lt;br /&gt;
Note: SB = Intel(R) Sandy Bridge, HL = Intel(R) Haswell, IB = Intel(R) Ivy Bridge, SL = Intel(R) Xeon(R) Gold, ER = AMD(R) EPYC Rome, EM = AMD(R) EPYC Milan, EG = AMD(R) EPYC Genoa   &lt;br /&gt;
&lt;br /&gt;
== Recovery of  operational costs ==&lt;br /&gt;
CUNY-HPCC is a not-for-profit core research facility at CUNY. Our mission is to support all types of research that require advanced computational resources. CUNY-HPCC operations are not for profit and are NOT directly or indirectly sponsored by CUNY or the College of Staten Island (CSI). Consequently, CUNY-HPCC applies a cost recovery model that recaptures only &#039;&#039;&#039;&amp;lt;u&amp;gt;operational costs with no profit for HPCC&amp;lt;/u&amp;gt;&#039;&#039;&#039;. The recovered costs are calculated from actual documented operational expenses and are break-even for all CUNY users. The methodology used is approved by CUNY-RF and is the same methodology used in other CUNY research facilities. The costs are reviewed and, when necessary, updated twice a year. The cost recovery charging scheme is based on the &#039;&#039;&#039;&amp;lt;u&amp;gt;unit-hour&amp;lt;/u&amp;gt;&#039;&#039;&#039;. A unit can be either a CPU unit or a GPU unit. The definitions of these are given in the table below:&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Definitions of unit-hour&lt;br /&gt;
!Type of resource&lt;br /&gt;
!Unit-hour &lt;br /&gt;
!For V100, A30, A40 or L40&lt;br /&gt;
!For A100&lt;br /&gt;
|-&lt;br /&gt;
|CPU unit &lt;br /&gt;
|1 cpu core/hour&lt;br /&gt;
| --&lt;br /&gt;
| --&lt;br /&gt;
|-&lt;br /&gt;
|GPU unit &lt;br /&gt;
|(4 cpu cores + 1 GPU thread )/hour&lt;br /&gt;
|4 cpu cores + 1 GPU&lt;br /&gt;
|4 cpu cores and 1/7 A100 &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== HPCC access plans  ===&lt;br /&gt;
a.     &#039;&#039;&#039;Minimum access (MAP):&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Minimum access is designed to provide wide support for research activities in any college, to promote collaboration between colleges, to help establish new research projects, and/or to be a testbed for new studies. MAP accounts operate under a strict fair share policy, so the actual waiting time for a job in a queue depends on the resources used by that account in previous cycles. In addition, all jobs have strict time limitations; therefore long jobs must use check-points.&lt;br /&gt;
&lt;br /&gt;
The MAP has 3 tiers:&lt;br /&gt;
&lt;br /&gt;
·     A: The Basic tier fee is $5000 per year. It is designed to provide support for users from colleges with a low level of research activity. The fee covers the infrastructure expenses associated with 1-2 users from these colleges. &lt;br /&gt;
&lt;br /&gt;
·     B: The Medium tier fee is $15,000 per year. The fee covers the infrastructure expenses of up to 12 users from these colleges. In addition, every account under the medium tier receives 11520 free CPU hours and 1440 free GPU hours upon opening. &lt;br /&gt;
&lt;br /&gt;
·      C: The Advanced tier fee is $25,000 per year. The fee covers the infrastructure expenses for all users from these colleges. In addition, every new account from this tier receives 11520 free CPU hours and 1440 free GPU hours upon opening.  &lt;br /&gt;
&lt;br /&gt;
MAP users are charged per CPU/GPU hour at the low rate of &#039;&#039;&#039;&amp;lt;u&amp;gt;$0.015 per cpu hour and $0.09 per GPU hour&amp;lt;/u&amp;gt;&#039;&#039;&#039;.   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for MAP users&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.015/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.24/hour&lt;br /&gt;
|-&lt;br /&gt;
| 4 cores + 1 GPU &lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.15/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.33/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2  &lt;br /&gt;
|$0.42/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2  &lt;br /&gt;
|$0.66/hour&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$1.32/hour&lt;br /&gt;
|}&lt;br /&gt;
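&lt;br /&gt;
As the table shows, the hourly figures follow directly from the per-unit rates above; for example, a job using 16 cores and 1 GPU is charged 16 × $0.015 + 1 × $0.09 = $0.24 + $0.09 = $0.33 per hour.&lt;br /&gt;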
&lt;br /&gt;
&lt;br /&gt;
b.     &#039;&#039;&#039;Computing on demand (CODP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The computing on demand plan (CODP) is open to all users from all CUNY colleges that do not participate in the MAP plan but want to use the HPCC resources. CODP accounts operate under a strict fair share policy, so the actual waiting time for a job in a queue depends on the resources previously used. In addition, all jobs have time limitations, so long jobs must use check-points. Users in CODP are charged for the time (CPU and GPU) per hour. The current rates are &#039;&#039;&#039;$0.018 per cpu hour and $0.11 per GPU hour.&#039;&#039;&#039;  Unlike MAP, new CODP accounts do not come with free time. Invoices are generated and sent to users (PI only) at the end of each month.  The examples in the following table explain the fee structure:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for CODP plan&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.018/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.288/hour&lt;br /&gt;
|-&lt;br /&gt;
|4 cores + 1 GPU&lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.293/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.334/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 1 GPU&lt;br /&gt;
|32&lt;br /&gt;
|1  &lt;br /&gt;
|$0.666/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU &lt;br /&gt;
|32&lt;br /&gt;
|2&lt;br /&gt;
|$0.756/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
c.  &#039;&#039;&#039;Leasing node(s) (LNP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The leasing node plan allows users to lease node(s) for the duration of a project. The minimum lease time is 30 days (one month), but leases of any length are possible. A discount of 10% is given to users whose lease is longer than 90 days. Discounts cannot be combined.  Unlike MAP and CODP, LNP users do not compete for resources and have full access to the rented resources 24/7.   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Lease node(s) fees for MAP users&lt;br /&gt;
|Job (MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/30 days&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$172.80&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$264.96&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2  &lt;br /&gt;
|$302.40&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$475.20&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$760.0&lt;br /&gt;
|-&lt;br /&gt;
|64 cores + 8 GPU&lt;br /&gt;
|64&lt;br /&gt;
|8&lt;br /&gt;
|$950.40&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for leasing node(s) for &amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; non &amp;lt;/span&amp;gt;-MAP users&lt;br /&gt;
|Job (non-MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/month&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$249.82&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$497.64&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1 &lt;br /&gt;
|$443.23&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$886.64&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40 &lt;br /&gt;
|8&lt;br /&gt;
|$1399.68&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
d.     &#039;&#039;&#039;Condo Ownership (COP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Condo describes a model in which user(s) own a node/server managed by HPCC. Only full-time faculty can own a condo node. Condo nodes are fully integrated into the HPCC infrastructure. The owners pay only HPCC’s infrastructure support operational fee, which includes only a proportional part of the licenses and materials needed for day-to-day operations. The fees are reviewed twice a year and currently are $0.003 per CPU hour and $0.02 per GPU hour. Condo owners can “borrow” (upon agreement) free of charge any node(s) from the condo stack and can also lease (for a higher fee – see below) their own nodes to non-condo users. The minimum lease time is 30 days. The fees collected from non-condo users offset the payments of the owner.   &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Condo owners costs per year&lt;br /&gt;
|Type of condo node&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/year&lt;br /&gt;
|-&lt;br /&gt;
|Large hybrid SXM&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|$4518.92 &lt;br /&gt;
|-&lt;br /&gt;
|Small hybrid&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|$1540.54&lt;br /&gt;
|-&lt;br /&gt;
|Medium compute&lt;br /&gt;
|96&lt;br /&gt;
|0&lt;br /&gt;
|$2464.86&lt;br /&gt;
|-&lt;br /&gt;
|Large compute&lt;br /&gt;
|128&lt;br /&gt;
|0&lt;br /&gt;
|$3286.49&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Condo owners can contract their node(s) to other non-condo users. The renting period is unlimited, with a minimum length of 30 days. The table below shows the payments non-condo users make to the condo owners. These fees are accumulated in the owner’s account(s) and offset the owner’s dues. A discount of 10% is applied for leases longer than 90 days.    &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Type of nodes and lease fees for condo nodes&lt;br /&gt;
!Type of node&lt;br /&gt;
!Renters cost/month&lt;br /&gt;
!Long term (90+ days) rent cost/month&lt;br /&gt;
!CPU/node&lt;br /&gt;
!CPU type&lt;br /&gt;
!GPU/node &lt;br /&gt;
!GPU type&lt;br /&gt;
!GPU interface&lt;br /&gt;
|-&lt;br /&gt;
|Large Hybrid&lt;br /&gt;
|$602.52&lt;br /&gt;
|$564.86&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.2 GHz&lt;br /&gt;
|8&lt;br /&gt;
|A100/80&lt;br /&gt;
|SXM&lt;br /&gt;
|-&lt;br /&gt;
|Small Hybrid&lt;br /&gt;
|$205.41&lt;br /&gt;
|$192.57&lt;br /&gt;
|48&lt;br /&gt;
|EPYC, 2.8 GHz&lt;br /&gt;
|2&lt;br /&gt;
|A40, A30, L40&lt;br /&gt;
|PCIe v4&lt;br /&gt;
|-&lt;br /&gt;
|Medium Non GPU&lt;br /&gt;
|$328.65&lt;br /&gt;
|$308.11&lt;br /&gt;
|96&lt;br /&gt;
|EPYC, 4.11GHz&lt;br /&gt;
|0&lt;br /&gt;
|None&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|Large Non GPU&lt;br /&gt;
|$438.20&lt;br /&gt;
|$410.81&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.0 GHz&lt;br /&gt;
|0&lt;br /&gt;
|None &lt;br /&gt;
|NA&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Free time ===&lt;br /&gt;
Any new project &#039;&#039;&#039;from CUNY colleges/centers that participate in MAP-B or MAP-C&#039;&#039;&#039; is entitled to &#039;&#039;&#039;11520 free CPU hours and 1440 free GPU hours.&#039;&#039;&#039;  Users under &#039;&#039;&#039;MAP-A&#039;&#039;&#039; are not entitled to free time. The free compute hours are intended to help establish a project and thus are shared among all members of the project; they can be used either by the PI or by any number of the project&#039;s members. It is important to note that &amp;lt;u&amp;gt;free time is per project, not per user account, so any project can receive free time only once. External collaborators to CUNY are not normally eligible for free time.&amp;lt;/u&amp;gt; Additional hours beyond the free time are charged at MAP plan rates. &#039;&#039;&#039;&amp;lt;u&amp;gt;Please contact the CUNY-HPCC director for further details.&amp;lt;/u&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Support for research grants ==&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;u&amp;gt;All proposals dated January 1st, 2026 (&amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; 01/01/26 &amp;lt;/span&amp;gt;) or later&amp;lt;/u&amp;gt;&#039;&#039;&#039; that require computational resources &#039;&#039;&#039;&amp;lt;u&amp;gt;must include a budget for cost recovery fees at CUNY-HPCC.&amp;lt;/u&amp;gt;&#039;&#039;&#039;  For a project, the PI can choose one of the following options: &lt;br /&gt;
&lt;br /&gt;
* Lease node(s). This is a useful option for well-defined projects and for those with a high computational component requiring 100% availability of the computational resource. &lt;br /&gt;
* Use &amp;quot;on-demand&amp;quot; resources. This is a flexible option, well suited for experimental projects or for exploring new areas of study. The downside is that resources are shared among all users under the fair share policy, so immediate access to a resource cannot be guaranteed. &lt;br /&gt;
* Participate in the CONDO tier. This is the most beneficial option in terms of availability of resources and level of support. It best fits the focused research of a group or groups (e.g. materials science). &lt;br /&gt;
&lt;br /&gt;
In all cases the PI can use the appropriate rates listed above to establish a correct budget for the proposal. The PI should &#039;&#039;&#039;&amp;lt;u&amp;gt;contact the Director of CUNY-HPCC, Dr. Alexander Tzanov&amp;lt;/u&amp;gt;&#039;&#039;&#039; (alexander.tzanov@csi.cuny.edu), to discuss the project&#039;s computational requirements, including optimal and most economical computational workflows, suitable hardware, shared or owned resources, CUNY-HPCC support options, and any other matter concerning a correct and optimal computational budget for the proposal.    &lt;br /&gt;
&lt;br /&gt;
== Partitions and jobs ==&lt;br /&gt;
The only way to submit job(s) to the HPCC servers is through the SLURM batch system.  Any job, regardless of its type (interactive, batch, serial, parallel, etc.), must be submitted via SLURM, which allocates the requested resources on the proper server and starts the job(s) according to a predefined strict fair share policy. Computational resources (cpu-cores, memory, GPU) are organized in &#039;&#039;&#039;partitions&#039;&#039;&#039;. Users are granted permission to use one or another partition and the corresponding QOS key. The table below shows the partitions and their limitations (in progress); a minimal example submission script is given after the partition list that follows the table.&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Partition&lt;br /&gt;
!Max cores/job&lt;br /&gt;
!Max jobs/user&lt;br /&gt;
!Total cores/group&lt;br /&gt;
!Time limits&lt;br /&gt;
!Tier&lt;br /&gt;
!&lt;br /&gt;
!GPU types&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|partnsf&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|K20m, V100/16, A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partchem&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A100/80, A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partcfd&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partsym&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partasrc&lt;br /&gt;
|48&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabD&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|V100/16,A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabN&lt;br /&gt;
|384&lt;br /&gt;
|50&lt;br /&gt;
|384&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|None&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partphys&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|L40&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;partnsf&#039;&#039;&#039; is the main partition with assigned resources across all sub-servers. Users may submit sequential, thread parallel or distributed parallel jobs with or without GPU.&lt;br /&gt;
* &#039;&#039;&#039;partchem&#039;&#039;&#039;  is CONDO partition.  &lt;br /&gt;
* &#039;&#039;&#039;partphys&#039;&#039;&#039;   is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partsym&#039;&#039;&#039;    is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partasrc&#039;&#039;&#039;   is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partmatlabD&#039;&#039;&#039; partition allows running MATLAB&#039;s Distributed Parallel Server across the main cluster. &lt;br /&gt;
* &#039;&#039;&#039;partmatlabN&#039;&#039;&#039; partition provides access to the large MATLAB node with 384 cores and 11 TB of shared memory. It is useful for running parallel MATLAB jobs with the Parallel Toolbox.&lt;br /&gt;
* &#039;&#039;&#039;partdev&#039;&#039;&#039; is dedicated to development. All HPCC users have access to this partition, with assigned resources of one computational node with 16 cores, 64 GB of memory and 2 GPUs (K20m). This partition has a time limit of 4 hours.&lt;br /&gt;
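&lt;br /&gt;
As an illustration only, a minimal SLURM batch script for the &#039;&#039;&#039;partnsf&#039;&#039;&#039; partition might look like the sketch below. The QOS key, userid, directory and program names are placeholders that must be replaced with the values assigned to your account, and the exact GPU request syntax may differ on your system:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=myjob          # placeholder job name&lt;br /&gt;
#SBATCH --partition=partnsf       # partition described above&lt;br /&gt;
#SBATCH --qos=&amp;lt;your_qos&amp;gt;          # QOS key granted with your account (placeholder)&lt;br /&gt;
#SBATCH --ntasks=16               # 16 cpu cores&lt;br /&gt;
#SBATCH --gres=gpu:1              # request one GPU; omit this line for CPU-only jobs&lt;br /&gt;
#SBATCH --time=24:00:00           # must stay within the partition time limit&lt;br /&gt;
&lt;br /&gt;
cd /scratch/&amp;lt;userid&amp;gt;/myjob       # run from scratch space, not from /global/u&lt;br /&gt;
srun ./my_program                 # placeholder executable&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Such a script would be submitted with &#039;&#039;&#039;sbatch&#039;&#039;&#039; and the queue inspected with &#039;&#039;&#039;squeue&#039;&#039;&#039;.&lt;br /&gt;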
&lt;br /&gt;
== Hours of Operation ==&lt;br /&gt;
In order to maximize the use of resources, HPCC applies a &amp;quot;rolling&amp;quot; maintenance scheme across all systems. When downtime is needed, HPCC will notify all users a week or more in advance (unless an emergency situation occurs).  Typically, the fourth Tuesday morning of the month, from 8:00 AM to 12 PM, is reserved (but not always used) for scheduled maintenance.  Please plan accordingly.  Unplanned maintenance to remedy system-related problems may be scheduled as needed outside of the above-mentioned days. Reasonable attempts will be made to inform users running on those systems when such needs arise. Note that users are strongly encouraged to use checkpoints in their jobs.&lt;br /&gt;
&lt;br /&gt;
== User Support ==&lt;br /&gt;
Users are strongly encouraged to read this Wiki carefully before submitting ticket(s) for help. In particular, the sections on compiling and running parallel programs, and the section on the SLURM batch queueing system will give you the essential knowledge needed to use the CUNY HPCC systems.  We have strived to maintain the most uniform user applications environment possible across the Center&#039;s systems to ease the transfer of applications&lt;br /&gt;
and run scripts among them.  &lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY&lt;br /&gt;
community in parallel programming techniques, HPC computing architecture, and the essentials of using our&lt;br /&gt;
systems. Please follow our mailings on the subject and feel free to inquire about such courses.  We regularly schedule training visits and classes at the various CUNY campuses.  Please let us know if such a training visit is of interest.  In the past, topics have included an overview of parallel programming, GPU programming and architecture, using the evolutionary biology software at the HPC Center, the SLURM queueing system at the CUNY HPC Center, mixed GPU-MPI and OpenMP programming, etc.  Staff has also presented guest lectures at formal classes throughout the CUNY campuses.    &lt;br /&gt;
&lt;br /&gt;
If you have problems accessing your account and cannot login to the ticketing service, please send an email to:&lt;br /&gt;
&lt;br /&gt;
  [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu] &lt;br /&gt;
&lt;br /&gt;
== Warnings and modes of operation ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. hpchelp@csi.cuny.edu is for questions and account-help communication &#039;&#039;&#039;only&#039;&#039;&#039; and does not accept tickets unless the ticketing system is not operational. For tickets please use the ticketing system mentioned above. This ensures that the person on staff with the most appropriate skill set and job-related responsibility will respond to your questions. During the business week you should expect a response within 48 hours, quite often even a same-day response. During the weekend you may not get any response. &lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;&#039;E-mails to hpchelp@csi.cuny.edu must have a valid CUNY e-mail as reply address.&#039;&#039;&#039; Messages originated from public mailers (google, hotmail, etc) are filtered out.&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;&#039;Do not send questions to individual CUNY HPC Center staff members directly.&#039;&#039;&#039;  These will be returned to the sender with a polite request to submit a ticket or email the Helpline.  This applies to replies to initial questions as well.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff members are focused on providing high quality support to its user community, but compared&lt;br /&gt;
to other HPC Centers of similar size &#039;&#039;&#039;our staff is extremely lean&#039;&#039;&#039;.  Please make full use of the tools that we have provided (especially&lt;br /&gt;
the Wiki), and feel free to offer suggestions for improved service.  We hope and expect your experience in using&lt;br /&gt;
our systems will be predictably good and productive.&lt;br /&gt;
&lt;br /&gt;
== User Manual ==&lt;br /&gt;
&lt;br /&gt;
The old version of the user manual provides PBS, not SLURM, batch scripts as examples. CUNY-HPCC currently uses the SLURM scheduler, so users must rely only on the updated brief SLURM manual distributed with new accounts, or ask CUNY-HPCC for a copy of it.&lt;/div&gt;</summary>
		<author><name>Alex</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=972</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=972"/>
		<updated>2026-03-09T17:12:37Z</updated>

		<summary type="html">&lt;p&gt;Alex: /* Empire AI and CUNY-HPCC */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
[[Image:hpcc-panorama3.png]]&lt;br /&gt;
&lt;br /&gt;
== Mission of CUNY-HPCC ==&lt;br /&gt;
The City University of New York (CUNY) High Performance Computing Center (HPCC) is a core research and education facility for the University. It is located on the campus of the College of Staten Island, 2800 Victory Boulevard, Staten Island, New York 10314. The core mission of CUNY-HPCC is to advance education and boost scientific research and discovery at the University by managing state-of-the-art computing infrastructure and by providing comprehensive research support services including domain-specific expertise in various aspects of computationally intensive research. CUNY is a member of the Empire AI (EAI) consortium, so CUNY-HPCC is a stepping stone for CUNY researchers toward EAI advanced facilities.  &lt;br /&gt;
&lt;br /&gt;
== Empire AI and CUNY-HPCC ==&lt;br /&gt;
The Empire AI consortium includes the &#039;&#039;&#039;CUNY Graduate Center&#039;&#039;&#039;, Columbia University, Cornell University, Icahn School of Medicine, New York University, Rochester Institute of Technology, Rensselaer Polytechnic Institute, the State University of New York, University at Buffalo, and University of Rochester.  &#039;&#039;&#039;CUNY-HPCC provides support and maintains tickets for all CUNY users with an allocation on EAI.&#039;&#039;&#039;  In addition, CUNY-HPCC is a stepping stone for CUNY researchers, since it operates (on a smaller scale) architectures similar to EAI (including nodes with Hopper GPUs), namely the extended Alpha servers as well as the new Beta computers. The latter will consist of &#039;&#039;&#039;288 B200 GPUs&#039;&#039;&#039; and recently added RTX 6000 Pro nodes. The expected cost for &#039;&#039;&#039;EAI is $0.50 per unit (SU),&#039;&#039;&#039; which will provide CUNY PIs with a rate that is well below a typical AWS rate.  One SU corresponds to one hour of H100 compute, and one hour of B200 compute corresponds to two SU.  In comparison, the CUNY-HPCC recovery costs for public servers are &#039;&#039;&#039;$0.015 per CPU hour (1 unit) and $0.09 per GPU hour (6 units).&#039;&#039;&#039;  See the section &#039;&#039;&#039;HPCC access plans&#039;&#039;&#039; for further details.    &lt;br /&gt;
&lt;br /&gt;
== CUNY-HPCC services ==&lt;br /&gt;
CUNY-HPCC provides a professionally maintained, modern computational environment and architectures, along with advanced storage and fast interconnects. CUNY-HPCC:     &lt;br /&gt;
&lt;br /&gt;
:*Supports research computing at CUNY for faculty, their collaborators at other universities, and their public and private sector partners, as well as CUNY students and research staff.&lt;br /&gt;
:*Provides state-of-the-art computing resources and comprehensive research support services including expertise and full support for users with allocation on EMPIRE-AI. &lt;br /&gt;
:*Creates opportunities for the CUNY research community to develop new partnerships with the government and private sectors.&lt;br /&gt;
:*Leverages the HPCC capabilities to acquire additional research resources for its faculty and graduate students in existing and major new programs.&lt;br /&gt;
:*Maintains tickets for all CUNY users with allocation on EAI.&lt;br /&gt;
&lt;br /&gt;
==Organization of systems and data storage (architecture)==&lt;br /&gt;
&lt;br /&gt;
All user and project data are kept on the Parallel File System Storage (PFSS), which is mounted only on the login node(s) of all servers. It holds both the user directories and a specific partition called &#039;&#039;&#039;/scratch&#039;&#039;&#039; (see below). The main features of the &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition are: &#039;&#039;&#039;1.&#039;&#039;&#039; it is mounted on all computational nodes and on all login nodes; &#039;&#039;&#039;2.&#039;&#039;&#039; it is fast; &#039;&#039;&#039;3.&#039;&#039;&#039; it is temporary space and is &#039;&#039;&#039;not a home directory&#039;&#039;&#039; for accounts, nor can it be used for long-term data preservation.  Users must use the &amp;quot;staging&amp;quot; procedure described below to ensure preservation of their data, codes and parameter files. The figure below is a schematic of the environment.   &lt;br /&gt;
&lt;br /&gt;
Upon  registering with HPCC every user will get 2 directories:&lt;br /&gt;
&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – this is temporary workspace on the HPC systems. Currently scratch resides on the same file system as global/u.&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – space for “home directory”, i.e., storage space on the DSMS for program, scripts, and data&lt;br /&gt;
&lt;br /&gt;
:•	In some instances a user will also have use of disk space on the DSMS in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;projectid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; (IRods). &lt;br /&gt;
[[File:HPCC_structure.png|center|frameless|900x900px]]&lt;br /&gt;
The &#039;&#039;&#039;/global/u/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory has a quota (see below for details) while &#039;&#039;&#039;/scratch/&amp;lt;userid&amp;gt;&#039;&#039;&#039; does not. However, the &#039;&#039;&#039;/scratch&#039;&#039;&#039; space is cleaned up following the rules described below. There are no guarantees of any kind that files in &#039;&#039;&#039;/scratch&#039;&#039;&#039; will be preserved during hardware crashes or clean-ups.  Access to all HPCC resources is provided by the bastion host called &#039;&#039;&#039;chizen&#039;&#039;&#039;.  The Data Transfer Node called &#039;&#039;&#039;Cea&#039;&#039;&#039; allows file transfer from/to remote sites directly to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; or to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;.&lt;br /&gt;
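&lt;br /&gt;
A minimal staging sketch (assuming a typical command-line session; replace &amp;lt;userid&amp;gt; with your account name, and note that the directory and file names here are placeholders):&lt;br /&gt;
&lt;br /&gt;
  # stage input from the home area to the fast /scratch space&lt;br /&gt;
  cp -r /global/u/&amp;lt;userid&amp;gt;/myproject /scratch/&amp;lt;userid&amp;gt;/&lt;br /&gt;
  cd /scratch/&amp;lt;userid&amp;gt;/myproject&lt;br /&gt;
  # ... run the computation here, normally inside a SLURM job ...&lt;br /&gt;
  # stage results back, since /scratch is cleaned up periodically&lt;br /&gt;
  cp -r results /global/u/&amp;lt;userid&amp;gt;/myproject/&lt;br /&gt;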
&lt;br /&gt;
==HPC systems==&lt;br /&gt;
&lt;br /&gt;
The HPC Center operates a variety of architectures in order to support complex and demanding workflows.  All computational resources of different types are united into a single hybrid cluster called Arrow. The latter deploys symmetric multiprocessor (SMP) nodes with and without GPUs, a distributed shared memory (NUMA) node, fat (large memory) nodes and advanced SMP nodes with multiple GPUs. The number of GPUs per node varies between 2 and 8, as do the GPU interface and GPU family. Thus the basic GPU nodes hold two Tesla K20m cards (attached through a PCIe interface), while the most advanced ones support eight Ampere A100 GPUs connected via the SXM interface.   &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Overview of Computational architectures&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SMP&#039;&#039;&#039; servers have several processors (working under a single operating system) which &amp;quot;share everything&amp;quot;: all CPU cores access a common memory block via a shared bus or data path. SMP servers support all combinations of memory vs. CPU (up to the limits of the particular computer). SMP servers are commonly used to run sequential or thread-parallel (e.g. OpenMP) jobs, and they may or may not have GPUs.  &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;cluster&#039;&#039;&#039; is defined as a single system comprising a set of servers interconnected with a high-performance network. Specific software coordinates programs on and/or across these servers in order to perform computationally intensive tasks. The most common cluster type consists of several identical SMP servers connected via a fast interconnect.  Each SMP member of the cluster is called a &#039;&#039;&#039;node&#039;&#039;&#039;. All nodes run independent copies of the same operating system (OS). Some or all of the nodes may incorporate GPUs.  &lt;br /&gt;
&lt;br /&gt;
Hybrid clusters combine nodes of different architectures. For instance, the main CUNY-HPCC machine is a hybrid cluster called &#039;&#039;&#039;Arrow&#039;&#039;&#039;.  Sixty-two (62) of its nodes are identical GPU-enabled SMP servers, each with 2 K20m GPUs; 3 are SMP servers with extended memory (fat nodes); one node is a distributed shared memory node (NUMA, see below); and 2 are fat SMP servers specially designed to support 8 NVIDIA GPUs per node, connected via the SXM interface. In addition, HPCC operates the cluster &#039;&#039;&#039;Herbert&#039;&#039;&#039;, dedicated only to education. &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;distributed shared memory&#039;&#039;&#039; computer is a tightly coupled server in which the memory is physically distributed but logically unified into a single block. The system resembles an SMP, but the possible number of CPU cores and amount of memory are far beyond the limitations of SMP.  Because the memory is distributed, access times across the address space are non-uniform; thus this architecture is called Non-Uniform Memory Access (NUMA).  Similarly to SMP, &#039;&#039;&#039;NUMA&#039;&#039;&#039; systems are typically used for applications such as data mining and decision support systems, in which processing can be parceled out to a number of processors that collectively work on common data. HPCC operates the &#039;&#039;&#039;NUMA&#039;&#039;&#039; node of Arrow, named &#039;&#039;&#039;Appel&#039;&#039;&#039;.  This node does not have GPUs. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039; Infrastructure systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
o	The Master Head Node (&#039;&#039;&#039;MHN/Arrow&#039;&#039;&#039;) is a redundant login node from which all jobs on all servers start. This server is not directly accessible from outside the CSI campus. Note that the main server and its login nodes share the same name, Arrow; thus users can access the Arrow login nodes using the name Arrow or MHN.  &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Chizen&#039;&#039;&#039; is a redundant gateway server which provides access to the protected HPCC domain.&lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Cea&#039;&#039;&#039; is a file transfer node allowing transfer of files between users’ computers and the /scratch space or /global/u/&amp;lt;userid&amp;gt;. &#039;&#039;&#039;Cea&#039;&#039;&#039; is accessible directly (not only via &#039;&#039;&#039;Chizen&#039;&#039;&#039;), but allows only a limited set of shell commands.  &lt;br /&gt;
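&lt;br /&gt;
A minimal transfer sketch using scp through the DTN (the host name &amp;lt;cea-address&amp;gt; is a placeholder; use the address provided with your account documentation, and replace &amp;lt;userid&amp;gt; with your account name):&lt;br /&gt;
&lt;br /&gt;
  # push an input archive from your workstation to scratch via Cea&lt;br /&gt;
  scp mydata.tar.gz &amp;lt;userid&amp;gt;@&amp;lt;cea-address&amp;gt;:/scratch/&amp;lt;userid&amp;gt;/&lt;br /&gt;
  # pull results back from the home area&lt;br /&gt;
  scp &amp;lt;userid&amp;gt;@&amp;lt;cea-address&amp;gt;:/global/u/&amp;lt;userid&amp;gt;/results.tar.gz .&lt;br /&gt;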
&lt;br /&gt;
&#039;&#039;&#039;Table 1&#039;&#039;&#039; below provides a quick summary of the attributes of each of the sub-clusters of the main HPC Center machine, called Arrow.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Master Head Node&lt;br /&gt;
!Sub System&lt;br /&gt;
!Tier&lt;br /&gt;
!Type&lt;br /&gt;
!Type of Jobs&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores&lt;br /&gt;
!GPUs&lt;br /&gt;
!Mem/node&lt;br /&gt;
!Mem/core&lt;br /&gt;
!Chip Type&lt;br /&gt;
!GPU Type and Interface&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;17&amp;quot; |&#039;&#039;&#039;&amp;lt;big&amp;gt;Arrow&amp;lt;/big&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Penzias&lt;br /&gt;
| rowspan=&amp;quot;10&amp;quot; |Advanced&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Hybrid Cluster&lt;br /&gt;
|Sequential &amp;amp; Parallel jobs w/wo GPU&lt;br /&gt;
|66&lt;br /&gt;
|16 &lt;br /&gt;
|2&lt;br /&gt;
|64 GB&lt;br /&gt;
|4 GB&lt;br /&gt;
|SB, EP 2.20 GHz&lt;br /&gt;
|K20m GPU, PCIe v2&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Sequential &amp;amp; Parallel jobs&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |1&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|1500 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|36&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|32 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Appel&lt;br /&gt;
|NUMA&lt;br /&gt;
|Massive Parallel, sequential, OpenMP&lt;br /&gt;
|1&lt;br /&gt;
|384&lt;br /&gt;
| -&lt;br /&gt;
|11 TB&lt;br /&gt;
|28 GB&lt;br /&gt;
|IB, 3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Cryo&lt;br /&gt;
|SMP&lt;br /&gt;
|Sequential and Parallel jobs, with GPU&lt;br /&gt;
|1&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|1500 GB&lt;br /&gt;
|37 GB&lt;br /&gt;
|SL, 2.40 GHz&lt;br /&gt;
|V100 (32GB) GPU, SXM&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Blue Moon&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Hybrid Cluster&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Sequential and Parallel jobs w/wo GPU&lt;br /&gt;
|24&lt;br /&gt;
|32&lt;br /&gt;
| -&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |192 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |6 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SL, 2.10 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|32 &lt;br /&gt;
|2&lt;br /&gt;
|V100(16GB) GPU, PCIe&lt;br /&gt;
|-&lt;br /&gt;
|Karle&lt;br /&gt;
|SMP&lt;br /&gt;
|Visualization, MATLAB/Mathematica&lt;br /&gt;
|1&lt;br /&gt;
|36*&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Chizen&lt;br /&gt;
|Gateway&lt;br /&gt;
|No jobs allowed&lt;br /&gt;
| colspan=&amp;quot;7&amp;quot; | -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CFD&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot; |Parallel, Seq, OpenMP &lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|768 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 4.8 GHz&lt;br /&gt;
|A40, PCIe, v4 &lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |PHYS&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|640 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4 GHz&lt;br /&gt;
|L40, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CHEM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.0 GHz&lt;br /&gt;
|A100/40, SXM&lt;br /&gt;
|-&lt;br /&gt;
|ASRC&lt;br /&gt;
|Condo&lt;br /&gt;
|SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|}&lt;br /&gt;
Note: SB = Intel(R) Sandy Bridge, HL = Intel(R) Haswell, IB = Intel(R) Ivy Bridge, SL = Intel(R) Xeon(R) Gold, ER = AMD(R) EPYC Rome, EM = AMD(R) EPYC Milan, EG = AMD(R) EPYC Genoa   &lt;br /&gt;
&lt;br /&gt;
== Recovery of  operational costs ==&lt;br /&gt;
CUNY-HPCC is a not-for-profit core research facility at CUNY. Our mission is to support all types of research that require advanced computational resources. CUNY-HPCC operations are not for profit and are NOT directly or indirectly sponsored by CUNY or the College of Staten Island (CSI). Consequently, CUNY-HPCC applies a cost recovery model that recaptures only &#039;&#039;&#039;&amp;lt;u&amp;gt;operational costs, with no profit for HPCC&amp;lt;/u&amp;gt;&#039;&#039;&#039;. The recovered costs are calculated from actual documented operational expenses and are break-even for all CUNY users. The methodology follows the CUNY-RF-approved methodology used in other CUNY research facilities. The costs are reviewed and updated twice a year. The cost recovery charging scheme is based on the &#039;&#039;&#039;&amp;lt;u&amp;gt;unit-hour&amp;lt;/u&amp;gt;&#039;&#039;&#039;. A unit can be either a CPU unit or a GPU unit; the definitions are given in the table below:&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Definitions of unit-hour&lt;br /&gt;
!Type of resource&lt;br /&gt;
!Unit-hour &lt;br /&gt;
!For V100, A30, A40 or L40&lt;br /&gt;
!For A100&lt;br /&gt;
|-&lt;br /&gt;
|CPU unit &lt;br /&gt;
|1 cpu core/hour&lt;br /&gt;
| --&lt;br /&gt;
| --&lt;br /&gt;
|-&lt;br /&gt;
|GPU unit &lt;br /&gt;
|(4 cpu cores + 1 GPU thread )/hour&lt;br /&gt;
|4 cpu cores + 1 GPU&lt;br /&gt;
|4 cpu cores and 1/7 A100 &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== HPCC access plans  ===&lt;br /&gt;
a.     &#039;&#039;&#039;Minimum access (MAP):&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Minimum access is designed to provide wide support for research activities in any college, to promote collaboration between colleges, to help establish new research projects, and/or to serve as a testbed for new studies. MAP accounts operate under a strict fair-share policy, so the actual waiting time of a job in a queue depends on the resources used by that account in previous cycles. In addition, all jobs have strict time limits; therefore long jobs must use checkpoints.&lt;br /&gt;
&lt;br /&gt;
The MAP has 3 tiers:&lt;br /&gt;
&lt;br /&gt;
·     A: The basic tier fee is $5,000 per year. It is designed to support users from colleges with a low level of research activity. The fee covers the infrastructure expenses associated with 1-2 users from these colleges. &lt;br /&gt;
&lt;br /&gt;
·     B: The medium tier fee is $15,000 per year. The fee covers the infrastructure expenses of up to 12 users from these colleges. In addition, every account under the medium tier gets 11,520 free CPU hours and 1,440 free GPU hours upon opening. &lt;br /&gt;
&lt;br /&gt;
·     C: The advanced tier fee is $25,000 per year. The fee covers the infrastructure expenses for all users from these colleges. In addition, every new account from this tier gets 11,520 free CPU hours and 1,440 free GPU hours upon opening.  &lt;br /&gt;
&lt;br /&gt;
MAP users are charged per CPU/GPU hour at the low rate of &#039;&#039;&#039;&amp;lt;u&amp;gt;$0.015 per CPU hour and $0.09 per GPU hour&amp;lt;/u&amp;gt;.&#039;&#039;&#039;   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for MAP users&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.015/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.24/hour&lt;br /&gt;
|-&lt;br /&gt;
| 4 cores + 1 GPU &lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.15/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.33/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2  &lt;br /&gt;
|$0.42/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2  &lt;br /&gt;
|$0.66/hour&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$1.32/hour&lt;br /&gt;
|}&lt;br /&gt;
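As a worked example of how the hourly fees above are composed (assuming, based on the rows shown, that the rate is additive per CPU core and per GPU):&lt;br /&gt;
&lt;br /&gt;
  16 cores x $0.015/core-hour = $0.24/hour&lt;br /&gt;
   2 GPUs  x $0.09/GPU-hour   = $0.18/hour&lt;br /&gt;
   total                      = $0.42/hour, i.e. about $20.16 for a 48-hour run&lt;br /&gt;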
&lt;br /&gt;
&lt;br /&gt;
b.     &#039;&#039;&#039;Computing on demand (CODP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The computing on demand plan (CODP) is open to all users from CUNY colleges that do not participate in the MAP plan but want to use the HPCC resources. CODP accounts operate under a strict fair-share policy, so the actual waiting time of a job in a queue depends on the resources previously used. In addition, all jobs have time limits, so long jobs must use checkpoints. Users in CODP are charged for the time (CPU and GPU) per hour. The current rates are &#039;&#039;&#039;$0.018 per CPU hour and $0.11 per GPU hour.&#039;&#039;&#039;  Unlike MAP, new CODP accounts do not come with free time. Invoices are generated and sent to users (PIs only) at the end of each month.  The examples in the following table illustrate the fee structure:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for CODP plan&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.018/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.288/hour&lt;br /&gt;
|-&lt;br /&gt;
|4 cores + 1 GPU&lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.293/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.334/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 1 GPU&lt;br /&gt;
|32&lt;br /&gt;
|1  &lt;br /&gt;
|$0.666/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU &lt;br /&gt;
|32&lt;br /&gt;
|2&lt;br /&gt;
|$0.756/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
c.  &#039;&#039;&#039;Leasing node(s) (LNP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The leasing node plan allows users to lease node(s) for the duration of a project. The minimum lease time is 30 days (one month), but leases of any length are possible. A 10% discount is given to users whose lease is longer than 90 days; discounts cannot be combined.  Unlike MAP and CODP, LNP users do not compete for resources and have full access to the leased resources 24/7.   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Lease node(s) fees for MAP users&lt;br /&gt;
|Job (MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/30 days&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$172.80&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$264.96&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2  &lt;br /&gt;
|$302.40&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$475.20&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$760.0&lt;br /&gt;
|-&lt;br /&gt;
|64 cores + 8 GPU&lt;br /&gt;
|64&lt;br /&gt;
|8&lt;br /&gt;
|$950.40&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for leasing node(s) for &amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; non &amp;lt;/span&amp;gt;-MAP users&lt;br /&gt;
|Job (non-MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/month&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$249.82&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$497.64&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2 &lt;br /&gt;
|$443.23&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$886.64&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40 &lt;br /&gt;
|8&lt;br /&gt;
|$1399.68&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
d.     &#039;&#039;&#039;Condo Ownership (COP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Condo describes a model in which user(s) own a node/server managed by HPCC. Only full-time faculty can own a condo node. Condo nodes are fully integrated into the HPCC infrastructure. The owners pay only HPCC’s infrastructure-support operational fee, which includes only a proportional part of the licenses and materials needed for day-to-day operations. The fees are reviewed twice a year and are currently $0.003 per CPU hour and $0.02 per GPU hour. Condo owners can “borrow” (upon agreement), free of charge, any node(s) from the condo stack, and can also lease (for a higher fee – see below) their own nodes to non-condo users. The minimum lease time is 30 days. The fees collected from non-condo users offset the owner’s payments.   &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Condo owners costs per year&lt;br /&gt;
|Type of condo node&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/year&lt;br /&gt;
|-&lt;br /&gt;
|Large hybrid SXM&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|$4518.92 &lt;br /&gt;
|-&lt;br /&gt;
|Small hybrid&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|$1540.54&lt;br /&gt;
|-&lt;br /&gt;
|Medium compute&lt;br /&gt;
|96&lt;br /&gt;
|0&lt;br /&gt;
|$2464.86&lt;br /&gt;
|-&lt;br /&gt;
|Large compute&lt;br /&gt;
|128&lt;br /&gt;
|0&lt;br /&gt;
|$3286.49&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Condo owners can rent their node(s) to other, non-condo users. The renting period is unlimited, with a minimum length of 30 days. The table below shows the payments non-condo users make to the condo owners. These fees accumulate in the owner’s account(s) and offset the owner’s obligations. A 10% discount is applied to leases longer than 90 days.    &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Type of nodes and lease fees for condo nodes&lt;br /&gt;
!Type of node&lt;br /&gt;
!Renters cost/month&lt;br /&gt;
!Long term (90+ days) rent cost/month&lt;br /&gt;
!CPU/node&lt;br /&gt;
!CPU type&lt;br /&gt;
!GPU/node &lt;br /&gt;
!GPU type&lt;br /&gt;
!GPU interface&lt;br /&gt;
|-&lt;br /&gt;
|Large Hybrid&lt;br /&gt;
|$602.52&lt;br /&gt;
|$564.86&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.2 GHz&lt;br /&gt;
|8&lt;br /&gt;
|A100/80&lt;br /&gt;
|SXM&lt;br /&gt;
|-&lt;br /&gt;
|Small Hybrid&lt;br /&gt;
|$205.41&lt;br /&gt;
|$192.57&lt;br /&gt;
|48&lt;br /&gt;
|EPYC, 2.8 GHz&lt;br /&gt;
|2&lt;br /&gt;
|A40, A30, L40&lt;br /&gt;
|PCIe v4&lt;br /&gt;
|-&lt;br /&gt;
|Medium Non GPU&lt;br /&gt;
|$328.65&lt;br /&gt;
|$308.11&lt;br /&gt;
|96&lt;br /&gt;
|EPYC, 4.11GHz&lt;br /&gt;
|0&lt;br /&gt;
|None&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|Large Non GPU&lt;br /&gt;
|$438.20&lt;br /&gt;
|$410.81&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.0 GHz&lt;br /&gt;
|0&lt;br /&gt;
|None &lt;br /&gt;
|NA&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Free time ===&lt;br /&gt;
Any new project &#039;&#039;&#039;from CUNY colleges/centers that participate in MAP-B or MAP-C&#039;&#039;&#039; is entitled to &#039;&#039;&#039;11,520 free CPU hours and 1,440 free GPU hours.&#039;&#039;&#039;  Users under &#039;&#039;&#039;MAP-A&#039;&#039;&#039; are not entitled to free time. The free compute hours are intended to help establish a project and are therefore shared among all members of the project; they can be used either by the PI or by any number of project members. It is important to note that &amp;lt;u&amp;gt;free time is per project, not per user account, so any project can receive free time only once. External collaborators outside CUNY are not normally eligible for free time.&amp;lt;/u&amp;gt; Additional hours beyond the free time are charged at the MAP plan rates. &#039;&#039;&#039;&amp;lt;u&amp;gt;Please contact the CUNY-HPCC director for further details.&amp;lt;/u&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
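&lt;br /&gt;
For scale, a simple arithmetic illustration (not an additional entitlement): 11,520 CPU hours corresponds to running a 16-core job continuously for 30 days (16 x 24 x 30 = 11,520), and 1,440 GPU hours corresponds to 2 GPUs over the same period (2 x 24 x 30 = 1,440).&lt;br /&gt;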
&lt;br /&gt;
== Support for research grants ==&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;u&amp;gt;All proposals dated January 1st, 2026 (&amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; 01/01/26 &amp;lt;/span&amp;gt;) or later&amp;lt;/u&amp;gt;&#039;&#039;&#039; that require computational resources &#039;&#039;&#039;&amp;lt;u&amp;gt;must include a budget for cost recovery fees at CUNY-HPCC.&amp;lt;/u&amp;gt;&#039;&#039;&#039;  For a project, the PI can choose between: &lt;br /&gt;
&lt;br /&gt;
* leasing node(s). This is a useful option for well-defined projects and those with a large computational component requiring 100% availability of the computational resource. &lt;br /&gt;
* using &amp;quot;on-demand&amp;quot; resources. This is a flexible option, good for experimental projects or for exploring new areas of study. The drawback is that resources are shared among all users under the fair-share policy, so immediate access to a resource cannot be guaranteed. &lt;br /&gt;
* participating in the CONDO tier. This is the most beneficial option in terms of availability of resources and level of support. It best fits the focused research of a group or groups (e.g. materials science). &lt;br /&gt;
&lt;br /&gt;
In all cases the PI can use the appropriate rates listed above to establish a correct budget for the proposal (a simple example is sketched below).  The PI should &#039;&#039;&#039;&amp;lt;u&amp;gt;contact the Director of CUNY-HPCC, Dr. Alexander Tzanov&amp;lt;/u&amp;gt;&#039;&#039;&#039; (alexander.tzanov@csi.cuny.edu) to discuss the project&#039;s computational requirements, including optimal and most economical computational workflows, suitable hardware, shared or owned resources, CUNY-HPCC support options, and any other matter concerning a correct and optimal computational budget for the proposal.    &lt;br /&gt;
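&lt;br /&gt;
A hypothetical budgeting sketch using the on-demand (CODP) rates quoted above (the usage figures are illustrative placeholders only):&lt;br /&gt;
&lt;br /&gt;
  50,000 CPU hours x $0.018/hour = $900&lt;br /&gt;
   2,000 GPU hours x $0.11/hour  = $220&lt;br /&gt;
   estimated yearly compute cost = $1,120&lt;br /&gt;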
&lt;br /&gt;
== Partitions and jobs ==&lt;br /&gt;
The only way to submit job(s) to the HPCC servers is through the SLURM batch system.  Any job, regardless of its type (interactive, batch, serial, parallel, etc.), must be submitted via SLURM. SLURM allocates the requested resources on the proper server and starts the job(s) according to a predefined strict fair-share policy. Computational resources (CPU cores, memory, GPUs) are organized in &#039;&#039;&#039;partitions&#039;&#039;&#039;. Users are granted permission to use one or another partition and the corresponding QOS key.  The table below shows the partitions and their limitations (in progress); a minimal batch script example is given after the partition list below.&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Partition&lt;br /&gt;
!Max cores/job&lt;br /&gt;
!Max jobs/user&lt;br /&gt;
!Total cores/group&lt;br /&gt;
!Time limits&lt;br /&gt;
!Tier&lt;br /&gt;
!&lt;br /&gt;
!GPU types&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|partnsf&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|K20m, V100/16, A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partchem&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A100/80, A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partcfd&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partsym&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partasrc&lt;br /&gt;
|48&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabD&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|V100/16,A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabN&lt;br /&gt;
|384&lt;br /&gt;
|50&lt;br /&gt;
|384&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|None&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partphys&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|L40&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;partnsf&#039;&#039;&#039; is the main partition with assigned resources across all sub-servers. Users may submit sequential, thread parallel or distributed parallel jobs with or without GPU.&lt;br /&gt;
* &#039;&#039;&#039;partchem&#039;&#039;&#039;  is CONDO partition.  &lt;br /&gt;
* &#039;&#039;&#039;partphys&#039;&#039;&#039;   is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partsym&#039;&#039;&#039;    is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partasrc&#039;&#039;&#039;   is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partmatlabD&#039;&#039;&#039; partition allows running MATLAB&#039;s Distributed Parallel Server across the main cluster. &lt;br /&gt;
* &#039;&#039;&#039;partmatlabN&#039;&#039;&#039; partition provides access to the large MATLAB node with 384 cores and 11 TB of shared memory. It is useful for running parallel MATLAB jobs with the Parallel Toolbox.&lt;br /&gt;
* &#039;&#039;&#039;partdev&#039;&#039;&#039; is dedicated to development. All HPCC users have access to this partition, whose assigned resources are one computational node with 16 cores, 64 GB of memory and 2 GPUs (K20m). This partition has a time limit of 4 hours.&lt;br /&gt;
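&lt;br /&gt;
A minimal SLURM batch script sketch (the partition, QOS key, resource counts and program name below are illustrative placeholders; substitute the values granted to your account):&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash&lt;br /&gt;
  #SBATCH --job-name=myjob&lt;br /&gt;
  #SBATCH --partition=partnsf        # one of the partitions listed above&lt;br /&gt;
  #SBATCH --qos=&amp;lt;your_qos_key&amp;gt;       # QOS key granted with your account&lt;br /&gt;
  #SBATCH --ntasks=16                # number of CPU cores requested&lt;br /&gt;
  #SBATCH --gres=gpu:1               # omit this line for a CPU-only job&lt;br /&gt;
  #SBATCH --time=24:00:00            # must stay within the partition time limit&lt;br /&gt;
  cd /scratch/&amp;lt;userid&amp;gt;/myproject     # run from the fast scratch space&lt;br /&gt;
  srun ./my_program                  # launch the (possibly MPI) program&lt;br /&gt;
&lt;br /&gt;
The script is then submitted with, e.g., &amp;quot;sbatch myjob.slurm&amp;quot; from the Arrow login node.&lt;br /&gt;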
&lt;br /&gt;
== Hours of Operation ==&lt;br /&gt;
In order to maximize the use of resources, HPCC applies a “rolling” maintenance scheme across all systems. When downtime is needed, HPCC will notify all users a week or more in advance (unless an emergency occurs).  The fourth Tuesday morning of the month, from 8:00 AM to 12 PM, is normally reserved (but not always used) for scheduled maintenance; please plan accordingly.  Unplanned maintenance to remedy system-related problems may be scheduled as needed outside the above-mentioned days. Reasonable attempts will be made to inform users running on those systems when such needs arise. Note that users are strongly encouraged to use checkpoints in their jobs.&lt;br /&gt;
&lt;br /&gt;
== User Support ==&lt;br /&gt;
Users are strongly encouraged to read this Wiki carefully before submitting ticket(s) for help. In particular, the sections on compiling and running parallel programs, and the section on the SLURM batch queueing system will give you the essential knowledge needed to use the CUNY HPCC systems.  We have strived to maintain the most uniform user applications environment possible across the Center&#039;s systems to ease the transfer of applications&lt;br /&gt;
and run scripts among them.  &lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY&lt;br /&gt;
community in parallel programming techniques, HPC computing architecture, and the essentials of using our&lt;br /&gt;
systems. Please follow our mailings on the subject and feel free to inquire about such courses. We regularly schedule training visits and classes at the various CUNY campuses; please let us know if such a visit would be of interest. Past topics have included an overview of parallel programming, GPU programming and architecture, using the evolutionary biology software at the HPC Center, the SLURM queueing system at the CUNY HPC Center, mixed GPU-MPI and OpenMP programming, and more. Staff members have also presented guest lectures in formal classes across the CUNY campuses.    &lt;br /&gt;
&lt;br /&gt;
If you have problems accessing your account and cannot login to the ticketing service, please send an email to:&lt;br /&gt;
&lt;br /&gt;
  [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu] &lt;br /&gt;
&lt;br /&gt;
== Warnings and modes of operation ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. hpchelp@csi.cuny.edu is for questions and account-help communication &#039;&#039;&#039;only&#039;&#039;&#039; and does not accept tickets unless the ticketing system is not operational. For tickets please use the ticketing system mentioned above; this ensures that the staff member with the most appropriate skill set and job-related responsibility will respond to your questions. During the business week you should expect a response within 48 hours, quite often even the same day. Over the weekend you may not get any response. &lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;&#039;E-mails to hpchelp@csi.cuny.edu must have a valid CUNY e-mail address as the reply address.&#039;&#039;&#039; Messages originating from public mail providers (Google, Hotmail, etc.) are filtered out.&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;&#039;Do not send questions to individual CUNY HPC Center staff members directly.&#039;&#039;&#039;  These will be returned to the sender with a polite request to submit a ticket or email the Helpline.  This applies to replies to initial questions as well.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff is focused on providing high-quality support to the user community, but compared&lt;br /&gt;
to other HPC centers of similar size &#039;&#039;&#039;our staff is extremely lean&#039;&#039;&#039;.  Please make full use of the tools that we have provided (especially&lt;br /&gt;
the Wiki), and feel free to offer suggestions for improved service.  We hope and expect that your experience in using&lt;br /&gt;
our systems will be predictably good and productive.&lt;br /&gt;
&lt;br /&gt;
== User Manual ==&lt;br /&gt;
&lt;br /&gt;
The old version of the user manual provides PBS, not SLURM, batch scripts as examples. CUNY-HPCC currently uses the SLURM scheduler, so users must rely only on the updated brief SLURM manual distributed with new accounts, or ask CUNY-HPCC for a copy.&lt;/div&gt;</summary>
		<author><name>Alex</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=971</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=971"/>
		<updated>2026-03-09T17:12:07Z</updated>

		<summary type="html">&lt;p&gt;Alex: /* Empire AI and CUNY-HPCC */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
[[Image:hpcc-panorama3.png]]&lt;br /&gt;
&lt;br /&gt;
== Mission of CUNY-HPCC ==&lt;br /&gt;
The City University of New York (CUNY) High Performance Computing Center (HPCC) is a core research and education facility for the University. It is located on the campus of the College of Staten Island, 2800 Victory Boulevard, Staten Island, New York 10314. The core mission of CUNY-HPCC is to advance education and boost scientific research and discovery at the University by managing state-of-the-art computing infrastructure and by providing comprehensive research support services, including domain-specific expertise in various aspects of computationally intensive research. CUNY is a member of the Empire AI (EAI) consortium, so CUNY-HPCC serves as a stepping stone for CUNY researchers toward the EAI advanced facilities.  &lt;br /&gt;
&lt;br /&gt;
== Empire AI and CUNY-HPCC ==&lt;br /&gt;
The Empire AI consortium includes the &#039;&#039;&#039;CUNY Graduate Center&#039;&#039;&#039;, Columbia University, Cornell University, Icahn School of Medicine, New York University, Rochester Institute of Technology, Rensselaer Polytechnic Institute, the State University of New York, University at Buffalo, and University of Rochester.  &#039;&#039;&#039;CUNY-HPCC provides support and maintains tickets for all CUNY users with an allocation on EAI.&#039;&#039;&#039;  In addition, CUNY-HPCC is a stepping stone for CUNY researchers, since it operates (on a smaller scale) architectures similar to EAI (including nodes with Hopper GPUs), namely the extended Alpha servers as well as the new Beta computers. The latter will consist of &#039;&#039;&#039;288 B200 GPUs&#039;&#039;&#039; and recently added RTX 6000 Pro nodes. The expected cost for &#039;&#039;&#039;EAI is $0.50 per unit (SU),&#039;&#039;&#039; which will provide CUNY PIs with a rate that is well below a typical AWS rate.  One SU corresponds to one hour of H100 compute, and one hour of B200 compute corresponds to two SU.  In comparison, the CUNY-HPCC recovery costs for public servers are &#039;&#039;&#039;$0.015 per CPU hour (1 unit) and $0.09 per GPU hour (6 units).  See the section HPCC access plans for further details.&#039;&#039;&#039;    &lt;br /&gt;
&lt;br /&gt;
== CUNY-HPCC services ==&lt;br /&gt;
CUNY-HPCC provides a professionally maintained, modern computational environment and architectures, along with advanced storage and fast interconnects. CUNY-HPCC:     &lt;br /&gt;
&lt;br /&gt;
:*Supports research computing at CUNY for faculty, their collaborators at other universities, and their public and private sector partners, as well as CUNY students and research staff.&lt;br /&gt;
:*Provides state-of-the-art computing resources and comprehensive research support services including expertise and full support for users with allocation on EMPIRE-AI. &lt;br /&gt;
:*Creates opportunities for the CUNY research community to develop new partnerships with the government and private sectors.&lt;br /&gt;
:*Leverages the HPCC capabilities to acquire additional research resources for its faculty and graduate students in existing and major new programs.&lt;br /&gt;
:*Maintains tickets for all CUNY users with allocation on EAI.&lt;br /&gt;
&lt;br /&gt;
==Organization of systems and data storage (architecture)==&lt;br /&gt;
&lt;br /&gt;
All user and project data are kept on the Parallel File System Storage (PFSS), which is mounted only on the login node(s) of all servers. It holds both the user directories and a specific partition called &#039;&#039;&#039;/scratch&#039;&#039;&#039; (see below). The main features of the &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition are: &#039;&#039;&#039;1.&#039;&#039;&#039; it is mounted on all computational nodes and on all login nodes; &#039;&#039;&#039;2.&#039;&#039;&#039; it is fast; &#039;&#039;&#039;3.&#039;&#039;&#039; it is temporary space and is &#039;&#039;&#039;not a home directory&#039;&#039;&#039; for accounts, nor can it be used for long-term data preservation.  Users must use the &amp;quot;staging&amp;quot; procedure described below to ensure preservation of their data, codes and parameter files. The figure below is a schematic of the environment.   &lt;br /&gt;
&lt;br /&gt;
Upon  registering with HPCC every user will get 2 directories:&lt;br /&gt;
&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – this is temporary workspace on the HPC systems. Currently scratch resides on the same file system as global/u.&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – space for “home directory”, i.e., storage space on the DSMS for program, scripts, and data&lt;br /&gt;
&lt;br /&gt;
:•	In some instances a user will also have use of disk space on the DSMS in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;projectid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; (IRods). &lt;br /&gt;
[[File:HPCC_structure.png|center|frameless|900x900px]]&lt;br /&gt;
The &#039;&#039;&#039;/global/u/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory has a quota (see below for details) while &#039;&#039;&#039;/scratch/&amp;lt;userid&amp;gt;&#039;&#039;&#039; does not. However, the &#039;&#039;&#039;/scratch&#039;&#039;&#039; space is cleaned up following the rules described below. There are no guarantees of any kind that files in &#039;&#039;&#039;/scratch&#039;&#039;&#039; will be preserved during hardware crashes or clean-ups.  Access to all HPCC resources is provided by the bastion host called &#039;&#039;&#039;chizen&#039;&#039;&#039;.  The Data Transfer Node called &#039;&#039;&#039;Cea&#039;&#039;&#039; allows file transfer from/to remote sites directly to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; or to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
==HPC systems==&lt;br /&gt;
&lt;br /&gt;
The HPC Center operates a variety of architectures in order to support complex and demanding workflows.  All computational resources of different types are united into a single hybrid cluster called Arrow. The latter deploys symmetric multiprocessor (SMP) nodes with and without GPUs, a distributed shared memory (NUMA) node, fat (large memory) nodes and advanced SMP nodes with multiple GPUs. The number of GPUs per node varies between 2 and 8, as do the GPU interface and GPU family. Thus the basic GPU nodes hold two Tesla K20m cards (attached through a PCIe interface), while the most advanced ones support eight Ampere A100 GPUs connected via the SXM interface.   &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Overview of Computational architectures&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SMP&#039;&#039;&#039; servers have several processors (working under a single operating system) which &amp;quot;share everything&amp;quot;: all CPU cores access a common memory block via a shared bus or data path. SMP servers support all combinations of memory vs. CPU (up to the limits of the particular computer). SMP servers are commonly used to run sequential or thread-parallel (e.g. OpenMP) jobs, and they may or may not have GPUs.  &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;cluster&#039;&#039;&#039; is defined as a single system comprising a set of servers interconnected with a high-performance network. Specific software coordinates programs on and/or across these servers in order to perform computationally intensive tasks. The most common cluster type consists of several identical SMP servers connected via a fast interconnect.  Each SMP member of the cluster is called a &#039;&#039;&#039;node&#039;&#039;&#039;. All nodes run independent copies of the same operating system (OS). Some or all of the nodes may incorporate GPUs.  &lt;br /&gt;
&lt;br /&gt;
Hybrid clusters combine nodes of different architectures. For instance, the main CUNY-HPCC machine is a hybrid cluster called &#039;&#039;&#039;Arrow&#039;&#039;&#039;.  Sixty-two (62) of its nodes are identical GPU-enabled SMP servers, each with 2 K20m GPUs; 3 are SMP servers with extended memory (fat nodes); one node is a distributed shared memory node (NUMA, see below); and 2 are fat SMP servers specially designed to support 8 NVIDIA GPUs per node, connected via the SXM interface. In addition, HPCC operates the cluster &#039;&#039;&#039;Herbert&#039;&#039;&#039;, dedicated only to education. &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;distributed shared memory&#039;&#039;&#039; computer is a tightly coupled server in which the memory is physically distributed but logically unified into a single block. The system resembles an SMP, but the possible number of CPU cores and amount of memory are far beyond the limitations of SMP.  Because the memory is distributed, access times across the address space are non-uniform; thus this architecture is called Non-Uniform Memory Access (NUMA).  Similarly to SMP, &#039;&#039;&#039;NUMA&#039;&#039;&#039; systems are typically used for applications such as data mining and decision support systems, in which processing can be parceled out to a number of processors that collectively work on common data. HPCC operates the &#039;&#039;&#039;NUMA&#039;&#039;&#039; node of Arrow, named &#039;&#039;&#039;Appel&#039;&#039;&#039;.  This node does not have GPUs. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039; Infrastructure systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
o	The Master Head Node (&#039;&#039;&#039;MHN/Arrow&#039;&#039;&#039;) is a redundant login node from which all jobs on all servers start. This server is not directly accessible from outside the CSI campus. Note that the main server and its login nodes share the same name, Arrow; thus users can access the Arrow login nodes using the name Arrow or MHN.  &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Chizen&#039;&#039;&#039; is a redundant gateway server which provides access to the protected HPCC domain.&lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Cea&#039;&#039;&#039; is a file transfer node allowing transfer of files between users’ computers and the /scratch space or /global/u/&amp;lt;userid&amp;gt;. &#039;&#039;&#039;Cea&#039;&#039;&#039; is accessible directly (not only via &#039;&#039;&#039;Chizen&#039;&#039;&#039;), but allows only a limited set of shell commands.  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Table 1&#039;&#039;&#039; below provides a quick summary of the attributes of each of the sub-clusters of the main HPC Center machine, called Arrow.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Master Head Node&lt;br /&gt;
!Sub System&lt;br /&gt;
!Tier&lt;br /&gt;
!Type&lt;br /&gt;
!Type of Jobs&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores&lt;br /&gt;
!GPUs&lt;br /&gt;
!Mem/node&lt;br /&gt;
!Mem/core&lt;br /&gt;
!Chip Type&lt;br /&gt;
!GPU Type and Interface&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;17&amp;quot; |&#039;&#039;&#039;&amp;lt;big&amp;gt;Arrow&amp;lt;/big&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Penzias&lt;br /&gt;
| rowspan=&amp;quot;10&amp;quot; |Advanced&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Hybrid Cluster&lt;br /&gt;
|Sequential &amp;amp; Parallel jobs w/wo GPU&lt;br /&gt;
|66&lt;br /&gt;
|16 &lt;br /&gt;
|2&lt;br /&gt;
|64 GB&lt;br /&gt;
|4 GB&lt;br /&gt;
|SB, EP 2.20 GHz&lt;br /&gt;
|K20m GPU, PCIe v2&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Sequential &amp;amp; Parallel jobs&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |1&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|1500 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|36&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|32 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Appel&lt;br /&gt;
|NUMA&lt;br /&gt;
|Massive Parallel, sequential, OpenMP&lt;br /&gt;
|1&lt;br /&gt;
|384&lt;br /&gt;
| -&lt;br /&gt;
|11 TB&lt;br /&gt;
|28 GB&lt;br /&gt;
|IB, 3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Cryo&lt;br /&gt;
|SMP&lt;br /&gt;
|Sequential and Parallel jobs, with GPU&lt;br /&gt;
|1&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|1500 GB&lt;br /&gt;
|37 GB&lt;br /&gt;
|SL, 2.40 GHz&lt;br /&gt;
|V100 (32GB) GPU, SXM&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Blue Moon&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Hybrid Cluster&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Sequential and Parallel jobs w/wo GPU&lt;br /&gt;
|24&lt;br /&gt;
|32&lt;br /&gt;
| -&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |192 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |6 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SL, 2.10 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|32 &lt;br /&gt;
|2&lt;br /&gt;
|V100(16GB) GPU, PCIe&lt;br /&gt;
|-&lt;br /&gt;
|Karle&lt;br /&gt;
|SMP&lt;br /&gt;
|Visualization, MATLAB/Mathematica&lt;br /&gt;
|1&lt;br /&gt;
|36*&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Chizen&lt;br /&gt;
|Gateway&lt;br /&gt;
|No jobs allowed&lt;br /&gt;
| colspan=&amp;quot;7&amp;quot; | -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CFD&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot; |Parallel, Seq, OpenMP &lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|768 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 4.8 GHz&lt;br /&gt;
|A40, PCIe, v4 &lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |PHYS&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|640 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4 GHz&lt;br /&gt;
|L40, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CHEM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.0 GHz&lt;br /&gt;
|A100/40, SXM&lt;br /&gt;
|-&lt;br /&gt;
|ASRC&lt;br /&gt;
|Condo&lt;br /&gt;
|SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|}&lt;br /&gt;
Note: SB = Intel(R) Sandy Bridge, HL = Intel(R) Haswell, IB = Intel(R) Ivy Bridge, SL = Intel(R) Xeon(R) Gold, ER = AMD(R) EPYC Rome, EM = AMD(R) EPYC Milan, EG = AMD(R) EPYC Genoa   &lt;br /&gt;
&lt;br /&gt;
== Recovery of  operational costs ==&lt;br /&gt;
CUNY-HPCC is a not-for-profit core research facility at CUNY. Our mission is to support all types of research that require advanced computational resources. CUNY-HPCC operations are not for profit and are NOT directly or indirectly sponsored by CUNY or the College of Staten Island (CSI). Consequently, CUNY-HPCC applies a cost recovery model that recaptures only &#039;&#039;&#039;&amp;lt;u&amp;gt;operational costs, with no profit for HPCC&amp;lt;/u&amp;gt;&#039;&#039;&#039;. The recovered costs are calculated from actual documented operational expenses and are break-even for all CUNY users. The methodology follows the CUNY-RF-approved methodology used in other CUNY research facilities. The costs are reviewed and updated twice a year. The cost recovery charging scheme is based on the &#039;&#039;&#039;&amp;lt;u&amp;gt;unit-hour&amp;lt;/u&amp;gt;&#039;&#039;&#039;. A unit can be either a CPU unit or a GPU unit; the definitions are given in the table below:&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Definitions of unit-hour&lt;br /&gt;
!Type of resource&lt;br /&gt;
!Unit-hour &lt;br /&gt;
!For V100, A30, A40 or L40&lt;br /&gt;
!For A100&lt;br /&gt;
|-&lt;br /&gt;
|CPU unit &lt;br /&gt;
|1 cpu core/hour&lt;br /&gt;
| --&lt;br /&gt;
| --&lt;br /&gt;
|-&lt;br /&gt;
|GPU unit &lt;br /&gt;
|(4 cpu cores + 1 GPU thread )/hour&lt;br /&gt;
|4 cpu cores + 1 GPU&lt;br /&gt;
|4 cpu cores and 1/7 A100 &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== HPCC access plans  ===&lt;br /&gt;
a.     &#039;&#039;&#039;Minimum access (MAP):&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Minimum access is designed to provide wide support for research activities in any college, to promote collaboration between colleges, to help establish new research projects, and/or to serve as a testbed for new studies. MAP accounts operate under a strict fair-share policy, so the actual waiting time of a job in a queue depends on the resources used by that account in previous cycles. In addition, all jobs have strict time limits; therefore long jobs must use checkpoints.&lt;br /&gt;
&lt;br /&gt;
The MAP has 3 tiers:&lt;br /&gt;
&lt;br /&gt;
·     A: The basic tier fee is $5,000 per year. It is designed to support users from colleges with a low level of research activity. The fee covers the infrastructure expenses associated with 1-2 users from these colleges. &lt;br /&gt;
&lt;br /&gt;
·     B: The medium tier fee is $15,000 per year. The fee covers the infrastructure expenses of up to 12 users from these colleges. In addition, every account under the medium tier gets 11,520 free CPU hours and 1,440 free GPU hours upon opening. &lt;br /&gt;
&lt;br /&gt;
·     C: The advanced tier fee is $25,000 per year. The fee covers the infrastructure expenses for all users from these colleges. In addition, every new account from this tier gets 11,520 free CPU hours and 1,440 free GPU hours upon opening.  &lt;br /&gt;
&lt;br /&gt;
MAP users are charged per CPU/GPU hour at the low rate of &#039;&#039;&#039;&amp;lt;u&amp;gt;$0.015 per CPU hour and $0.09 per GPU hour&amp;lt;/u&amp;gt;.&#039;&#039;&#039;   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for MAP users&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.015/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.24/hour&lt;br /&gt;
|-&lt;br /&gt;
| 4 cores + 1 GPU &lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.15/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.33/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2  &lt;br /&gt;
|$0.42/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2  &lt;br /&gt;
|$0.66/hour&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$1.32/hour&lt;br /&gt;
|}&lt;br /&gt;
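In the examples above, the hourly charge works out to each requested CPU core billed at $0.015 per hour plus each GPU billed at an additional $0.09 per hour. For instance, a job using 16 cores and 2 GPUs is charged 16 × $0.015 + 2 × $0.09 = $0.24 + $0.18 = $0.42 per hour, matching the table.&lt;br /&gt;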
&lt;br /&gt;
&lt;br /&gt;
b.     &#039;&#039;&#039;Computing on demand (CODP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The Computing on Demand Plan (CODP) is open to users from all CUNY colleges that do not participate in the MAP plan but want to use HPCC resources. CODP accounts operate under a strict fair-share policy, so the actual waiting time for a job in a queue depends on the resources previously used. In addition, all jobs have time limits, so long jobs must use checkpoints. CODP users are charged for time (CPU and GPU) per hour. The current rates are &#039;&#039;&#039;$0.018 per CPU hour and $0.11 per GPU hour.&#039;&#039;&#039; Unlike MAP, new CODP accounts do not come with free time. Invoices are generated and sent to users (PI only) at the end of each month. The examples in the following table explain the fee structure:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for CODP plan&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.018/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.288/hour&lt;br /&gt;
|-&lt;br /&gt;
|4 cores + 1 GPU&lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.293/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.334/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 1 GPU&lt;br /&gt;
|32&lt;br /&gt;
|1  &lt;br /&gt;
|$0.666/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU &lt;br /&gt;
|32&lt;br /&gt;
|2&lt;br /&gt;
|$0.756/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
c.  &#039;&#039;&#039;Leasing node(s) (LNP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The Leasing Node Plan allows users to lease node(s) for the duration of a project. The minimum lease time is 30 days (one month), but leases of any length are possible. A discount of 10% is given to users whose lease is longer than 90 days. Discounts cannot be combined. Unlike MAP and CODP, LNP users do not compete for resources and have full access to the leased resources 24/7.   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Lease node(s) fees for MAP users&lt;br /&gt;
|Job (MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/30 days&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$172.80&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$264.96&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2&lt;br /&gt;
|$302.40&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$475.20&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$760.0&lt;br /&gt;
|-&lt;br /&gt;
|64 cores + 8 GPU&lt;br /&gt;
|64&lt;br /&gt;
|8&lt;br /&gt;
|$950.40&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Node lease fees for &amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt;non&amp;lt;/span&amp;gt;-MAP users&lt;br /&gt;
|Job (non-MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/month&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$249.82&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$497.64&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2 &lt;br /&gt;
|$443.23&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$886.64&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40 &lt;br /&gt;
|8&lt;br /&gt;
|$1399.68&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
d.     &#039;&#039;&#039;Condo Ownership (COP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Condo describes a model in which user(s) own a node/server managed by HPCC. Only full-time faculty can own a condo node. Condo nodes are fully integrated into the HPCC infrastructure. The owners pay only HPCC’s infrastructure-support operational fee, which includes only a proportional part of the licenses and materials needed for day-to-day operations. The fees are reviewed twice a year and are currently $0.003 per CPU hour and $0.02 per GPU hour. Condo owners can “borrow” (upon agreement) free of charge any node(s) from the condo stack and can also lease (for a higher fee – see below) their own nodes to non-condo users. The minimum lease period is 30 days. The fees collected from non-condo users offset the owner’s payments.   &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Condo owners costs per year&lt;br /&gt;
|Type of condo node&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/year&lt;br /&gt;
|-&lt;br /&gt;
|Large hybrid SXM&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|$4518.92 &lt;br /&gt;
|-&lt;br /&gt;
|Small hybrid&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|$1540.54&lt;br /&gt;
|-&lt;br /&gt;
|Medium compute&lt;br /&gt;
|96&lt;br /&gt;
|0&lt;br /&gt;
|$2464.86&lt;br /&gt;
|-&lt;br /&gt;
|Large compute&lt;br /&gt;
|128&lt;br /&gt;
|0&lt;br /&gt;
|$3286.49&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Condo owners can lease their node(s) to other, non-condo users. The lease period is unlimited, with a minimum length of 30 days. The table below shows the payments that non-condo users make to the condo owners. These fees are accumulated in the owner’s account(s) and offset the owner’s obligations. A discount of 10% is applied for leases longer than 90 days.    &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Type of nodes and lease fees for condo nodes&lt;br /&gt;
!Type of node&lt;br /&gt;
!Renters cost/month&lt;br /&gt;
!Long term (90+ days) rent cost/month&lt;br /&gt;
!CPU/node&lt;br /&gt;
!CPU type&lt;br /&gt;
!GPU/node &lt;br /&gt;
!GPU type&lt;br /&gt;
!GPU interface&lt;br /&gt;
|-&lt;br /&gt;
|Large Hybrid&lt;br /&gt;
|$602.52&lt;br /&gt;
|$564.86&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.2 GHz&lt;br /&gt;
|8&lt;br /&gt;
|A100/80&lt;br /&gt;
|SXM&lt;br /&gt;
|-&lt;br /&gt;
|Small Hybrid&lt;br /&gt;
|$205.41&lt;br /&gt;
|$192.57&lt;br /&gt;
|48&lt;br /&gt;
|EPYC, 2.8 GHz&lt;br /&gt;
|2&lt;br /&gt;
|A40, A30, L40&lt;br /&gt;
|PCIe v4&lt;br /&gt;
|-&lt;br /&gt;
|Medium Non GPU&lt;br /&gt;
|$328.65&lt;br /&gt;
|$308.11&lt;br /&gt;
|96&lt;br /&gt;
|EPYC, 4.11GHz&lt;br /&gt;
|0&lt;br /&gt;
|None&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|Large Non GPU&lt;br /&gt;
|$438.20&lt;br /&gt;
|$410.81&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.0 GHz&lt;br /&gt;
|0&lt;br /&gt;
|None &lt;br /&gt;
|NA&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Free time ===&lt;br /&gt;
Any new project &#039;&#039;&#039;from CUNY colleges/centers that participate in MAP-B or MAP-C&#039;&#039;&#039; is entitled to &#039;&#039;&#039;11,520 free CPU hours and 1,440 free GPU hours.&#039;&#039;&#039; Users under &#039;&#039;&#039;MAP-A&#039;&#039;&#039; are not entitled to free time. The free compute hours are intended to help establish a project and are therefore shared by all members of the project; they can be used either by the PI or by any number of project members. It is important to note that &amp;lt;u&amp;gt;free time is per project, not per user account, so any project can have free time only once. External collaborators from outside CUNY are not normally eligible for free time.&amp;lt;/u&amp;gt; Additional hours beyond the free time are charged at MAP plan rates. &#039;&#039;&#039;&amp;lt;u&amp;gt;Please contact the CUNY-HPCC director for further details.&amp;lt;/u&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
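At the MAP rates above, this free allocation corresponds to roughly $172.80 of CPU time (11,520 × $0.015) and $129.60 of GPU time (1,440 × $0.09).&lt;br /&gt;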
&lt;br /&gt;
== Support for research grants ==&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;u&amp;gt;All proposals dated Jan 1st 2026 (&amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; 01/01/26 &amp;lt;/span&amp;gt;) or later&amp;lt;/u&amp;gt;&#039;&#039;&#039; that require computational resources &#039;&#039;&#039;&amp;lt;u&amp;gt;must include a budget for cost-recovery fees at CUNY-HPCC.&amp;lt;/u&amp;gt;&#039;&#039;&#039;  For a project, the PI can choose to: &lt;br /&gt;
&lt;br /&gt;
* lease node(s). This is a useful option for well-defined projects and for those with a large computational component requiring 100% availability of the computational resource. &lt;br /&gt;
* use &amp;quot;on-demand&amp;quot; resources. This is a flexible option, well suited to experimental projects or to exploring new areas of study. The drawback is that resources are shared among all users under the fair-share policy, so immediate access to a resource cannot be guaranteed. &lt;br /&gt;
* participate in the CONDO tier. This is the most beneficial option in terms of resource availability and level of support. It best fits the focused research of a group or groups (e.g. materials science). &lt;br /&gt;
&lt;br /&gt;
In all cases the PI can use the appropriate rates listed above to establish a correct budget for the proposal. The PI should &#039;&#039;&#039;&amp;lt;u&amp;gt;contact the Director of CUNY-HPCC, Dr. Alexander Tzanov&amp;lt;/u&amp;gt;&#039;&#039;&#039; (alexander.tzanov@csi.cuny.edu) to discuss the project&#039;s computational requirements, including optimal and most economical computational workflows, suitable hardware, shared or owned resources, CUNY-HPCC support options, and any other matter concerning a correct and optimal computational budget for the proposal.    &lt;br /&gt;
&lt;br /&gt;
== Partitions and jobs ==&lt;br /&gt;
The only way to submit job(s) to the HPCC servers is through the SLURM batch system. Any job, regardless of its type (interactive, batch, serial, parallel, etc.), must be submitted via SLURM, which allocates the requested resources on the proper server and starts the job(s) according to a predefined, strict fair-share policy. Computational resources (CPU cores, memory, GPUs) are organized in &#039;&#039;&#039;partitions&#039;&#039;&#039;. Users are granted permission to use one or another partition and a corresponding QOS key. The table below shows the partitions and their limits (in progress); a minimal submission-script sketch is given after the partition list below.&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Partition&lt;br /&gt;
!Max cores/job&lt;br /&gt;
!Max jobs/user&lt;br /&gt;
!Total cores/group&lt;br /&gt;
!Time limits&lt;br /&gt;
!Tier&lt;br /&gt;
!&lt;br /&gt;
!GPU types&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|partnsf&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|K20m, V100/16, A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partchem&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A100/80, A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partcfd&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partsym&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partasrc&lt;br /&gt;
|48&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabD&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|V100/16,A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabN&lt;br /&gt;
|384&lt;br /&gt;
|50&lt;br /&gt;
|384&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|None&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partphys&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|L40&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;partnsf&#039;&#039;&#039; is the main partition, with resources assigned across all sub-servers. Users may submit sequential, thread-parallel or distributed-parallel jobs with or without GPUs.&lt;br /&gt;
* &#039;&#039;&#039;partchem&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partphys&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partsym&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partasrc&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partmatlabD&#039;&#039;&#039; allows running MATLAB&#039;s Distributed Parallel Server across the main cluster.&lt;br /&gt;
* &#039;&#039;&#039;partmatlabN&#039;&#039;&#039; provides access to the large MATLAB node with 384 cores and 11 TB of shared memory. It is useful for running parallel MATLAB jobs with the Parallel Toolbox.&lt;br /&gt;
* &#039;&#039;&#039;partdev&#039;&#039;&#039; is dedicated to development. All HPCC users have access to this partition, which has the resources of one computational node with 16 cores, 64 GB of memory and 2 GPUs (K20m). This partition has a time limit of 4 hours.&lt;br /&gt;
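&lt;br /&gt;
The sketch below illustrates a minimal SLURM batch script. The job name, QOS key, resource amounts and program name are placeholders rather than values taken from this page; please consult the brief SLURM manual distributed with new accounts for the exact options supported on each partition.&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --job-name=my_test_job      # placeholder job name&lt;br /&gt;
 #SBATCH --partition=partnsf         # one of the partitions listed above&lt;br /&gt;
 #SBATCH --qos=YOUR_QOS_KEY          # placeholder - use the QOS key granted to your account&lt;br /&gt;
 #SBATCH --nodes=1&lt;br /&gt;
 #SBATCH --ntasks=4                  # 4 CPU cores&lt;br /&gt;
 #SBATCH --gres=gpu:1                # request 1 GPU (omit for CPU-only jobs)&lt;br /&gt;
 #SBATCH --time=02:00:00             # keep well under the partition time limit&lt;br /&gt;
 cd /scratch/$USER&lt;br /&gt;
 srun ./my_program                   # placeholder executable&lt;br /&gt;
&lt;br /&gt;
Such a script would be submitted with sbatch and monitored with squeue -u $USER.&lt;br /&gt;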
&lt;br /&gt;
== Hours of Operation ==&lt;br /&gt;
In order to maximize the use of resources, HPCC applies a “rolling” maintenance scheme across all systems. When downtime is needed, HPCC will notify all users a week or more in advance (unless an emergency occurs). Typically, the fourth Tuesday morning of the month, from 8:00 AM to 12:00 PM, is reserved (but not always used) for scheduled maintenance. Please plan accordingly. Unplanned maintenance to remedy system-related problems may be scheduled as needed outside the above-mentioned window. Reasonable attempts will be made to inform users running on the affected systems when such needs arise. Note that users are strongly encouraged to use checkpoints in their jobs.&lt;br /&gt;
&lt;br /&gt;
== User Support ==&lt;br /&gt;
Users are strongly encouraged to read this Wiki carefully before submitting ticket(s) for help. In particular, the sections on compiling and running parallel programs, and the section on the SLURM batch queueing system will give you the essential knowledge needed to use the CUNY HPCC systems.  We have strived to maintain the most uniform user applications environment possible across the Center&#039;s systems to ease the transfer of applications&lt;br /&gt;
and run scripts among them.  &lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY&lt;br /&gt;
community in parallel programming techniques, HPC computing architecture, and the essentials of using our&lt;br /&gt;
systems. Please follow our mailings on the subject and feel free to inquire about such courses.  We regularly schedule training visits and classes at the various CUNY campuses.  Please let us know if such a training visit is of interest.  In the past, topics have included an overview of parallel programming, GPU programming and architecture, using the evolutionary biology software at the HPC Center, the SLURM queueing system at the CUNY HPC Center, mixed GPU-MPI and OpenMP programming, etc.  Staff members have also presented guest lectures in formal classes throughout the CUNY campuses.    &lt;br /&gt;
&lt;br /&gt;
If you have problems accessing your account and cannot login to the ticketing service, please send an email to:&lt;br /&gt;
&lt;br /&gt;
  [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu] &lt;br /&gt;
&lt;br /&gt;
== Warnings and modes of operation ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. hpchelp@csi.cuny.edu is for questions and account-help communication &#039;&#039;&#039;only&#039;&#039;&#039; and does not accept tickets unless the ticketing system is not operational. For tickets please use the ticketing system mentioned above. This ensures that the staff member with the most appropriate skill set and job-related responsibility will respond to your questions. During the business week you should expect a response within 48 hours, quite often even a same-day response. During the weekend you may not get any response. &lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;&#039;E-mails to hpchelp@csi.cuny.edu must have a valid CUNY e-mail as the reply address.&#039;&#039;&#039; Messages originating from public mail providers (Gmail, Hotmail, etc.) are filtered out.&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;&#039;Do not send questions to individual CUNY HPC Center staff members directly.&#039;&#039;&#039;  These will be returned to the sender with a polite request to submit a ticket or email the Helpline.  This applies to replies to initial questions as well.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff members are focused on providing high-quality support to the user community, but compared&lt;br /&gt;
to other HPC Centers of similar size &#039;&#039;&#039;our staff is extremely lean&#039;&#039;&#039;.  Please make full use of the tools that we have provided (especially&lt;br /&gt;
the Wiki), and feel free to offer suggestions for improved service.  We hope and expect that your experience in using&lt;br /&gt;
our systems will be predictably good and productive.&lt;br /&gt;
&lt;br /&gt;
== User Manual ==&lt;br /&gt;
&lt;br /&gt;
The old version of the user manual provides PBS, not SLURM, batch scripts as examples. CUNY-HPCC currently uses the SLURM scheduler, so users must rely only on the updated brief SLURM manual distributed with new accounts, or ask CUNY-HPCC for a copy of it.&lt;/div&gt;</summary>
		<author><name>Alex</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=970</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=970"/>
		<updated>2026-03-09T17:11:28Z</updated>

		<summary type="html">&lt;p&gt;Alex: /* Empire AI and CUNY-HPCC */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
[[Image:hpcc-panorama3.png]]&lt;br /&gt;
&lt;br /&gt;
== Mission of CUNY-HPCC ==&lt;br /&gt;
The City University of New York (CUNY) High Performance Computing Center (HPCC) is a core research and education facility for the University. It is located on the campus of the College of Staten Island, 2800 Victory Boulevard, Staten Island, New York 10314. The core mission of CUNY-HPCC is to advance education and boost scientific research and discovery at the University by managing state-of-the-art computing infrastructure and by providing comprehensive research support services, including domain-specific expertise in various aspects of computationally intensive research. CUNY is a member of the Empire AI (EAI) consortium, so CUNY-HPCC is a stepping stone for CUNY researchers toward EAI advanced facilities.  &lt;br /&gt;
&lt;br /&gt;
== Empire AI and CUNY-HPCC ==&lt;br /&gt;
The Empire AI consortium includes the &#039;&#039;&#039;CUNY Graduate Center&#039;&#039;&#039;, Columbia University, Cornell University, Icahn School of Medicine, New York University, Rochester Institute of Technology, Rensselaer Polytechnic Institute, the State University of New York, University at Buffalo, and University of Rochester.  &#039;&#039;&#039;CUNY-HPCC provides support and maintains tickets for all CUNY users with an allocation on EAI.&#039;&#039;&#039;  In addition, CUNY-HPCC is a stepping stone for CUNY researchers, since it operates (on a smaller scale) architectures similar to EAI (including nodes with Hopper), namely the extended Alpha servers as well as the new Beta computers. The latter will consist of &#039;&#039;&#039;288 B200 GPUs&#039;&#039;&#039; and recently added RTX 6000 Pro nodes. The expected cost for &#039;&#039;&#039;EAI is $0.50/unit (SU),&#039;&#039;&#039; which will provide CUNY PIs with a rate that is well below a typical AWS rate.  One SU corresponds to one hour of H100 compute, and one hour of B200 compute corresponds to two SU.  In comparison, the CUNY-HPCC recovery costs for the public servers are &#039;&#039;&#039;$0.015 per CPU hour (1 unit) and $0.09 per GPU hour (6 units). See the section on HPCC access plans.&#039;&#039;&#039;   &lt;br /&gt;
&lt;br /&gt;
== CUNY-HPCC services ==&lt;br /&gt;
CUNY-HPCC provides a professionally maintained, modern computational environment and architectures, advanced storage and fast interconnects. CUNY-HPCC:     &lt;br /&gt;
&lt;br /&gt;
:*Supports research computing at CUNY - for faculty, their collaborators at other universities, and their public and private sector partners, and CUNY students and research staff.&lt;br /&gt;
:*Provides state-of-the-art computing resources and comprehensive research support services including expertise and full support for users with allocation on EMPIRE-AI. &lt;br /&gt;
:*Creates opportunities for the CUNY research community to develop new partnerships with the government and private sectors.&lt;br /&gt;
:*Leverages HPCC capabilities to acquire additional research resources for its faculty and graduate students in existing and major new programs.&lt;br /&gt;
:*Maintains tickets for all CUNY users with allocation on EAI.&lt;br /&gt;
&lt;br /&gt;
==Organization of systems and data storage (architecture)==&lt;br /&gt;
&lt;br /&gt;
All user data and project data are kept on the Parallel File System Storage (PFSS), which is mounted only on the login node(s) of all servers. It holds both the user directories and a specific partition called &#039;&#039;&#039;/scratch (see below).&#039;&#039;&#039; The main features of the &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition are&#039;&#039;&#039;: 1.&#039;&#039;&#039; it is mounted on all computational nodes and on all login nodes; &#039;&#039;&#039;2.&#039;&#039;&#039; it is fast; &#039;&#039;&#039;3.&#039;&#039;&#039; &#039;&#039;&#039;/scratch&#039;&#039;&#039; is temporary space – it is &#039;&#039;&#039;not a home directory&#039;&#039;&#039; for accounts, nor can it be used for long-term data preservation.  Users must use the &amp;quot;staging&amp;quot; procedure described below to ensure preservation of their data, codes and parameter files. The figure below is a schematic of the environment.   &lt;br /&gt;
&lt;br /&gt;
Upon  registering with HPCC every user will get 2 directories:&lt;br /&gt;
&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – this is temporary workspace on the HPC systems. Currently scratch resides on the same file system as global/u.&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – space for “home directory”, i.e., storage space on the DSMS for program, scripts, and data&lt;br /&gt;
&lt;br /&gt;
:•	In some instances a user will also have use of disk space on the DSMS in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;projectid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; (IRods). &lt;br /&gt;
[[File:HPCC_structure.png|center|frameless|900x900px]]&lt;br /&gt;
The &#039;&#039;&#039;/global/u/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory has a quota (see below for details), while &#039;&#039;&#039;/scratch/&amp;lt;userid&amp;gt;&#039;&#039;&#039; does not. However, the &#039;&#039;&#039;/scratch&#039;&#039;&#039; space is cleaned up following the rules described below. There are no guarantees of any kind that files in &#039;&#039;&#039;/scratch&#039;&#039;&#039; will be preserved during hardware crashes or clean-up.  Access to all HPCC resources is provided by a bastion host called &#039;&amp;lt;nowiki/&amp;gt;&#039;&#039;&#039;&#039;&#039;chizen&#039;&#039;&#039;&#039;&#039;.  The Data Transfer Node, called &#039;&#039;&#039;Cea&#039;&#039;&#039;, allows file transfer from/to remote sites directly to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; or to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;.&lt;br /&gt;
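&lt;br /&gt;
For illustration only (the host name used here for &#039;&#039;&#039;Cea&#039;&#039;&#039; and the file names are assumptions; the exact host name is provided with your account information), a transfer through &#039;&#039;&#039;Cea&#039;&#039;&#039; might look like:&lt;br /&gt;
&lt;br /&gt;
 # copy an input archive from a local machine to the HPCC home directory via Cea&lt;br /&gt;
 scp input_data.tar.gz YOUR_USERID@cea.csi.cuny.edu:/global/u/YOUR_USERID/&lt;br /&gt;
 # copy results back from /scratch to the local machine&lt;br /&gt;
 scp YOUR_USERID@cea.csi.cuny.edu:/scratch/YOUR_USERID/results.tar.gz .&lt;br /&gt;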
&lt;br /&gt;
==HPC systems==&lt;br /&gt;
&lt;br /&gt;
The HPC Center operates a variety of architectures in order to support complex and demanding workflows.  All computational resources of the different types are united into a single hybrid cluster called Arrow, which deploys symmetric multiprocessor (SMP) nodes with and without GPUs, a distributed shared memory (NUMA) node, fat (large-memory) nodes and advanced SMP nodes with multiple GPUs. The number of GPUs per node varies between 2 and 8, as do the GPU interface and the GPU family: the basic GPU nodes hold two Tesla K20m cards (attached through the PCIe interface), while the most advanced ones support eight Ampere A100 GPUs connected via the SXM interface.   &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Overview of Computational architectures&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SMP&#039;&#039;&#039; servers have several processors (working under a single operating system) which &amp;quot;share everything&amp;quot;: all CPU cores access a common memory block via a shared bus or data path. SMP servers support all combinations of memory vs. CPU (up to the limits of the particular computer). SMP servers are commonly used to run sequential or thread-parallel (e.g. OpenMP) jobs, and they may or may not have GPUs.  &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;cluster&#039;&#039;&#039; is defined as a single system comprising a set of servers interconnected with a high-performance network. Specific software coordinates programs on and/or across them in order to perform computationally intensive tasks. The most common cluster type consists of several identical SMP servers connected via a fast interconnect.  Each SMP member of the cluster is called a &#039;&#039;&#039;node&#039;&#039;&#039;. All nodes run independent copies of the same operating system (OS). Some or all of the nodes may incorporate GPUs.  &lt;br /&gt;
&lt;br /&gt;
Hybrid clusters combine nodes of different architectures. For instance, the main CUNY-HPCC machine is a hybrid cluster called &#039;&#039;&#039;Arrow&#039;&#039;&#039;.  Sixty-two (62) of its nodes are identical GPU-enabled SMP servers, each with 2 K20m GPUs; 3 are SMP servers with extended memory (fat nodes); one node is a distributed shared memory node (NUMA, see below); and 2 are fat SMP servers especially designed to support 8 NVIDIA GPUs per node, connected via the SXM interface. In addition, HPCC operates the cluster &#039;&#039;&#039;Herbert&#039;&#039;&#039;, dedicated only to education. &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;distributed shared memory&#039;&#039;&#039; computer is a tightly coupled server in which the memory is physically distributed but logically unified as a single block. The system resembles an SMP, but the possible number of CPU cores and amount of memory are far beyond the limitations of an SMP.  Because the memory is distributed, access times across the address space are non-uniform; this architecture is therefore called Non-Uniform Memory Access (NUMA).  Similarly to SMP, &#039;&#039;&#039;NUMA&#039;&#039;&#039; systems are typically used for applications such as data mining and decision support, in which processing can be parceled out to a number of processors that collectively work on common data. HPCC operates the &#039;&#039;&#039;NUMA&#039;&#039;&#039; node of Arrow, named &#039;&#039;&#039;Appel&#039;&#039;&#039;.  This node does not have GPUs. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039; Infrastructure systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
o	Master Head Node (&#039;&#039;&#039;MHN/Arrow)&#039;&#039;&#039; is a redundant login node from which all jobs on all servers start. This server is not directly accessible from outside the CSI campus. Note that the name of the main server and of its login nodes is the same, Arrow; thus users can access the Arrow login nodes using the name Arrow or MHN.  &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Chizen&#039;&#039;&#039; is a redundant gateway server which provides access to protected HPCC domain.&lt;br /&gt;
&lt;br /&gt;
o	 &#039;&#039;&#039;Cea&#039;&#039;&#039; is a file transfer node allowing the transfer of files between users’ computers and the /scratch space or /global/u/&amp;lt;userid&amp;gt;. &#039;&#039;&#039;Cea&#039;&#039;&#039; is accessible directly (not only via &#039;&#039;&#039;Chizen&#039;&#039;&#039;), but allows only a limited set of shell commands.  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Table 1&#039;&#039;&#039; below provides a quick summary of the attributes of each of the sub-clusters of the main HPC Center system, called Arrow.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Master Head Node&lt;br /&gt;
!Sub System&lt;br /&gt;
!Tier&lt;br /&gt;
!Type&lt;br /&gt;
!Type of Jobs&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores&lt;br /&gt;
!GPUs&lt;br /&gt;
!Mem/node&lt;br /&gt;
!Mem/core&lt;br /&gt;
!Chip Type&lt;br /&gt;
!GPU Type and Interface&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;17&amp;quot; |&#039;&#039;&#039;&amp;lt;big&amp;gt;Arrow&amp;lt;/big&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Penzias&lt;br /&gt;
| rowspan=&amp;quot;10&amp;quot; |Advanced&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Hybrid Cluster&lt;br /&gt;
|Sequential &amp;amp; Parallel jobs w/wo GPU&lt;br /&gt;
|66&lt;br /&gt;
|16 &lt;br /&gt;
|2&lt;br /&gt;
|64 GB&lt;br /&gt;
|4 GB&lt;br /&gt;
|SB, EP 2.20 GHz&lt;br /&gt;
|K20m GPU, PCIe v2&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Sequential &amp;amp; Parallel jobs&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |1&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|1500 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|36&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|32 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Appel&lt;br /&gt;
|NUMA&lt;br /&gt;
|Massive Parallel, sequential, OpenMP&lt;br /&gt;
|1&lt;br /&gt;
|384&lt;br /&gt;
| -&lt;br /&gt;
|11 TB&lt;br /&gt;
|28 GB&lt;br /&gt;
|IB, 3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Cryo&lt;br /&gt;
|SMP&lt;br /&gt;
|Sequential and Parallel jobs, with GPU&lt;br /&gt;
|1&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|1500 GB&lt;br /&gt;
|37 GB&lt;br /&gt;
|SL, 2.40 GHz&lt;br /&gt;
|V100 (32GB) GPU, SXM&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Blue Moon&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Hybrid Cluster&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Sequential and Parallel jobs w/wo GPU&lt;br /&gt;
|24&lt;br /&gt;
|32&lt;br /&gt;
| -&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |192 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |6 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SL, 2.10 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|32 &lt;br /&gt;
|2&lt;br /&gt;
|V100(16GB) GPU, PCIe&lt;br /&gt;
|-&lt;br /&gt;
|Karle&lt;br /&gt;
|SMP&lt;br /&gt;
|Visualization, MATLAB/Mathematica&lt;br /&gt;
|1&lt;br /&gt;
|36*&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Chizen&lt;br /&gt;
|Gateway&lt;br /&gt;
|No jobs allowed&lt;br /&gt;
| colspan=&amp;quot;7&amp;quot; | -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CFD&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot; |Parallel, Seq, OpenMP &lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|768 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 4.8 GHz&lt;br /&gt;
|A40, PCIe, v4 &lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |PHYS&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|640 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4 GHz&lt;br /&gt;
|L40, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CHEM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.0 GHz&lt;br /&gt;
|A100/40, SXM&lt;br /&gt;
|-&lt;br /&gt;
|ASRC&lt;br /&gt;
|Condo&lt;br /&gt;
|SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|}&lt;br /&gt;
Note: SB = Intel(R) Sandy Bridge, HL = Intel (R) Haswell, IB = Intel (R) Ivy Bridge, SL = Intel (R) Xeon(R) Gold, ER  = AMD(R) EPYC ROMA, EM = AMD(R) EPYC MILAN, EG = AMD (R) EPYC GENOA   &lt;br /&gt;
&lt;br /&gt;
== Recovery of  operational costs ==&lt;br /&gt;
CUNY-HPCC is a not-for-profit core research facility at CUNY. Our mission is to support all types of research that require advanced computational resources. CUNY-HPCC operations are NOT directly or indirectly sponsored by CUNY or the College of Staten Island (CSI). Consequently, CUNY-HPCC applies a cost-recovery model that recaptures only &#039;&#039;&#039;&amp;lt;u&amp;gt;operational costs, with no profit for HPCC&amp;lt;/u&amp;gt;&#039;&#039;&#039;. The recovered costs are calculated from actual documented operational expenses and are break-even for all CUNY users. The methodology follows the CUNY-RF-approved methodology used in other CUNY research facilities. The costs are reviewed, and updated accordingly, twice a year. The cost-recovery charging schema is based on the &#039;&#039;&#039;&amp;lt;u&amp;gt;unit-hour&amp;lt;/u&amp;gt;&#039;&#039;&#039;. A unit can be either a CPU unit or a GPU unit; the definitions are given in the table below:&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Definitions of unit-hour&lt;br /&gt;
!Type of resource&lt;br /&gt;
!Unit-hour &lt;br /&gt;
!For V100, A30, A40 or L40&lt;br /&gt;
!For A100&lt;br /&gt;
|-&lt;br /&gt;
|CPU unit &lt;br /&gt;
|1 cpu core/hour&lt;br /&gt;
| --&lt;br /&gt;
| --&lt;br /&gt;
|-&lt;br /&gt;
|GPU unit &lt;br /&gt;
|(4 cpu cores + 1 GPU thread )/hour&lt;br /&gt;
|4 cpu cores + 1 GPU&lt;br /&gt;
|4 cpu cores and 1/7 A100 &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== HPCC access plans  ===&lt;br /&gt;
a.     &#039;&#039;&#039;Minimum access (MAP):&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Minimum access is designed to provide broad support for research activities at any college, to promote collaboration between colleges, to help establish new research projects, and/or to serve as a testbed for new studies. MAP accounts operate under a strict fair-share policy, so the actual waiting time for a job in a queue depends on the resources used by that account in previous cycles. In addition, all jobs have strict time limits; therefore long jobs must use checkpoints.&lt;br /&gt;
&lt;br /&gt;
The MAP has 3 tiers:&lt;br /&gt;
&lt;br /&gt;
·     A: The Basic tier fee is $5,000 per year. It is designed to support users from colleges with a low level of research activity. The fee covers the infrastructure expenses associated with 1-2 users from these colleges. &lt;br /&gt;
&lt;br /&gt;
·     B: The Medium tier fee is $15,000 per year. The fee covers the infrastructure expenses of up to 12 users from these colleges. In addition, every account under the Medium tier receives 11,520 free CPU hours and 1,440 free GPU hours upon opening. &lt;br /&gt;
&lt;br /&gt;
·     C: The Advanced tier fee is $25,000 per year. The fee covers the infrastructure expenses of all users from these colleges. In addition, every new account under this tier receives 11,520 free CPU hours and 1,440 free GPU hours upon opening.  &lt;br /&gt;
&lt;br /&gt;
The MAP users get charged per CPU/GPU hour at low rate of &#039;&#039;&#039;&amp;lt;u&amp;gt;$0.015 per cpu hour and $0.09 per GPU hour&amp;lt;/u&amp;gt;.&#039;&#039;&#039;   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for MAP users&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.015/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.24/hour&lt;br /&gt;
|-&lt;br /&gt;
| 4 cores + 1 GPU &lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.15/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.33/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2  &lt;br /&gt;
|$0.42/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2  &lt;br /&gt;
|$0.66/hour&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$1.32/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
b.     &#039;&#039;&#039;Computing on demand (CODP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The Computing on Demand Plan (CODP) is open to users from all CUNY colleges that do not participate in the MAP plan but want to use HPCC resources. CODP accounts operate under a strict fair-share policy, so the actual waiting time for a job in a queue depends on the resources previously used. In addition, all jobs have time limits, so long jobs must use checkpoints. CODP users are charged for time (CPU and GPU) per hour. The current rates are &#039;&#039;&#039;$0.018 per CPU hour and $0.11 per GPU hour.&#039;&#039;&#039; Unlike MAP, new CODP accounts do not come with free time. Invoices are generated and sent to users (PI only) at the end of each month. The examples in the following table explain the fee structure:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for CODP plan&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.018/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.288/hour&lt;br /&gt;
|-&lt;br /&gt;
|4 cores + 1 GPU&lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.293/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.334/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 1 GPU&lt;br /&gt;
|32&lt;br /&gt;
|1  &lt;br /&gt;
|$0.666/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU &lt;br /&gt;
|32&lt;br /&gt;
|2&lt;br /&gt;
|$0.756/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
c.  &#039;&#039;&#039;Leasing node(s) (LNP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The Leasing Node Plan allows users to lease node(s) for the duration of a project. The minimum lease time is 30 days (one month), but leases of any length are possible. A discount of 10% is given to users whose lease is longer than 90 days. Discounts cannot be combined. Unlike MAP and CODP, LNP users do not compete for resources and have full access to the leased resources 24/7.   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Lease node(s) fees for MAP users&lt;br /&gt;
|Job (MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/30 days&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$172.80&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$264.96&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$302.40&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$475.20&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$760.0&lt;br /&gt;
|-&lt;br /&gt;
|64 cores + 8 GPU&lt;br /&gt;
|64&lt;br /&gt;
|8&lt;br /&gt;
|$950.40&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for lease a node(s) for &amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; non &amp;lt;/span&amp;gt; -MAP users&lt;br /&gt;
|Job (non-MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/month&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$249.82&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$497.64&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2 &lt;br /&gt;
|$443.23&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$886.64&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40 &lt;br /&gt;
|8&lt;br /&gt;
|$1399.68&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
d.     &#039;&#039;&#039;Condo Ownership (COP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Condo describes a model in which user(s) own a node/server managed by HPCC. Only full-time faculty can own a condo node. Condo nodes are fully integrated into the HPCC infrastructure. The owners pay only HPCC’s infrastructure-support operational fee, which includes only a proportional part of the licenses and materials needed for day-to-day operations. The fees are reviewed twice a year and are currently $0.003 per CPU hour and $0.02 per GPU hour. Condo owners can “borrow” (upon agreement) free of charge any node(s) from the condo stack and can also lease (for a higher fee – see below) their own nodes to non-condo users. The minimum lease period is 30 days. The fees collected from non-condo users offset the owner’s payments.   &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Condo owners costs per year&lt;br /&gt;
|Type of condo node&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/year&lt;br /&gt;
|-&lt;br /&gt;
|Large hybrid SXM&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|$4518.92 &lt;br /&gt;
|-&lt;br /&gt;
|Small hybrid&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|$1540.54&lt;br /&gt;
|-&lt;br /&gt;
|Medium compute&lt;br /&gt;
|96&lt;br /&gt;
|0&lt;br /&gt;
|$2464.86&lt;br /&gt;
|-&lt;br /&gt;
|Large compute&lt;br /&gt;
|128&lt;br /&gt;
|0&lt;br /&gt;
|$3286.49&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Condo owners can lease their node(s) to other, non-condo users. The lease period is unlimited, with a minimum length of 30 days. The table below shows the payments that non-condo users make to the condo owners. These fees are accumulated in the owner’s account(s) and offset the owner’s obligations. A discount of 10% is applied for leases longer than 90 days.    &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Type of nodes and lease fees for condo nodes&lt;br /&gt;
!Type of node&lt;br /&gt;
!Renters cost/month&lt;br /&gt;
!Long term (90+ days) rent cost/month&lt;br /&gt;
!CPU/node&lt;br /&gt;
!CPU type&lt;br /&gt;
!GPU/node &lt;br /&gt;
!GPU type&lt;br /&gt;
!GPU interface&lt;br /&gt;
|-&lt;br /&gt;
|Large Hybrid&lt;br /&gt;
|$602.52&lt;br /&gt;
|$564.86&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.2 GHz&lt;br /&gt;
|8&lt;br /&gt;
|A100/80&lt;br /&gt;
|SXM&lt;br /&gt;
|-&lt;br /&gt;
|Small Hybrid&lt;br /&gt;
|$205.41&lt;br /&gt;
|$192.57&lt;br /&gt;
|48&lt;br /&gt;
|EPYC, 2.8 GHz&lt;br /&gt;
|2&lt;br /&gt;
|A40, A30, L40&lt;br /&gt;
|PCIe v4&lt;br /&gt;
|-&lt;br /&gt;
|Medium Non GPU&lt;br /&gt;
|$328.65&lt;br /&gt;
|$308.11&lt;br /&gt;
|96&lt;br /&gt;
|EPYC, 4.11GHz&lt;br /&gt;
|48&lt;br /&gt;
|None&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|Large Non GPU&lt;br /&gt;
|$438.20&lt;br /&gt;
|$410.81&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.0 GHz&lt;br /&gt;
|128&lt;br /&gt;
|None &lt;br /&gt;
|NA&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Free time ===&lt;br /&gt;
Any new project &#039;&#039;&#039;from CUNY colleges/centers that participate in MAP-B or MAP-C&#039;&#039;&#039; is entitled to &#039;&#039;&#039;11,520 free CPU hours and 1,440 free GPU hours.&#039;&#039;&#039; Users under &#039;&#039;&#039;MAP-A&#039;&#039;&#039; are not entitled to free time. The free compute hours are intended to help establish a project and are therefore shared by all members of the project; they can be used either by the PI or by any number of project members. It is important to note that &amp;lt;u&amp;gt;free time is per project, not per user account, so any project can have free time only once. External collaborators from outside CUNY are not normally eligible for free time.&amp;lt;/u&amp;gt; Additional hours beyond the free time are charged at MAP plan rates. &#039;&#039;&#039;&amp;lt;u&amp;gt;Please contact the CUNY-HPCC director for further details.&amp;lt;/u&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Support for research grants ==&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;u&amp;gt;All proposals dated Jan 1st 2026 (&amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; 01/01/26 &amp;lt;/span&amp;gt;) or later&amp;lt;/u&amp;gt;&#039;&#039;&#039; that require computational resources &#039;&#039;&#039;&amp;lt;u&amp;gt;must include a budget for cost-recovery fees at CUNY-HPCC.&amp;lt;/u&amp;gt;&#039;&#039;&#039;  For a project, the PI can choose to: &lt;br /&gt;
&lt;br /&gt;
* lease node(s). This is a useful option for well-defined projects and for those with a large computational component requiring 100% availability of the computational resource. &lt;br /&gt;
* use &amp;quot;on-demand&amp;quot; resources. This is a flexible option, well suited to experimental projects or to exploring new areas of study. The drawback is that resources are shared among all users under the fair-share policy, so immediate access to a resource cannot be guaranteed. &lt;br /&gt;
* participate in the CONDO tier. This is the most beneficial option in terms of resource availability and level of support. It best fits the focused research of a group or groups (e.g. materials science). &lt;br /&gt;
&lt;br /&gt;
In all cases the PI can use the appropriate rates listed above to establish correct budget for the proposal.  PI should  &#039;&#039;&#039;&amp;lt;u&amp;gt;contact the Director of CUNY-HPCC Dr. Alexander Tzanov&amp;lt;/u&amp;gt;&#039;&#039;&#039;  (alexander.tzanov@csi.cuny.edu) and discuss  the project&#039;s computational  requirements  including optimal and most economical computational workflows, suitable hardware, shared or own resources, CUNY-HPCC support options and any other matter concerning  correct and optimal computational budget for the proposal.    &lt;br /&gt;
&lt;br /&gt;
== Partitions and jobs ==&lt;br /&gt;
The only way to submit job(s) to the HPCC servers is through the SLURM batch system. Any job, regardless of its type (interactive, batch, serial, parallel, etc.), must be submitted via SLURM, which allocates the requested resources on the proper server and starts the job(s) according to a predefined, strict fair-share policy. Computational resources (CPU cores, memory, GPUs) are organized in &#039;&#039;&#039;partitions&#039;&#039;&#039;. Users are granted permission to use one or another partition and a corresponding QOS key. The table below shows the partitions and their limits (in progress).&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Partition&lt;br /&gt;
!Max cores/job&lt;br /&gt;
!Max jobs/user&lt;br /&gt;
!Total cores/group&lt;br /&gt;
!Time limits&lt;br /&gt;
!Tier&lt;br /&gt;
!&lt;br /&gt;
!GPU types&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|partnsf&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|K20m, V100/16, A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partchem&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A100/80, A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partcfd&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partsym&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partasrc&lt;br /&gt;
|48&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabD&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|V100/16,A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabN&lt;br /&gt;
|384&lt;br /&gt;
|50&lt;br /&gt;
|384&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|None&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partphys&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|L40&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;partnsf&#039;&#039;&#039; is the main partition, with resources assigned across all sub-servers. Users may submit sequential, thread-parallel or distributed-parallel jobs with or without GPUs.&lt;br /&gt;
* &#039;&#039;&#039;partchem&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partphys&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partsym&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partasrc&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partmatlabD&#039;&#039;&#039; allows running MATLAB&#039;s Distributed Parallel Server across the main cluster.&lt;br /&gt;
* &#039;&#039;&#039;partmatlabN&#039;&#039;&#039; provides access to the large MATLAB node with 384 cores and 11 TB of shared memory. It is useful for running parallel MATLAB jobs with the Parallel Toolbox.&lt;br /&gt;
* &#039;&#039;&#039;partdev&#039;&#039;&#039; is dedicated to development. All HPCC users have access to this partition, which has the resources of one computational node with 16 cores, 64 GB of memory and 2 GPUs (K20m). This partition has a time limit of 4 hours.&lt;br /&gt;
&lt;br /&gt;
== Hours of Operation ==&lt;br /&gt;
In order to maximize the use of resources HPCC applies “rolling” maintenance scheme across all systems. When downtime is needed, HPCC will notify all users a week or more in advance (unless emergency situation occur).  Typically, the fourth Tuesday mornings in the month from 8:00AM to 12PM is normally reserved (but not always used) for scheduled maintenance.  Please plan accordingly.  Unplanned maintenance to remedy system related problems may be scheduled as needed out of above mentioned days. Reasonable attempts will be made to inform users running on those systems when these needs arise. Note that users are strongly encouraged to use checkpoints in their jobs.&lt;br /&gt;
&lt;br /&gt;
== User Support ==&lt;br /&gt;
Users are strongly encouraged to read this Wiki carefully before submitting ticket(s) for help. In particular, the sections on compiling and running parallel programs, and the section on the SLURM batch queueing system will give you the essential knowledge needed to use the CUNY HPCC systems.  We have strived to maintain the most uniform user applications environment possible across the Center&#039;s systems to ease the transfer of applications&lt;br /&gt;
and run scripts among them.  &lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY&lt;br /&gt;
community in parallel programming techniques, HPC computing architecture, and the essentials of using our&lt;br /&gt;
systems. Please follow our mailings on the subject and feel free to inquire about such courses.  We regularly schedule training visits and classes at the various CUNY campuses.  Please let us know if such a training visit is of interest.  In the past, topics have include an overview of parallel programming, GPU programming and architecture, using the evolutionary biology software at the HPC Center,  the SLURM queueing system at the CUNY HPC Center, Mixed GPU-MPI and OpenMP programming, etc.  Staff has also presented guest lectures at formal classes throughout the CUNY campuses.    &lt;br /&gt;
&lt;br /&gt;
If you have problems accessing your account and cannot login to the ticketing service, please send an email to:&lt;br /&gt;
&lt;br /&gt;
  [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu] &lt;br /&gt;
&lt;br /&gt;
== Warnings and modes of operation ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. hpchelp@csi.cuny.edu is for questions and accounts help communication &#039;&#039;&#039;only&#039;&#039;&#039; and does not accept tickets unless ticketing system is not operational. For tickets please use  the ticketing system mentioned above. This ensures that the person on staff with the most appropriate skill set and job related responsibility will respond to your questions. During the business week you should expect a 48h response, quite  often even same day response. During the weekend you may not get any response. &lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;&#039;E-mails to hpchelp@csi.cuny.edu must have a valid CUNY e-mail as reply address.&#039;&#039;&#039; Messages originated from public mailers (google, hotmail, etc) are filtered out.&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;&#039;Do not send questions to individual CUNY HPC Center staff members directly.&#039;&#039;&#039;  These will be returned to the sender with a polite request to submit a ticket or email the Helpline.  This applies to replies to initial questions as well.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff members are focused on providing high quality support to its user community, but compared&lt;br /&gt;
to other HPC Centers of similar size &#039;&#039;&#039;our staff is extremely lean&#039;&#039;&#039;.  Please make full use of the tools that we have provided (especially&lt;br /&gt;
the Wiki), and feel free to offer suggestions for improved service.  We hope and expect your experience in using&lt;br /&gt;
our systems will be predictably good and productive.&lt;br /&gt;
&lt;br /&gt;
== User Manual ==&lt;br /&gt;
&lt;br /&gt;
The old version of the user manual provides PBS not SLURM batch scripts as examples. Currently CUNY-HPCC uses SLURM scheduler so users must check and use only the updated brief SLURM manual distributed with new accounts or ask CUNY-HPCC for a copy of the latter.&lt;/div&gt;</summary>
		<author><name>Alex</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=969</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=969"/>
		<updated>2026-03-09T17:09:05Z</updated>

		<summary type="html">&lt;p&gt;Alex: /* Empire AI and CUNY-HPCC */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
[[Image:hpcc-panorama3.png]]&lt;br /&gt;
&lt;br /&gt;
== Mission of CUNY-HPCC ==&lt;br /&gt;
The City University of New York (CUNY) High Performance Computing Center (HPCC) is a core research and education facility for the University. It is located on the campus of the College of Staten Island, 2800 Victory Boulevard, Staten Island, New York 10314. The core mission of CUNY-HPCC is to advance education and boost scientific research and discovery at the University by managing state-of-the-art computing infrastructure and by providing comprehensive research support services, including domain-specific expertise in various aspects of computationally intensive research. CUNY is a member of the Empire AI (EAI) consortium, so CUNY-HPCC is a stepping stone for CUNY researchers toward EAI advanced facilities.  &lt;br /&gt;
&lt;br /&gt;
== Empire AI and CUNY-HPCC ==&lt;br /&gt;
The Empire AI consortium includes the &#039;&#039;&#039;CUNY Graduate Center&#039;&#039;&#039;, Columbia University, Cornell University, Icahn School of Medicine, New York University, Rochester Institute of Technology, Rensselaer Polytechnic Institute, the State University of New York, University at Buffalo, and University of Rochester.  &#039;&#039;&#039;CUNY-HPCC provides support and maintains tickets for all CUNY users with an allocation on EAI.&#039;&#039;&#039;  In addition, CUNY-HPCC is a stepping stone for CUNY researchers since it operates (on a smaller scale) architectures similar to EAI (including nodes with Hopper GPUs), such as the extended Alpha servers as well as the new Beta computers. The latter will consist of &#039;&#039;&#039;288 B200 GPUs&#039;&#039;&#039; and recently added RTX 6000 Pro nodes. The expected cost for &#039;&#039;&#039;EAI is $0.50/unit (SU),&#039;&#039;&#039; which will provide CUNY PIs with a rate that is well below a typical AWS rate.  One SU corresponds to one hour of H100 compute, and one hour of B200 compute corresponds to two SU.  In comparison, the CUNY-HPCC recovery costs are &#039;&#039;&#039;$0.015 per CPU hour (1 unit) and $0.09 per GPU hour (6 units).&#039;&#039;&#039;  &lt;br /&gt;
&lt;br /&gt;
== CUNY-HPCC services ==&lt;br /&gt;
CUNY-HPCC provides a professionally maintained, modern computational environment and architectures, advanced storage, and fast interconnects. CUNY-HPCC:     &lt;br /&gt;
&lt;br /&gt;
:*Supports research computing at CUNY - for faculty, their collaborators at other universities, and their public and private sector partners, and CUNY students and research staff.&lt;br /&gt;
:*Provides state-of-the-art computing resources and comprehensive research support services including expertise and full support for users with allocation on EMPIRE-AI. &lt;br /&gt;
:*Creates opportunities for the CUNY research community to develop new partnerships with the government and private sectors.&lt;br /&gt;
:*Leverages HPCC capabilities to acquire additional research resources for its faculty and graduate students in existing and major new programs.&lt;br /&gt;
:*Maintains tickets for all CUNY users with allocation on EAI.&lt;br /&gt;
&lt;br /&gt;
==Organization of systems and data storage (architecture)==&lt;br /&gt;
&lt;br /&gt;
All user data and project data are kept on the Parallel File System Storage (PFSS), which is mounted only on the login node(s) of all servers. It holds both user directories and a specific partition called &#039;&#039;&#039;/scratch (see below).&#039;&#039;&#039; The main features of the &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition are&#039;&#039;&#039;: 1.&#039;&#039;&#039; it is mounted on all computational nodes and on all login nodes; &#039;&#039;&#039;2.&#039;&#039;&#039; it is fast; &#039;&#039;&#039;3.&#039;&#039;&#039; the &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition is temporary space, and is &#039;&#039;&#039;not a home directory&#039;&#039;&#039; for accounts nor can it be used for long-term data preservation.  Users must use the &amp;quot;staging&amp;quot; procedure described below to ensure preservation of their data, codes and parameter files. The figure below is a schematic of the environment.   &lt;br /&gt;
&lt;br /&gt;
Upon  registering with HPCC every user will get 2 directories:&lt;br /&gt;
&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – this is temporary workspace on the HPC systems. Currently scratch resides on the same file system as global/u.&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – space for “home directory”, i.e., storage space on the DSMS for program, scripts, and data&lt;br /&gt;
&lt;br /&gt;
:•	In some instances a user will also have use of disk space on the DSMS in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;projectid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; (IRods). &lt;br /&gt;
[[File:HPCC_structure.png|center|frameless|900x900px]]&lt;br /&gt;
The &#039;&#039;&#039;/global/u/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory has a quota (see below for details) while &#039;&#039;&#039;/scratch/&amp;lt;userid&amp;gt;&#039;&#039;&#039; does not. However, the &#039;&#039;&#039;/scratch&#039;&#039;&#039; space is cleaned up following the rules described below. There are no guarantees of any kind that files in &#039;&#039;&#039;/scratch&#039;&#039;&#039; will be preserved during hardware crashes or clean-up.  Access to all HPCC resources is provided by a bastion host called &#039;&amp;lt;nowiki/&amp;gt;&#039;&#039;&#039;&#039;&#039;chizen&#039;&#039;&#039;&#039;&#039;.  The Data Transfer Node, called &#039;&#039;&#039;Cea&#039;&#039;&#039;, allows file transfer from/to remote sites directly to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; or to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;.         &lt;br /&gt;
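&lt;br /&gt;
For example, a minimal staging sketch using scp through the Data Transfer Node might look like the following (the hostname &#039;&#039;cea.csi.cuny.edu&#039;&#039; is an assumption used here only for illustration; please check your account letter for the actual address):&lt;br /&gt;
&lt;br /&gt;
  # copy an input file from a local computer to scratch (hostname is an assumption) &lt;br /&gt;
  scp input.dat &amp;lt;userid&amp;gt;@cea.csi.cuny.edu:/scratch/&amp;lt;userid&amp;gt;/ &lt;br /&gt;
  # copy results from scratch back to the local computer &lt;br /&gt;
  scp &amp;lt;userid&amp;gt;@cea.csi.cuny.edu:/scratch/&amp;lt;userid&amp;gt;/results.dat . &lt;br /&gt;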
&lt;br /&gt;
==HPC systems==&lt;br /&gt;
&lt;br /&gt;
The HPC Center operates a variety of architectures in order to support complex and demanding workflows.  All computational resources of different types are united into a single hybrid cluster called Arrow. The latter deploys symmetric multiprocessor (also referred to as SMP) nodes with and without GPUs, a distributed shared memory (NUMA) node, fat (large memory) nodes and advanced SMP nodes with multiple GPUs. The number of GPUs per node varies between 2 and 8, as do the GPU interface and GPU family. Thus the basic GPU nodes hold two Tesla K20m GPUs (plugged in through the PCIe interface) while the most advanced ones support eight Ampere A100 GPUs connected via the SXM interface.   &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Overview of Computational architectures&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SMP&#039;&#039;&#039; servers have several processors (working under a single operating system) which &amp;quot;share everything&amp;quot;.  Thus all CPU cores access a common memory block via a shared bus or data path. SMP servers support all combinations of memory vs. CPU (up to the limits of the particular computer). SMP servers are commonly used to run sequential or thread-parallel (e.g. OpenMP) jobs, and they may or may not have GPUs.  &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;cluster&#039;&#039;&#039; is defined as a single system comprising a set of servers interconnected with a high-performance network. Specific software coordinates programs on and/or across them in order to perform computationally intensive tasks. The most common cluster type is the one that consists of several identical SMP servers connected via a fast interconnect.  Each SMP member of the cluster is called a &#039;&#039;&#039;node&#039;&#039;&#039;. All nodes run independent copies of the same operating system (OS). Some or all of the nodes may incorporate GPUs.  &lt;br /&gt;
&lt;br /&gt;
Hybrid clusters combine nodes of different architectures. For instance, the main CUNY-HPCC machine is a hybrid cluster called &#039;&#039;&#039;Arrow&#039;&#039;&#039;.  Sixty-two (62) of its nodes are identical GPU-enabled SMP servers, each with 2 K20m GPUs, 3 are SMP but with extended memory (fat nodes), one is a distributed shared memory node (NUMA, see below) and 2 are fat SMP servers especially designed to support 8 NVIDIA GPUs per node; the latter are connected via the SXM interface. In addition, HPCC operates the cluster &#039;&#039;&#039;Herbert&#039;&#039;&#039;, dedicated only to education. &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;distributed shared memory&#039;&#039;&#039; computer is a tightly coupled server in which the memory is physically distributed but logically unified as a single block. The system resembles an SMP, but the number of CPU cores and the amount of memory possible are far beyond the limitations of SMP.  Because the memory is distributed, the access times across the address space are non-uniform. Thus, this architecture is called Non-Uniform Memory Access (NUMA) architecture.  Similarly to SMP, &#039;&#039;&#039;NUMA&#039;&#039;&#039; systems are typically used for applications such as data mining and decision support systems in which processing can be parceled out to a number of processors that collectively work on common data. HPCC operates the &#039;&#039;&#039;NUMA&#039;&#039;&#039; node of Arrow named &#039;&#039;&#039;Appel&#039;&#039;&#039;.  This node does not have GPUs. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039; Infrastructure systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
o	The Master Head Node (&#039;&#039;&#039;MHN/Arrow&#039;&#039;&#039;) is a redundant login node from which all jobs on all servers start. This server is not directly accessible from outside the CSI campus. Note that the name of the main server and of its login nodes is the same, Arrow. Thus users can access the Arrow login nodes using the name Arrow or MHN.  &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Chizen&#039;&#039;&#039; is a redundant gateway server which provides access to the protected HPCC domain.&lt;br /&gt;
&lt;br /&gt;
o	 &#039;&#039;&#039;Cea&#039;&#039;&#039; is a file transfer node allowing transfer of files between users’ computers and the /scratch space or /global/u/&amp;lt;userid&amp;gt;. &#039;&#039;&#039;Cea&#039;&#039;&#039; is accessible directly (not only via &#039;&#039;&#039;Chizen&#039;&#039;&#039;), but allows only a limited set of shell commands.  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Table 1&#039;&#039;&#039; below provides a quick summary of the attributes of each of the sub-clusters of the main HPC Center cluster called Arrow.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Master Head Node&lt;br /&gt;
!Sub System&lt;br /&gt;
!Tier&lt;br /&gt;
!Type&lt;br /&gt;
!Type of Jobs&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores&lt;br /&gt;
!GPUs&lt;br /&gt;
!Mem/node&lt;br /&gt;
!Mem/core&lt;br /&gt;
!Chip Type&lt;br /&gt;
!GPU Type and Interface&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;17&amp;quot; |&#039;&#039;&#039;&amp;lt;big&amp;gt;Arrow&amp;lt;/big&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Penzias&lt;br /&gt;
| rowspan=&amp;quot;10&amp;quot; |Advanced&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Hybrid Cluster&lt;br /&gt;
|Sequential &amp;amp; Parallel jobs w/wo GPU&lt;br /&gt;
|66&lt;br /&gt;
|16 &lt;br /&gt;
|2&lt;br /&gt;
|64 GB&lt;br /&gt;
|4 GB&lt;br /&gt;
|SB, EP 2.20 GHz&lt;br /&gt;
|K20m GPU, PCIe v2&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Sequential &amp;amp; Parallel jobs&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |1&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|1500 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|36&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|32 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Appel&lt;br /&gt;
|NUMA&lt;br /&gt;
|Massive Parallel, sequential, OpenMP&lt;br /&gt;
|1&lt;br /&gt;
|384&lt;br /&gt;
| -&lt;br /&gt;
|11 TB&lt;br /&gt;
|28 GB&lt;br /&gt;
|IB, 3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Cryo&lt;br /&gt;
|SMP&lt;br /&gt;
|Sequential and Parallel jobs, with GPU&lt;br /&gt;
|1&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|1500 GB&lt;br /&gt;
|37 GB&lt;br /&gt;
|SL, 2.40 GHz&lt;br /&gt;
|V100 (32GB) GPU, SXM&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Blue Moon&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Hybrid Cluster&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Sequential and Parallel jobs w/wo GPU&lt;br /&gt;
|24&lt;br /&gt;
|32&lt;br /&gt;
| -&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |192 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |6 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SL, 2.10 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|32 &lt;br /&gt;
|2&lt;br /&gt;
|V100(16GB) GPU, PCIe&lt;br /&gt;
|-&lt;br /&gt;
|Karle&lt;br /&gt;
|SMP&lt;br /&gt;
|Visualization, MATLAB/Mathematica&lt;br /&gt;
|1&lt;br /&gt;
|36*&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Chizen&lt;br /&gt;
|Gateway&lt;br /&gt;
|No jobs allowed&lt;br /&gt;
| colspan=&amp;quot;7&amp;quot; | -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CFD&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot; |Parallel, Seq, OpenMP &lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|768 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 4.8 GHz&lt;br /&gt;
|A40, PCIe, v4 &lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |PHYS&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|640 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4 GHz&lt;br /&gt;
|L40, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CHEM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.0 GHz&lt;br /&gt;
|A100/40, SXM&lt;br /&gt;
|-&lt;br /&gt;
|ASRC&lt;br /&gt;
|Condo&lt;br /&gt;
|SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|}&lt;br /&gt;
Note: SB = Intel(R) Sandy Bridge, HL = Intel (R) Haswell, IB = Intel (R) Ivy Bridge, SL = Intel (R) Xeon(R) Gold, ER  = AMD(R) EPYC ROME, EM = AMD(R) EPYC MILAN, EG = AMD (R) EPYC GENOA   &lt;br /&gt;
&lt;br /&gt;
== Recovery of  operational costs ==&lt;br /&gt;
CUNY-HPCC is a not-for-profit core research facility at CUNY. Our mission is to support all types of research that require advanced computational resources. CUNY-HPCC operations are not for profit and are NOT directly or indirectly sponsored by CUNY or the College of Staten Island (CSI). Consequently, CUNY-HPCC applies a cost recovery model recapturing only &#039;&#039;&#039;&amp;lt;u&amp;gt;operational costs with no profit for HPCC&amp;lt;/u&amp;gt;&#039;&#039;&#039;. The recovered costs are calculated using actual documented operational expenses and are break-even for all CUNY users. The methodology used is the CUNY-RF-approved methodology used in other CUNY research facilities. The costs are reviewed and consequently updated twice a year. The cost recovery charging schema is based on the &#039;&#039;&#039;&amp;lt;u&amp;gt;unit-hour&amp;lt;/u&amp;gt;&#039;&#039;&#039;. The unit can be either a CPU unit or a GPU unit. The definitions of these are given in the table below:&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Definitions of unit-hour&lt;br /&gt;
!Type of resource&lt;br /&gt;
!Unit-hour &lt;br /&gt;
!For V100, A30, A40 or L40&lt;br /&gt;
!For A100&lt;br /&gt;
|-&lt;br /&gt;
|CPU unit &lt;br /&gt;
|1 cpu core/hour&lt;br /&gt;
| --&lt;br /&gt;
| --&lt;br /&gt;
|-&lt;br /&gt;
|GPU unit &lt;br /&gt;
|(4 cpu cores + 1 GPU thread )/hour&lt;br /&gt;
|4 cpu cores + 1 GPU&lt;br /&gt;
|4 cpu cores and 1/7 A100 &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== HPCC access plans  ===&lt;br /&gt;
a.     &#039;&#039;&#039;Minimum access (MAP):&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Minimum access is designed to provide wide support for research activities in any college, to promote collaboration between colleges, to help establish a new research project, and/or to be a testbed for new studies. MAP accounts operate under a strict fair share policy, so the actual waiting time for a job in a queue depends on resources used by that account in previous cycles. In addition, all jobs have strict time limitations. Therefore long jobs must use checkpoints.&lt;br /&gt;
&lt;br /&gt;
The MAP has 3 tiers:&lt;br /&gt;
&lt;br /&gt;
·     A: Basic tier fee is $5000 per year. It is designed to provide support for users from colleges with low level of research activities. The fee covers infrastructure expenses associated with 1-2 users from these colleges. &lt;br /&gt;
&lt;br /&gt;
·     B:  Medium tier fee is $15,000 per year. The fee covers infrastructure expenses of up to 12 users from these colleges. In addition, every account under medium tier gets free 11520 CPU hours and free 1440 GPU hours upon opening. &lt;br /&gt;
&lt;br /&gt;
·      C: Advanced tier is $25,000 per year. The fee covers infrastructure expenses for all users from these colleges. In addition every new account from this tier gets free 11520 CPU hours and free 1440 GPU hours upon opening.  &lt;br /&gt;
&lt;br /&gt;
MAP users are charged per CPU/GPU hour at the low rate of &#039;&#039;&#039;&amp;lt;u&amp;gt;$0.015 per CPU hour and $0.09 per GPU hour&amp;lt;/u&amp;gt;.&#039;&#039;&#039;   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for MAP users&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.015/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.24/hour&lt;br /&gt;
|-&lt;br /&gt;
| 4 cores + 1 GPU &lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.15/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.33/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2  &lt;br /&gt;
|$0.42/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2  &lt;br /&gt;
|$0.66/hour&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$1.32/hour&lt;br /&gt;
|}&lt;br /&gt;
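For example (assuming, consistently with every row of the table above, that the hourly charge is simply the number of CPU cores times the CPU rate plus the number of GPUs times the GPU rate), a job using 40 cores and 8 GPUs under MAP costs:&lt;br /&gt;
&lt;br /&gt;
  40 x $0.015 + 8 x $0.09 = $0.60 + $0.72 = $1.32 per hour &lt;br /&gt;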
&lt;br /&gt;
&lt;br /&gt;
b.     &#039;&#039;&#039;Computing on demand (CODP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The computing on demand plan (CODP) is open to all users from all CUNY colleges that do not participate in the MAP plan but want to use the HPCC resources. CODP accounts operate under a strict fair share policy, so the actual waiting time for a job in a queue depends on resources previously used. In addition, all jobs have time limitations, so long jobs must use checkpoints. Users in CODP are charged for the time (CPU and GPU) per hour. The current rates are &#039;&#039;&#039;$0.018 per CPU hour and $0.11 per GPU hour.&#039;&#039;&#039;  Unlike MAP, new CODP accounts do not come with free time. Invoices are generated and sent to users (PIs only) at the end of each month.  The examples in the following table explain the fee structure:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for CODP plan&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.018/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.288/hour&lt;br /&gt;
|-&lt;br /&gt;
|4 cores + 1 GPU&lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.293/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.334/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 1 GPU&lt;br /&gt;
|32&lt;br /&gt;
|1  &lt;br /&gt;
|$0.666/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU &lt;br /&gt;
|32&lt;br /&gt;
|2&lt;br /&gt;
|$0.756/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
c.  &#039;&#039;&#039;Leasing node(s) (LNP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The leasing node plan allows users to lease node(s) for the duration of the project. The minimum lease time is 30 days (one month), but leases of any length are possible. Discounts of 10% are given to users whose lease is longer than 90 days. Discounts cannot be combined.  Unlike MAP and CODP, LNP users do not compete for resources and have full access to the leased resources 24/7.   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Lease node(s) fees for MAP users&lt;br /&gt;
|Job (MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/30 days&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$172.80&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$264.96&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2&lt;br /&gt;
|$302.40&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$475.20&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$760.0&lt;br /&gt;
|-&lt;br /&gt;
|64 cores + 8 GPU&lt;br /&gt;
|64&lt;br /&gt;
|8&lt;br /&gt;
|$950.40&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for lease a node(s) for &amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; non &amp;lt;/span&amp;gt; -MAP users&lt;br /&gt;
|Job (non-MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/month&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$249.82&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$497.64&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2 &lt;br /&gt;
|$443.23&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$886.64&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40 &lt;br /&gt;
|8&lt;br /&gt;
|$1399.68&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
d.     &#039;&#039;&#039;Condo Ownership (COP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Condo describes a model in which user(s) own a node/server managed by HPCC. Only full-time faculty can own a condo node. Condo nodes are fully integrated into the HPCC infrastructure. The owners pay only HPCC’s infrastructure support operational fee, which includes only a proportional part of the licenses and materials needed for day-to-day operations. The fees are reviewed twice a year and currently are $0.003 per CPU hour and $0.02 per GPU hour. Condo owners can “borrow” (upon agreement) free of charge any node(s) from the condo stack and can also lease (for a higher fee – see below) their own nodes to non-condo users. The minimum lease time is 30 days. The fees collected from non-condo users offset the payments of the owner.   &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Condo owners costs per year&lt;br /&gt;
|Type of condo node&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/year&lt;br /&gt;
|-&lt;br /&gt;
|Large hybrid SXM&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|$4518.92 &lt;br /&gt;
|-&lt;br /&gt;
|Small hybrid&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|$1540.54&lt;br /&gt;
|-&lt;br /&gt;
|Medium compute&lt;br /&gt;
|96&lt;br /&gt;
|0&lt;br /&gt;
|$2464.86&lt;br /&gt;
|-&lt;br /&gt;
|Large compute&lt;br /&gt;
|128&lt;br /&gt;
|0&lt;br /&gt;
|$3286.49&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Condo owners can lease their node(s) to other non-condo users. The leasing period is unlimited, with a minimum length of 30 days. The table below shows the payments that non-condo users make to the condo owners. These fees are accumulated in the owner&#039;s account(s) and offset the owner&#039;s fees. A discount of 10% is applied for leases longer than 90 days.    &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Type of nodes and lease fees for condo nodes&lt;br /&gt;
!Type of node&lt;br /&gt;
!Renters cost/month&lt;br /&gt;
!Long term (90+ days) rent cost/month&lt;br /&gt;
!CPU/node&lt;br /&gt;
!CPU type&lt;br /&gt;
!GPU/node &lt;br /&gt;
!GPU type&lt;br /&gt;
!GPU interface&lt;br /&gt;
|-&lt;br /&gt;
|Large Hybrid&lt;br /&gt;
|$602.52&lt;br /&gt;
|$564.86&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.2 GHz&lt;br /&gt;
|8&lt;br /&gt;
|A100/80&lt;br /&gt;
|SXM&lt;br /&gt;
|-&lt;br /&gt;
|Small Hybrid&lt;br /&gt;
|$205.41&lt;br /&gt;
|$192.57&lt;br /&gt;
|48&lt;br /&gt;
|EPYC, 2.8 GHz&lt;br /&gt;
|2&lt;br /&gt;
|A40, A30, L40&lt;br /&gt;
|PCIe v4&lt;br /&gt;
|-&lt;br /&gt;
|Medium Non GPU&lt;br /&gt;
|$328.65&lt;br /&gt;
|$308.11&lt;br /&gt;
|96&lt;br /&gt;
|EPYC, 4.11GHz&lt;br /&gt;
|0&lt;br /&gt;
|None&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|Large Non GPU&lt;br /&gt;
|$438.20&lt;br /&gt;
|$410.81&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.0 GHz&lt;br /&gt;
|0&lt;br /&gt;
|None &lt;br /&gt;
|NA&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Free time ===&lt;br /&gt;
Any new project &#039;&#039;&#039;from CUNY colleges/centers that participate in MAP-B or MAP-C&#039;&#039;&#039; is entitled to &#039;&#039;&#039;11520 free CPU hours and 1440 free GPU hours.&#039;&#039;&#039;  Users under &#039;&#039;&#039;MAP-A&#039;&#039;&#039; are not entitled to free time. The free compute hours are intended to help establish a project and thus are shared by all members of the project; free compute hours can be used either by the PI or by any number of the project&#039;s members. It is important to note that &amp;lt;u&amp;gt;free time is per project, not per user account, so any project can have free time only once. External collaborators to CUNY are not normally eligible for free time.&amp;lt;/u&amp;gt; Additional hours beyond the free time are charged at MAP plan rates. &#039;&#039;&#039;&amp;lt;u&amp;gt;Please contact the CUNY-HPCC director for further details.&amp;lt;/u&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Support for research grants ==&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;u&amp;gt;All proposals dated January 1st, 2026 (&amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; 01/01/26 &amp;lt;/span&amp;gt;) or later&amp;lt;/u&amp;gt;&#039;&#039;&#039; that require computational resources &#039;&#039;&#039;&amp;lt;u&amp;gt;must include a budget for cost recovery fees at CUNY-HPCC.&amp;lt;/u&amp;gt;&#039;&#039;&#039;  For a project, the PI can choose to: &lt;br /&gt;
&lt;br /&gt;
* lease node(s). This is a useful option for well-defined projects and those with a high computational component requiring 100% availability of the computational resource. &lt;br /&gt;
* use &amp;quot;on-demand&amp;quot; resources. This is a flexible option, good for experimental projects or for exploring new areas of study. The downside is that resources are shared among all users under the fair share policy, so immediate access to resources cannot be guaranteed. &lt;br /&gt;
* participate in the CONDO tier. This is the most beneficial option in terms of availability of resources and level of support. It best fits the focused research of group(s) (e.g. materials science). &lt;br /&gt;
&lt;br /&gt;
In all cases the PI can use the appropriate rates listed above to establish a correct budget for the proposal.  The PI should &#039;&#039;&#039;&amp;lt;u&amp;gt;contact the Director of CUNY-HPCC, Dr. Alexander Tzanov&amp;lt;/u&amp;gt;&#039;&#039;&#039;  (alexander.tzanov@csi.cuny.edu), to discuss the project&#039;s computational requirements, including optimal and most economical computational workflows, suitable hardware, shared or owned resources, CUNY-HPCC support options, and any other matter concerning a correct and optimal computational budget for the proposal.    &lt;br /&gt;
&lt;br /&gt;
== Partitions and jobs ==&lt;br /&gt;
The only way to submit job(s) to the HPCC servers is through the SLURM batch system.  Any job, regardless of its type (interactive, batch, serial, parallel, etc.), must be submitted via SLURM, which allocates the requested resources on the proper server and starts the job(s) according to a predefined strict fair share policy. Computational resources (CPU cores, memory, GPUs) are organized in &#039;&#039;&#039;partitions&#039;&#039;&#039;. Users are granted permission to use one or another partition and the corresponding QOS key. The table below shows the partitions and their limitations (in progress); an example batch script is given after the partition list below.&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Partition&lt;br /&gt;
!Max cores/job&lt;br /&gt;
!Max jobs/user&lt;br /&gt;
!Total cores/group&lt;br /&gt;
!Time limits&lt;br /&gt;
!Tier&lt;br /&gt;
!&lt;br /&gt;
!GPU types&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|partnsf&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|K20m, V100/16, A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partchem&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A100/80, A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partcfd&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partsym&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partasrc&lt;br /&gt;
|48&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabD&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|V100/16,A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabN&lt;br /&gt;
|384&lt;br /&gt;
|50&lt;br /&gt;
|384&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|None&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partphys&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|L40&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;partnsf&#039;&#039;&#039; is the main partition with assigned resources across all sub-servers. Users may submit sequential, thread parallel or distributed parallel jobs with or without GPU.&lt;br /&gt;
* &#039;&#039;&#039;partchem&#039;&#039;&#039;  is CONDO partition.  &lt;br /&gt;
* &#039;&#039;&#039;partphys&#039;&#039;&#039;   is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partsym&#039;&#039;&#039;    is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partasrc&#039;&#039;&#039;   is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partmatlabD&#039;&#039;&#039; partition allows running MATLAB&#039;s Distributed Parallel Server across the main cluster. &lt;br /&gt;
* &#039;&#039;&#039;partmatlabN&#039;&#039;&#039; partition provides access to the large MATLAB node with 384 cores and 11 TB of shared memory. It is useful for running parallel MATLAB jobs with the Parallel Computing Toolbox.&lt;br /&gt;
* &#039;&#039;&#039;partdev&#039;&#039;&#039; is dedicated to development. All HPCC users have access to this partition with assigned resources of one computational node with 16 cores, 64 GB of memory and 2 GPUs (K20m). This partition has a time limit of 4 hours.&lt;br /&gt;
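&lt;br /&gt;
A minimal example batch script for the &#039;&#039;&#039;partnsf&#039;&#039;&#039; partition is sketched below (the GPU request syntax and the program name are placeholders/assumptions; please check the brief SLURM manual distributed with new accounts for the exact options):&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash &lt;br /&gt;
  #SBATCH --job-name=my_test      # job name (placeholder) &lt;br /&gt;
  #SBATCH --partition=partnsf     # partition from the table above &lt;br /&gt;
  #SBATCH --nodes=1 &lt;br /&gt;
  #SBATCH --ntasks=4              # 4 CPU cores &lt;br /&gt;
  #SBATCH --gres=gpu:1            # 1 GPU (GRES name is an assumption) &lt;br /&gt;
  #SBATCH --time=01:00:00         # well within the 240-hour limit &lt;br /&gt;
  &lt;br /&gt;
  srun ./my_program               # my_program is a placeholder &lt;br /&gt;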
&lt;br /&gt;
== Hours of Operation ==&lt;br /&gt;
In order to maximize the use of resources, HPCC applies a “rolling” maintenance scheme across all systems. When downtime is needed, HPCC will notify all users a week or more in advance (unless an emergency situation occurs).  Typically, the fourth Tuesday morning of the month, from 8:00 AM to 12:00 PM, is reserved (but not always used) for scheduled maintenance.  Please plan accordingly.  Unplanned maintenance to remedy system-related problems may be scheduled as needed outside the above-mentioned days. Reasonable attempts will be made to inform users running on those systems when these needs arise. Note that users are strongly encouraged to use checkpoints in their jobs.&lt;br /&gt;
&lt;br /&gt;
== User Support ==&lt;br /&gt;
Users are strongly encouraged to read this Wiki carefully before submitting ticket(s) for help. In particular, the sections on compiling and running parallel programs, and the section on the SLURM batch queueing system will give you the essential knowledge needed to use the CUNY HPCC systems.  We have strived to maintain the most uniform user applications environment possible across the Center&#039;s systems to ease the transfer of applications&lt;br /&gt;
and run scripts among them.  &lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY&lt;br /&gt;
community in parallel programming techniques, HPC computing architecture, and the essentials of using our&lt;br /&gt;
systems. Please follow our mailings on the subject and feel free to inquire about such courses. We regularly schedule training visits and classes at the various CUNY campuses.  Please let us know if such a training visit is of interest.  In the past, topics have included an overview of parallel programming, GPU programming and architecture, using the evolutionary biology software at the HPC Center, the SLURM queueing system at the CUNY HPC Center, mixed GPU-MPI and OpenMP programming, etc.  Staff have also presented guest lectures in formal classes throughout the CUNY campuses.    &lt;br /&gt;
&lt;br /&gt;
If you have problems accessing your account and cannot login to the ticketing service, please send an email to:&lt;br /&gt;
&lt;br /&gt;
  [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu] &lt;br /&gt;
&lt;br /&gt;
== Warnings and modes of operation ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. hpchelp@csi.cuny.edu is for questions and account-help communication &#039;&#039;&#039;only&#039;&#039;&#039; and does not accept tickets unless the ticketing system itself is not operational. For tickets please use the ticketing system mentioned above. This ensures that the person on staff with the most appropriate skill set and job-related responsibility will respond to your questions. During the business week you should expect a response within 48 hours, quite often even a same-day response. During the weekend you may not get any response. &lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;&#039;E-mails to hpchelp@csi.cuny.edu must have a valid CUNY e-mail as the reply address.&#039;&#039;&#039; Messages originating from public mail providers (Gmail, Hotmail, etc.) are filtered out.&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;&#039;Do not send questions to individual CUNY HPC Center staff members directly.&#039;&#039;&#039;  These will be returned to the sender with a polite request to submit a ticket or email the Helpline.  This applies to replies to initial questions as well.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff is focused on providing high-quality support to its user community, but compared&lt;br /&gt;
to other HPC Centers of similar size &#039;&#039;&#039;our staff is extremely lean&#039;&#039;&#039;.  Please make full use of the tools that we have provided (especially&lt;br /&gt;
the Wiki), and feel free to offer suggestions for improved service.  We hope and expect your experience in using&lt;br /&gt;
our systems will be predictably good and productive.&lt;br /&gt;
&lt;br /&gt;
== User Manual ==&lt;br /&gt;
&lt;br /&gt;
The old version of the user manual provides PBS, not SLURM, batch scripts as examples. CUNY-HPCC currently uses the SLURM scheduler, so users must check and use only the updated brief SLURM manual distributed with new accounts, or ask CUNY-HPCC for a copy of it.&lt;/div&gt;</summary>
		<author><name>Alex</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=968</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=968"/>
		<updated>2026-03-09T17:06:53Z</updated>

		<summary type="html">&lt;p&gt;Alex: /* Empire AI and CUNY-HPCC */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
[[Image:hpcc-panorama3.png]]&lt;br /&gt;
&lt;br /&gt;
== Mission of CUNY-HPCC ==&lt;br /&gt;
The City University of New York (CUNY) High Performance Computing Center (HPCC) is a core research and education facility for the University. It is located on the campus of the College of Staten Island, 2800 Victory Boulevard, Staten Island, New York 10314. The core mission of CUNY-HPCC is to advance education and boost scientific research and discovery at the University by managing state-of-the-art computing infrastructure and by providing comprehensive research support services, including domain-specific expertise in various aspects of computationally intensive research. CUNY is a member of the Empire AI (EAI) consortium, so CUNY-HPCC is a stepping stone for CUNY researchers toward EAI advanced facilities.  &lt;br /&gt;
&lt;br /&gt;
== Empire AI and CUNY-HPCC ==&lt;br /&gt;
The Empire AI consortium includes the &#039;&#039;&#039;CUNY Graduate Center&#039;&#039;&#039;, Columbia University, Cornell University, Icahn School of Medicine, New York University, Rochester Institute of Technology, Rensselaer Polytechnic Institute, the State University of New York, University at Buffalo, and University of Rochester.  &#039;&#039;&#039;CUNY-HPCC provides support and maintains tickets for all CUNY users with an allocation on EAI.&#039;&#039;&#039;  In addition, CUNY-HPCC is a stepping stone for CUNY researchers since it operates (on a smaller scale) architectures similar to EAI (including nodes with Hopper GPUs), such as the extended Alpha servers as well as the new Beta computers. The latter will consist of &#039;&#039;&#039;288 B200 GPUs&#039;&#039;&#039; and recently added RTX 6000 Pro nodes. The expected cost for &#039;&#039;&#039;EAI is $0.50/unit (SU),&#039;&#039;&#039; which will provide CUNY PIs with a rate that is well below a typical AWS rate.  One SU corresponds to one hour of H100 compute, and one hour of B200 compute corresponds to two SU.  In comparison, the CUNY-HPCC recovery costs are &#039;&#039;&#039;$0.015 per CPU hour (1 unit) and $0.09 per GPU hour (6 units).&#039;&#039;&#039;  &lt;br /&gt;
&lt;br /&gt;
== CUNY-HPCC services ==&lt;br /&gt;
CUNY-HPCC provides a professionally maintained, modern computational environment and architectures, advanced storage, and fast interconnects. CUNY-HPCC:     &lt;br /&gt;
&lt;br /&gt;
:*Supports research computing at CUNY - for faculty, their collaborators at other universities, and their public and private sector partners, and CUNY students and research staff.&lt;br /&gt;
:*Provides state-of-the-art computing resources and comprehensive research support services including expertise and full support for users with allocation on EMPIRE-AI. &lt;br /&gt;
:*Creates opportunities for the CUNY research community to develop new partnerships with the government and private sectors.&lt;br /&gt;
:*Leverages HPCC capabilities to acquire additional research resources for its faculty and graduate students in existing and major new programs.&lt;br /&gt;
:*Maintains tickets for all CUNY users with allocation on EAI.&lt;br /&gt;
&lt;br /&gt;
==Organization of systems and data storage (architecture)==&lt;br /&gt;
&lt;br /&gt;
All user data and project data are kept on the Parallel File System Storage (PFSS), which is mounted only on the login node(s) of all servers. It holds both user directories and a specific partition called &#039;&#039;&#039;/scratch (see below).&#039;&#039;&#039; The main features of the &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition are&#039;&#039;&#039;: 1.&#039;&#039;&#039; it is mounted on all computational nodes and on all login nodes; &#039;&#039;&#039;2.&#039;&#039;&#039; it is fast; &#039;&#039;&#039;3.&#039;&#039;&#039; the &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition is temporary space, and is &#039;&#039;&#039;not a home directory&#039;&#039;&#039; for accounts nor can it be used for long-term data preservation.  Users must use the &amp;quot;staging&amp;quot; procedure described below to ensure preservation of their data, codes and parameter files. The figure below is a schematic of the environment.   &lt;br /&gt;
&lt;br /&gt;
Upon  registering with HPCC every user will get 2 directories:&lt;br /&gt;
&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – this is temporary workspace on the HPC systems. Currently scratch resides on the same file system as global/u.&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – space for “home directory”, i.e., storage space on the DSMS for program, scripts, and data&lt;br /&gt;
&lt;br /&gt;
:•	In some instances a user will also have use of disk space on the DSMS in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;projectid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; (IRods). &lt;br /&gt;
[[File:HPCC_structure.png|center|frameless|900x900px]]&lt;br /&gt;
The &#039;&#039;&#039;/global/u/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory has a quota (see below for details) while &#039;&#039;&#039;/scratch/&amp;lt;userid&amp;gt;&#039;&#039;&#039; does not. However, the &#039;&#039;&#039;/scratch&#039;&#039;&#039; space is cleaned up following the rules described below. There are no guarantees of any kind that files in &#039;&#039;&#039;/scratch&#039;&#039;&#039; will be preserved during hardware crashes or clean-up.  Access to all HPCC resources is provided by a bastion host called &#039;&amp;lt;nowiki/&amp;gt;&#039;&#039;&#039;&#039;&#039;chizen&#039;&#039;&#039;&#039;&#039;.  The Data Transfer Node, called &#039;&#039;&#039;Cea&#039;&#039;&#039;, allows file transfer from/to remote sites directly to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; or to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;.         &lt;br /&gt;
&lt;br /&gt;
==HPC systems==&lt;br /&gt;
&lt;br /&gt;
The HPC Center operates a variety of architectures in order to support complex and demanding workflows.  All computational resources of different types are united into a single hybrid cluster called Arrow. The latter deploys symmetric multiprocessor (also referred to as SMP) nodes with and without GPUs, a distributed shared memory (NUMA) node, fat (large memory) nodes and advanced SMP nodes with multiple GPUs. The number of GPUs per node varies between 2 and 8, as do the GPU interface and GPU family. Thus the basic GPU nodes hold two Tesla K20m GPUs (plugged in through the PCIe interface) while the most advanced ones support eight Ampere A100 GPUs connected via the SXM interface.   &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Overview of Computational architectures&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SMP&#039;&#039;&#039; servers have several processors (working under a single operating system) which &amp;quot;share everything&amp;quot;.  Thus all CPU cores access a common memory block via a shared bus or data path. SMP servers support all combinations of memory vs. CPU (up to the limits of the particular computer). SMP servers are commonly used to run sequential or thread-parallel (e.g. OpenMP) jobs, and they may or may not have GPUs.  &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;cluster&#039;&#039;&#039; is defined as a single system comprising a set of servers interconnected with a high-performance network. Specific software coordinates programs on and/or across them in order to perform computationally intensive tasks. The most common cluster type is the one that consists of several identical SMP servers connected via a fast interconnect.  Each SMP member of the cluster is called a &#039;&#039;&#039;node&#039;&#039;&#039;. All nodes run independent copies of the same operating system (OS). Some or all of the nodes may incorporate GPUs.  &lt;br /&gt;
&lt;br /&gt;
Hybrid clusters combine nodes of different architectures. For instance, the main CUNY-HPCC machine is a hybrid cluster called &#039;&#039;&#039;Arrow&#039;&#039;&#039;.  Sixty-two (62) of its nodes are identical GPU-enabled SMP servers, each with 2 K20m GPUs, 3 are SMP but with extended memory (fat nodes), one is a distributed shared memory node (NUMA, see below) and 2 are fat SMP servers especially designed to support 8 NVIDIA GPUs per node; the latter are connected via the SXM interface. In addition, HPCC operates the cluster &#039;&#039;&#039;Herbert&#039;&#039;&#039;, dedicated only to education. &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;distributed shared memory&#039;&#039;&#039; computer is a tightly coupled server in which the memory is physically distributed but logically unified as a single block. The system resembles an SMP, but the number of CPU cores and the amount of memory possible are far beyond the limitations of SMP.  Because the memory is distributed, the access times across the address space are non-uniform. Thus, this architecture is called Non-Uniform Memory Access (NUMA) architecture.  Similarly to SMP, &#039;&#039;&#039;NUMA&#039;&#039;&#039; systems are typically used for applications such as data mining and decision support systems in which processing can be parceled out to a number of processors that collectively work on common data. HPCC operates the &#039;&#039;&#039;NUMA&#039;&#039;&#039; node of Arrow named &#039;&#039;&#039;Appel&#039;&#039;&#039;.  This node does not have GPUs. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039; Infrastructure systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
o	The Master Head Node (&#039;&#039;&#039;MHN/Arrow&#039;&#039;&#039;) is a redundant login node from which all jobs on all servers start. This server is not directly accessible from outside the CSI campus. Note that the name of the main server and of its login nodes is the same, Arrow. Thus users can access the Arrow login nodes using the name Arrow or MHN.  &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Chizen&#039;&#039;&#039; is a redundant gateway server which provides access to the protected HPCC domain.&lt;br /&gt;
&lt;br /&gt;
o	 &#039;&#039;&#039;Cea&#039;&#039;&#039; is a file transfer node allowing transfer of files between users’ computers and the /scratch space or /global/u/&amp;lt;userid&amp;gt;. &#039;&#039;&#039;Cea&#039;&#039;&#039; is accessible directly (not only via &#039;&#039;&#039;Chizen&#039;&#039;&#039;), but allows only a limited set of shell commands.  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Table 1&#039;&#039;&#039; below provides a quick summary of the attributes of each of the sub-clusters of the main HPC Center cluster called Arrow.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Master Head Node&lt;br /&gt;
!Sub System&lt;br /&gt;
!Tier&lt;br /&gt;
!Type&lt;br /&gt;
!Type of Jobs&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores&lt;br /&gt;
!GPUs&lt;br /&gt;
!Mem/node&lt;br /&gt;
!Mem/core&lt;br /&gt;
!Chip Type&lt;br /&gt;
!GPU Type and Interface&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;17&amp;quot; |&#039;&#039;&#039;&amp;lt;big&amp;gt;Arrow&amp;lt;/big&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Penzias&lt;br /&gt;
| rowspan=&amp;quot;10&amp;quot; |Advanced&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Hybrid Cluster&lt;br /&gt;
|Sequential &amp;amp; Parallel jobs w/wo GPU&lt;br /&gt;
|66&lt;br /&gt;
|16 &lt;br /&gt;
|2&lt;br /&gt;
|64 GB&lt;br /&gt;
|4 GB&lt;br /&gt;
|SB, EP 2.20 GHz&lt;br /&gt;
|K20m GPU, PCIe v2&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Sequential &amp;amp; Parallel jobs&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |1&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|1500 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|36&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|32 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Appel&lt;br /&gt;
|NUMA&lt;br /&gt;
|Massive Parallel, sequential, OpenMP&lt;br /&gt;
|1&lt;br /&gt;
|384&lt;br /&gt;
| -&lt;br /&gt;
|11 TB&lt;br /&gt;
|28 GB&lt;br /&gt;
|IB, 3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Cryo&lt;br /&gt;
|SMP&lt;br /&gt;
|Sequential and Parallel jobs, with GPU&lt;br /&gt;
|1&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|1500 GB&lt;br /&gt;
|37 GB&lt;br /&gt;
|SL, 2.40 GHz&lt;br /&gt;
|V100 (32GB) GPU, SXM&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Blue Moon&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Hybrid Cluster&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Sequential and Parallel jobs w/wo GPU&lt;br /&gt;
|24&lt;br /&gt;
|32&lt;br /&gt;
| -&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |192 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |6 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SL, 2.10 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|32 &lt;br /&gt;
|2&lt;br /&gt;
|V100(16GB) GPU, PCIe&lt;br /&gt;
|-&lt;br /&gt;
|Karle&lt;br /&gt;
|SMP&lt;br /&gt;
|Visualization, MATLAB/Mathematica&lt;br /&gt;
|1&lt;br /&gt;
|36*&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Chizen&lt;br /&gt;
|Gateway&lt;br /&gt;
|No jobs allowed&lt;br /&gt;
| colspan=&amp;quot;7&amp;quot; | -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CFD&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot; |Parallel, Seq, OpenMP &lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|768 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 4.8 GHz&lt;br /&gt;
|A40, PCIe, v4 &lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |PHYS&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|640 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4 GHz&lt;br /&gt;
|L40, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CHEM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.0 GHz&lt;br /&gt;
|A100/40, SXM&lt;br /&gt;
|-&lt;br /&gt;
|ASRC&lt;br /&gt;
|Condo&lt;br /&gt;
|SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|}&lt;br /&gt;
Note: SB = Intel(R) Sandy Bridge, HL = Intel (R) Haswell, IB = Intel (R) Ivy Bridge, SL = Intel (R) Xeon(R) Gold, ER  = AMD(R) EPYC ROME, EM = AMD(R) EPYC MILAN, EG = AMD (R) EPYC GENOA   &lt;br /&gt;
&lt;br /&gt;
== Recovery of  operational costs ==&lt;br /&gt;
CUNY-HPCC is a not-for-profit core research facility at CUNY. Our mission is to support all types of research that require advanced computational resources. CUNY-HPCC operations are not for profit and are NOT directly or indirectly sponsored by CUNY or the College of Staten Island (CSI). Consequently, CUNY-HPCC applies a cost recovery model recapturing only &#039;&#039;&#039;&amp;lt;u&amp;gt;operational costs with no profit for HPCC&amp;lt;/u&amp;gt;&#039;&#039;&#039;. The recovered costs are calculated using actual documented operational expenses and are break-even for all CUNY users. The methodology used is the CUNY-RF-approved methodology used in other CUNY research facilities. The costs are reviewed and consequently updated twice a year. The cost recovery charging schema is based on the &#039;&#039;&#039;&amp;lt;u&amp;gt;unit-hour&amp;lt;/u&amp;gt;&#039;&#039;&#039;. The unit can be either a CPU unit or a GPU unit. The definitions of these are given in the table below:&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Definitions of unit-hour&lt;br /&gt;
!Type of resource&lt;br /&gt;
!Unit-hour &lt;br /&gt;
!For V100, A30, A40 or L40&lt;br /&gt;
!For A100&lt;br /&gt;
|-&lt;br /&gt;
|CPU unit &lt;br /&gt;
|1 cpu core/hour&lt;br /&gt;
| --&lt;br /&gt;
| --&lt;br /&gt;
|-&lt;br /&gt;
|GPU unit &lt;br /&gt;
|(4 cpu cores + 1 GPU thread )/hour&lt;br /&gt;
|4 cpu cores + 1 GPU&lt;br /&gt;
|4 cpu cores and 1/7 A100 &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== HPCC access plans  ===&lt;br /&gt;
a.     &#039;&#039;&#039;Minimum access (MAP):&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Minimum access is designed to provide wide support for research activities in any college, to promote collaboration between colleges, to help establish a new research project, and/or to be a testbed for new studies. MAP accounts operate under a strict fair share policy, so the actual waiting time for a job in a queue depends on resources used by that account in previous cycles. In addition, all jobs have strict time limitations. Therefore long jobs must use checkpoints.&lt;br /&gt;
&lt;br /&gt;
The MAP has 3 tiers:&lt;br /&gt;
&lt;br /&gt;
·     A: Basic tier fee is $5000 per year. It is designed to provide support for users from colleges with low level of research activities. The fee covers infrastructure expenses associated with 1-2 users from these colleges. &lt;br /&gt;
&lt;br /&gt;
·     B:  Medium tier fee is $15,000 per year. The fee covers infrastructure expenses of up to 12 users from these colleges. In addition, every account under medium tier gets free 11520 CPU hours and free 1440 GPU hours upon opening. &lt;br /&gt;
&lt;br /&gt;
·      C: Advanced tier is $25,000 per year. The fee covers infrastructure expenses for all users from these colleges. In addition every new account from this tier gets free 11520 CPU hours and free 1440 GPU hours upon opening.  &lt;br /&gt;
&lt;br /&gt;
MAP users are charged per CPU/GPU hour at the low rate of &#039;&#039;&#039;&amp;lt;u&amp;gt;$0.015 per CPU hour and $0.09 per GPU hour&amp;lt;/u&amp;gt;.&#039;&#039;&#039;   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for MAP users&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.015/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.24/hour&lt;br /&gt;
|-&lt;br /&gt;
| 4 cores + 1 GPU &lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.15/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.33/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2  &lt;br /&gt;
|$0.42/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2  &lt;br /&gt;
|$0.66/hour&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$1.32/hour&lt;br /&gt;
|}&lt;br /&gt;
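&lt;br /&gt;
The charges above follow directly from the unit rates. As a quick illustration (a sketch only, assuming the simple formula cost per hour = CPU cores x $0.015 + GPUs x $0.09, which reproduces the MAP table above), a job request can be priced from the command line:&lt;br /&gt;
&lt;br /&gt;
 # Estimate the hourly MAP charge of a 16-core, 1-GPU job&lt;br /&gt;
 cores=16&lt;br /&gt;
 gpus=1&lt;br /&gt;
 awk -v c=$cores -v g=$gpus &#039;BEGIN { printf &amp;quot;MAP rate: $%.3f/hour\n&amp;quot;, c*0.015 + g*0.09 }&#039;&lt;br /&gt;
 # prints: MAP rate: $0.330/hour   (matches the &amp;quot;16 cores + 1 GPU&amp;quot; row above)&lt;br /&gt;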
&lt;br /&gt;
&lt;br /&gt;
b. &#039;&#039;&#039;Computing On Demand Plan (CODP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The Computing On Demand Plan (CODP) is open to users from all CUNY colleges that do not participate in the MAP but want to use HPCC resources. CODP accounts operate under a strict fair-share policy, so the actual waiting time of a job in a queue depends on the resources previously used. In addition, all jobs have time limits, so long jobs must use checkpoints. CODP users are charged for the time (CPU and GPU) per hour. The current rates are &#039;&#039;&#039;$0.018 per CPU hour and $0.11 per GPU hour.&#039;&#039;&#039; Unlike MAP, new CODP accounts do not come with free time. Invoices are generated and sent to the users (PI only) at the end of each month. The examples in the following table explain the fee structure:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for CODP plan&lt;br /&gt;
!Job&lt;br /&gt;
!CPU cores&lt;br /&gt;
!GPU&lt;br /&gt;
!Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.018/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.288/hour&lt;br /&gt;
|-&lt;br /&gt;
|4 cores + 1 GPU&lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.293/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.334/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 1 GPU&lt;br /&gt;
|32&lt;br /&gt;
|1  &lt;br /&gt;
|$0.666/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU &lt;br /&gt;
|32&lt;br /&gt;
|2&lt;br /&gt;
|$0.756/hour&lt;br /&gt;
|}&lt;br /&gt;
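&lt;br /&gt;
For budgeting it can help to translate the hourly CODP rate into a monthly figure. The sketch below is an illustration only (it assumes a CPU-only job that runs continuously for 30 days; actual invoices reflect the time actually consumed):&lt;br /&gt;
&lt;br /&gt;
 # Rough 30-day cost of a 16-core, CPU-only CODP workload running 24/7&lt;br /&gt;
 awk &#039;BEGIN { printf &amp;quot;about $%.2f per 30 days\n&amp;quot;, 16*0.018*24*30 }&#039;&lt;br /&gt;
 # prints: about $207.36 per 30 days&lt;br /&gt;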
&lt;br /&gt;
&lt;br /&gt;
c. &#039;&#039;&#039;Leasing Node Plan (LNP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The Leasing Node Plan allows users to lease node(s) for the duration of a project. The minimum lease time is 30 days (one month), but leases of any length are possible. A 10% discount is given to users whose lease is longer than 90 days. Discounts cannot be combined. Unlike MAP and CODP users, LNP users do not compete for resources and have full access to the leased resources 24/7.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Lease node(s) fees for MAP users&lt;br /&gt;
!Job (MAP users)&lt;br /&gt;
!CPU cores&lt;br /&gt;
!GPU&lt;br /&gt;
!Cost/30 days&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$172.80&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$264.96&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2&lt;br /&gt;
|$302.40&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$475.20&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$760.00&lt;br /&gt;
|-&lt;br /&gt;
|64 cores + 8 GPU&lt;br /&gt;
|64&lt;br /&gt;
|8&lt;br /&gt;
|$950.40&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees to lease node(s) for &amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt;non&amp;lt;/span&amp;gt;-MAP users&lt;br /&gt;
!Job (non-MAP users)&lt;br /&gt;
!CPU cores&lt;br /&gt;
!GPU&lt;br /&gt;
!Cost/month&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$249.82&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$497.64&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1&lt;br /&gt;
|$443.23&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$886.64&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40 &lt;br /&gt;
|8&lt;br /&gt;
|$1399.68&lt;br /&gt;
|}&lt;br /&gt;
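&lt;br /&gt;
The 10% long-term discount can be applied to any of the 30-day figures above. A minimal sketch, assuming the discount simply multiplies the listed monthly rate:&lt;br /&gt;
&lt;br /&gt;
 # 32 cores + 2 GPU under the MAP lease plan, held for more than 90 days&lt;br /&gt;
 monthly=475.20&lt;br /&gt;
 awk -v m=$monthly &#039;BEGIN { printf &amp;quot;discounted rate: $%.2f per 30 days\n&amp;quot;, m*0.90 }&#039;&lt;br /&gt;
 # prints: discounted rate: $427.68 per 30 days&lt;br /&gt;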
&lt;br /&gt;
&lt;br /&gt;
d. &#039;&#039;&#039;Condo Ownership Plan (COP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Condo describes a model in which user(s) own a node/server managed by HPCC. Only full-time faculty can own a condo node. Condo nodes are fully integrated into the HPCC infrastructure. The owners pay only HPCC&#039;s infrastructure support operational fee, which includes only a proportional part of the licenses and materials needed for day-to-day operations. The fees are reviewed twice a year and are currently $0.003 per CPU hour and $0.02 per GPU hour. Condo owners can “borrow” (upon agreement), free of charge, any node(s) from the condo stack and can also lease (for a higher fee – see below) their own nodes to non-condo users. The minimum lease time is 30 days. The fees collected from non-condo users offset the payments of the owner.&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Condo owners costs per year&lt;br /&gt;
!Type of condo node&lt;br /&gt;
!CPU cores&lt;br /&gt;
!GPU&lt;br /&gt;
!Cost/year&lt;br /&gt;
|-&lt;br /&gt;
|Large hybrid SXM&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|$4518.92 &lt;br /&gt;
|-&lt;br /&gt;
|Small hybrid&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|$1540.54&lt;br /&gt;
|-&lt;br /&gt;
|Medium compute&lt;br /&gt;
|96&lt;br /&gt;
|0&lt;br /&gt;
|$2464.86&lt;br /&gt;
|-&lt;br /&gt;
|Large compute&lt;br /&gt;
|128&lt;br /&gt;
|0&lt;br /&gt;
|$3286.49&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Condo owners can lease their node(s) to other non-condo users. The rental period is unlimited, with a minimum length of 30 days. The table below shows the payments that non-condo users make to the condo owners. These fees are accumulated in the owner&#039;s account(s) and offset the owner&#039;s fees. A 10% discount is applied for leases longer than 90 days.&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Type of nodes and lease fees for condo nodes&lt;br /&gt;
!Type of node&lt;br /&gt;
!Renters cost/month&lt;br /&gt;
!Long term (90+ days) rent cost/month&lt;br /&gt;
!CPU/node&lt;br /&gt;
!CPU type&lt;br /&gt;
!GPU/node &lt;br /&gt;
!GPU type&lt;br /&gt;
!GPU interface&lt;br /&gt;
|-&lt;br /&gt;
|Large Hybrid&lt;br /&gt;
|$602.52&lt;br /&gt;
|$564.86&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.2 GHz&lt;br /&gt;
|8&lt;br /&gt;
|A100/80&lt;br /&gt;
|SXM&lt;br /&gt;
|-&lt;br /&gt;
|Small Hybrid&lt;br /&gt;
|$205.41&lt;br /&gt;
|$192.57&lt;br /&gt;
|48&lt;br /&gt;
|EPYC, 2.8 GHz&lt;br /&gt;
|2&lt;br /&gt;
|A40, A30, L40&lt;br /&gt;
|PCIe v4&lt;br /&gt;
|-&lt;br /&gt;
|Medium Non GPU&lt;br /&gt;
|$328.65&lt;br /&gt;
|$308.11&lt;br /&gt;
|96&lt;br /&gt;
|EPYC, 4.11 GHz&lt;br /&gt;
|0&lt;br /&gt;
|None&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|Large Non GPU&lt;br /&gt;
|$438.20&lt;br /&gt;
|$410.81&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.0 GHz&lt;br /&gt;
|0&lt;br /&gt;
|None &lt;br /&gt;
|NA&lt;br /&gt;
|}&lt;br /&gt;
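&lt;br /&gt;
As an illustration of how the offset works (a sketch only, assuming the collected lease fees are subtracted directly from the owner&#039;s annual fee, which is the simplest reading of the paragraph above):&lt;br /&gt;
&lt;br /&gt;
 # Large hybrid SXM condo node leased to a non-condo user for 3 months&lt;br /&gt;
 awk &#039;BEGIN { owner=4518.92; rent=3*602.52; printf &amp;quot;offset $%.2f, net owner cost $%.2f/year\n&amp;quot;, rent, owner-rent }&#039;&lt;br /&gt;
 # prints: offset $1807.56, net owner cost $2711.36/year&lt;br /&gt;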
&lt;br /&gt;
=== Free time ===&lt;br /&gt;
Any new project &#039;&#039;&#039;from CUNY colleges/centers that participate in MAP-B or MAP-C&#039;&#039;&#039; is entitled to &#039;&#039;&#039;11520 free CPU hours and 1440 free GPU hours.&#039;&#039;&#039; Users under &#039;&#039;&#039;MAP-A&#039;&#039;&#039; are not entitled to free time. The free compute hours are intended to help establish a project and are therefore shared by all members of the project; they can be used either by the PI or by any number of the project&#039;s members. It is important to note that &amp;lt;u&amp;gt;free time is per project, not per user account, so any project can receive free time only once. Collaborators external to CUNY are not normally eligible for free time.&amp;lt;/u&amp;gt; Additional hours beyond the free time are charged at MAP plan rates. &#039;&#039;&#039;&amp;lt;u&amp;gt;Please contact the CUNY-HPCC director for further details.&amp;lt;/u&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
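&lt;br /&gt;
As a rough, unofficial illustration of what the free allocation is worth at the MAP rates listed above:&lt;br /&gt;
&lt;br /&gt;
 # Monetary value of the free project allocation at MAP rates&lt;br /&gt;
 awk &#039;BEGIN { printf &amp;quot;free time is worth about $%.2f\n&amp;quot;, 11520*0.015 + 1440*0.09 }&#039;&lt;br /&gt;
 # prints: free time is worth about $302.40&lt;br /&gt;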
&lt;br /&gt;
== Support for research grants ==&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;u&amp;gt;All proposals dated Jan 1st, 2026 (&amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; 01/01/26 &amp;lt;/span&amp;gt;) or later&amp;lt;/u&amp;gt;&#039;&#039;&#039; that require computational resources &#039;&#039;&#039;&amp;lt;u&amp;gt;must include a budget for cost recovery fees at CUNY-HPCC.&amp;lt;/u&amp;gt;&#039;&#039;&#039; For a project the PI can choose one of the following:&lt;br /&gt;
&lt;br /&gt;
* Lease node(s). This is a useful option for well-defined projects and for those with a large computational component requiring 100% availability of the computational resource.&lt;br /&gt;
* Use &amp;quot;on-demand&amp;quot; resources. This is a flexible option, well suited to experimental projects or to exploring new areas of study. The drawback is that resources are shared among all users under the fair-share policy, so immediate access to a resource cannot be guaranteed.&lt;br /&gt;
* Participate in the CONDO tier. This is the most beneficial option in terms of resource availability and level of support. It best fits the focused research of a group or groups (e.g. materials science).&lt;br /&gt;
&lt;br /&gt;
In all cases the PI can use the appropriate rates listed above to establish a correct budget for the proposal. The PI should &#039;&#039;&#039;&amp;lt;u&amp;gt;contact the Director of CUNY-HPCC, Dr. Alexander Tzanov&amp;lt;/u&amp;gt;&#039;&#039;&#039; (alexander.tzanov@csi.cuny.edu) to discuss the project&#039;s computational requirements, including optimal and most economical computational workflows, suitable hardware, shared or owned resources, CUNY-HPCC support options, and any other matter concerning a correct and optimal computational budget for the proposal.&lt;br /&gt;
&lt;br /&gt;
== Partitions and jobs ==&lt;br /&gt;
The only way to submit job(s) to the HPCC servers is through the SLURM batch system. Any job, regardless of its type (interactive, batch, serial, parallel, etc.), must be submitted via SLURM, which allocates the requested resources on the proper server and starts the job(s) according to a predefined strict fair-share policy. Computational resources (CPU cores, memory, GPU) are organized into &#039;&#039;&#039;partitions&#039;&#039;&#039;. Users are granted permission to use one or another partition and the corresponding QOS key. The table below shows the partitions and their limits (in progress); a minimal example batch script is given after the partition descriptions below.&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Partition&lt;br /&gt;
!Max cores/job&lt;br /&gt;
!Max jobs/user&lt;br /&gt;
!Total cores/group&lt;br /&gt;
!Time limits&lt;br /&gt;
!Tier&lt;br /&gt;
!&lt;br /&gt;
!GPU types&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|partnsf&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|K20m, V100/16, A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partchem&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A100/80, A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partcfd&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partsym&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partasrc&lt;br /&gt;
|48&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabD&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|V100/16,A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabN&lt;br /&gt;
|384&lt;br /&gt;
|50&lt;br /&gt;
|384&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|None&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partphys&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|L40&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;partnsf&#039;&#039;&#039; is the main partition, with resources assigned across all sub-servers. Users may submit sequential, thread-parallel or distributed-parallel jobs, with or without GPU.&lt;br /&gt;
* &#039;&#039;&#039;partchem&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partphys&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partsym&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partasrc&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partmatlabD&#039;&#039;&#039; allows running MATLAB&#039;s Distributed Parallel Server across the main cluster.&lt;br /&gt;
* &#039;&#039;&#039;partmatlabN&#039;&#039;&#039; provides access to the large MATLAB node with 384 cores and 11 TB of shared memory. It is useful for running parallel MATLAB jobs with the Parallel Computing Toolbox.&lt;br /&gt;
* &#039;&#039;&#039;partdev&#039;&#039;&#039; is dedicated to development. All HPCC users have access to this partition, which has the resources of one computational node with 16 cores, 64 GB of memory and 2 GPUs (K20m). This partition has a time limit of 4 hours.&lt;br /&gt;
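&lt;br /&gt;
Below is a minimal example of a SLURM batch script for the main partition. It is a sketch only: the job name, module name and executable are placeholders, and the exact partition, QOS key, core count and GPU type to request depend on the permissions granted to your account (see the table above).&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --job-name=myjob           # placeholder job name&lt;br /&gt;
 #SBATCH --partition=partnsf        # main partition from the table above&lt;br /&gt;
 #SBATCH --ntasks=4                 # request 4 tasks (CPU cores)&lt;br /&gt;
 #SBATCH --gres=gpu:1               # request one GPU (omit for CPU-only jobs)&lt;br /&gt;
 #SBATCH --time=24:00:00            # must stay within the partition time limit&lt;br /&gt;
 #SBATCH --output=myjob.%j.out      # %j expands to the SLURM job id&lt;br /&gt;
 module load mymodule               # placeholder; list the available modules with: module avail&lt;br /&gt;
 srun ./my_program                  # placeholder executable&lt;br /&gt;
&lt;br /&gt;
The script is submitted with &#039;&#039;sbatch myjob.slurm&#039;&#039; (the file name is arbitrary) and can be monitored with &#039;&#039;squeue -u $USER&#039;&#039;.&lt;br /&gt;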
&lt;br /&gt;
== Hours of Operation ==&lt;br /&gt;
In order to maximize the use of resources, HPCC applies a “rolling” maintenance scheme across all systems. When downtime is needed, HPCC will notify all users a week or more in advance (unless an emergency situation occurs). The fourth Tuesday morning of the month, from 8:00 AM to 12 PM, is normally reserved (but not always used) for scheduled maintenance. Please plan accordingly. Unplanned maintenance to remedy system-related problems may be scheduled as needed outside of the above-mentioned days. Reasonable attempts will be made to inform users running on the affected systems when such needs arise. Note that users are strongly encouraged to use checkpoints in their jobs.&lt;br /&gt;
&lt;br /&gt;
== User Support ==&lt;br /&gt;
Users are strongly encouraged to read this Wiki carefully before submitting ticket(s) for help. In particular, the sections on compiling and running parallel programs, and the section on the SLURM batch queueing system, will give you the essential knowledge needed to use the CUNY HPCC systems. We have strived to maintain the most uniform user applications environment possible across the Center&#039;s systems to ease the transfer of applications and run scripts among them.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY community in parallel programming techniques, HPC computing architecture, and the essentials of using our systems. Please follow our mailings on the subject and feel free to inquire about such courses. We regularly schedule training visits and classes at the various CUNY campuses; please let us know if such a training visit is of interest. In the past, topics have included an overview of parallel programming, GPU programming and architecture, using the evolutionary biology software at the HPC Center, the SLURM queueing system at the CUNY HPC Center, mixed GPU-MPI and OpenMP programming, etc. Staff have also presented guest lectures in formal classes throughout the CUNY campuses.&lt;br /&gt;
&lt;br /&gt;
If you have problems accessing your account and cannot login to the ticketing service, please send an email to:&lt;br /&gt;
&lt;br /&gt;
  [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu] &lt;br /&gt;
&lt;br /&gt;
== Warnings and modes of operation ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. hpchelp@csi.cuny.edu is for questions and account-help communication &#039;&#039;&#039;only&#039;&#039;&#039; and does not accept tickets unless the ticketing system is not operational. For tickets, please use the ticketing system mentioned above. This ensures that the staff member with the most appropriate skill set and job-related responsibility will respond to your questions. During the business week you should expect a response within 48 hours, quite often even a same-day response. During the weekend you may not get any response.&lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;&#039;E-mails to hpchelp@csi.cuny.edu must have a valid CUNY e-mail as the reply address.&#039;&#039;&#039; Messages originating from public mailers (Google, Hotmail, etc.) are filtered out.&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;&#039;Do not send questions to individual CUNY HPC Center staff members directly.&#039;&#039;&#039;  These will be returned to the sender with a polite request to submit a ticket or email the Helpline.  This applies to replies to initial questions as well.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff members are focused on providing high-quality support to the user community, but compared to other HPC centers of similar size &#039;&#039;&#039;our staff is extremely lean&#039;&#039;&#039;. Please make full use of the tools that we have provided (especially the Wiki), and feel free to offer suggestions for improved service. We hope and expect that your experience in using our systems will be predictably good and productive.&lt;br /&gt;
&lt;br /&gt;
== User Manual ==&lt;br /&gt;
&lt;br /&gt;
The old version of the user manual provides PBS, not SLURM, batch scripts as examples. CUNY-HPCC currently uses the SLURM scheduler, so users must rely only on the updated brief SLURM manual distributed with new accounts, or ask CUNY-HPCC for a copy of it.&lt;/div&gt;</summary>
		<author><name>Alex</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=966</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=966"/>
		<updated>2026-03-09T16:55:20Z</updated>

		<summary type="html">&lt;p&gt;Alex: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
[[Image:hpcc-panorama3.png]]&lt;br /&gt;
&lt;br /&gt;
== Mission of CUNY-HPCC ==&lt;br /&gt;
The City University of New York (CUNY) High Performance Computing Center (HPCC) is a core research and education facility for the University. It is located at campus of the college of College of Staten Island, 2800 Victory Boulevard, Staten Island, New York 10314. The core mission of CUNY-HPCC is to advance education and boost scientific research and discovery at the University by managing state-of-the-art computing infrastructure and by providing comprehensive research support services including domain-specific expertise in various aspects of computationally intensive research. CUNY is a member of Empire AI (EAI) consortium, so CUNY-HPCC is a stepping stone for CUNY researchers toward EAI advanced facilities.  &lt;br /&gt;
&lt;br /&gt;
== Empire AI and CUNY-HPCC ==&lt;br /&gt;
The Empire AI consortium includes the &#039;&#039;&#039;CUNY Graduate Center&#039;&#039;&#039;, Columbia University, Cornell University, Icahn School of Medicine, New York University, Rochester Institute of Technology, Rensselaer Polytechnic Institute, the State University of New York, University at Buffalo, and University of Rochester.  &#039;&#039;&#039;CUNY-HPCC provides support and maintains tickets for all CUNY users with an allocation on EAI.&#039;&#039;&#039;  In addition, CUNY-HPCC is a stepping stone for CUNY researchers, since it operates (on a smaller scale) architectures similar to EAI (including nodes with Hopper GPUs), namely the extended Alpha servers as well as the new Beta computers. The latter will consist of &#039;&#039;&#039;288 B200 GPUs&#039;&#039;&#039; and recently added RTX 6000 Pro nodes. The expected cost for &#039;&#039;&#039;EAI is $0.50 per unit,&#039;&#039;&#039; which will provide CUNY PIs with a rate that is well below a typical AWS rate. In comparison, the CUNY-HPCC recovery costs are &#039;&#039;&#039;$0.015 per CPU hour and $0.09 per GPU hour.&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
== CUNY-HPCC services ==&lt;br /&gt;
CUNY-HPCC provides a professionally maintained, modern computational environment and architectures, advanced storage, and fast interconnects. CUNY-HPCC:     &lt;br /&gt;
&lt;br /&gt;
:*Supports research computing at CUNY for faculty, their collaborators at other universities, and their public and private sector partners, as well as CUNY students and research staff.&lt;br /&gt;
:*Provides state-of-the-art computing resources and comprehensive research support services, including expertise and full support for users with an allocation on EMPIRE-AI. &lt;br /&gt;
:*Creates opportunities for the CUNY research community to develop new partnerships with the government and private sectors.&lt;br /&gt;
:*Leverages HPCC capabilities to acquire additional research resources for CUNY faculty and graduate students in existing and major new programs.&lt;br /&gt;
&lt;br /&gt;
==Organization of systems and data storage (architecture)==&lt;br /&gt;
&lt;br /&gt;
All user data and project data are kept on the Parallel File System Storage (PFSS), which is mounted only on the login node(s) of all servers. It holds both the user directories and a specific partition called &#039;&#039;&#039;/scratch&#039;&#039;&#039; (see below). The main features of the &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition are&#039;&#039;&#039;: 1.&#039;&#039;&#039; it is mounted on all computational nodes and on all login nodes; &#039;&#039;&#039;2.&#039;&#039;&#039; it is fast; &#039;&#039;&#039;3.&#039;&#039;&#039; the &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition is temporary space, but is &#039;&#039;&#039;not a home directory&#039;&#039;&#039; for accounts, nor can it be used for long-term data preservation. Users must use the &amp;quot;staging&amp;quot; procedure described below to ensure preservation of their data, codes, and parameter files. The figure below is a schematic of the environment.   &lt;br /&gt;
&lt;br /&gt;
Upon registering with the HPCC, every user will get two directories:&lt;br /&gt;
&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – this is temporary workspace on the HPC systems. Currently, scratch resides on the same file system as /global/u.&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – space for the “home directory”, i.e., storage space on the DSMS for programs, scripts, and data.&lt;br /&gt;
&lt;br /&gt;
:•	In some instances a user will also have use of disk space on the DSMS in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;projectid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; (iRODS). &lt;br /&gt;
[[File:HPCC_structure.png|center|frameless|900x900px]]&lt;br /&gt;
The &#039;&#039;&#039;/global/u/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory has a quota (see below for details), while the &#039;&#039;&#039;/scratch/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory does not. However, the &#039;&#039;&#039;/scratch&#039;&#039;&#039; space is cleaned up following the rules described below. There are no guarantees of any kind that files in &#039;&#039;&#039;/scratch&#039;&#039;&#039; will be preserved during hardware crashes or clean-up.  Access to all HPCC resources is provided by the bastion host called &#039;&amp;lt;nowiki/&amp;gt;&#039;&#039;&#039;&#039;&#039;chizen&#039;&#039;&#039;&#039;&#039;.  The Data Transfer Node called &#039;&#039;&#039;Cea&#039;&#039;&#039; allows file transfer from/to remote sites directly to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; or to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;.&lt;br /&gt;
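&lt;br /&gt;
As a minimal illustration of the staging procedure, the commands below copy input files to &#039;&#039;&#039;/scratch&#039;&#039;&#039; through the Data Transfer Node and stage the results back afterwards. The fully qualified host name &#039;&#039;cea.csi.cuny.edu&#039;&#039; and the directory name &#039;&#039;myproject&#039;&#039; are placeholders (assumptions), so please confirm the exact host names with CUNY-HPCC before use:&lt;br /&gt;
&lt;br /&gt;
  # copy an input data set from a local workstation to scratch via the DTN (host name is a placeholder)&lt;br /&gt;
  scp -r ./inputs &amp;lt;userid&amp;gt;@cea.csi.cuny.edu:/scratch/&amp;lt;userid&amp;gt;/myproject/&lt;br /&gt;
  # after the job finishes, pull the results back to the local workstation&lt;br /&gt;
  scp -r &amp;lt;userid&amp;gt;@cea.csi.cuny.edu:/scratch/&amp;lt;userid&amp;gt;/myproject/results ./&lt;br /&gt;
  # or, from an Arrow login node, copy results into the quota-protected home space&lt;br /&gt;
  cp -r /scratch/&amp;lt;userid&amp;gt;/myproject/results /global/u/&amp;lt;userid&amp;gt;/myproject/&lt;br /&gt;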
&lt;br /&gt;
==HPC systems==&lt;br /&gt;
&lt;br /&gt;
The HPC Center operates a variety of architectures in order to support complex and demanding workflows. All computational resources of different types are united into a single hybrid cluster called Arrow. The latter deploys symmetric multiprocessor (SMP) nodes with and without GPUs, a distributed shared memory (NUMA) node, fat (large memory) nodes, and advanced SMP nodes with multiple GPUs. The number of GPUs per node varies between 2 and 8, as do the GPU interface and GPU family. Thus the basic GPU nodes hold two Tesla K20m GPUs (attached through a PCIe interface), while the most advanced ones support eight Ampere A100 GPUs connected via the SXM interface.   &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Overview of Computational architectures&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SMP&#039;&#039;&#039; servers have several processors (working under a single operating system) which &amp;quot;share everything&amp;quot;: all CPU cores access a common memory block via a shared bus or data path. SMP servers support all combinations of memory vs. CPU (up to the limits of the particular computer). SMP servers are commonly used to run sequential or thread-parallel (e.g. OpenMP) jobs, and they may or may not have GPUs.  &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;cluster&#039;&#039;&#039; is defined as a single system comprising a set of servers interconnected with a high performance network. Specific software coordinates programs on and/or across these servers in order to perform computationally intensive tasks. The most common cluster type consists of several identical SMP servers connected via a fast interconnect. Each SMP member of the cluster is called a &#039;&#039;&#039;node&#039;&#039;&#039;. All nodes run independent copies of the same operating system (OS). Some or all of the nodes may incorporate GPUs.  &lt;br /&gt;
&lt;br /&gt;
Hybrid clusters combine nodes of different architectures. For instance, the main CUNY-HPCC machine is a hybrid cluster called &#039;&#039;&#039;Arrow&#039;&#039;&#039;. Sixty-two (62) of its nodes are identical GPU-enabled SMP servers, each with 2 K20m GPUs; 3 are SMP servers with extended memory (fat nodes); one node is a distributed shared memory node (NUMA, see below); and 2 are fat SMP servers especially designed to support 8 NVIDIA GPUs per node, connected via the SXM interface. In addition, HPCC operates the cluster &#039;&#039;&#039;Herbert&#039;&#039;&#039;, dedicated only to education. &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;distributed shared memory&#039;&#039;&#039; computer is a tightly coupled server in which the memory is physically distributed but logically unified into a single block. The system resembles an SMP, but the possible number of CPU cores and amount of memory go far beyond the limitations of an SMP. Because the memory is distributed, the access times across the address space are non-uniform; this architecture is therefore called Non-Uniform Memory Access (NUMA) architecture. Similarly to SMP, &#039;&#039;&#039;NUMA&#039;&#039;&#039; systems are typically used for applications such as data mining and decision support systems, in which processing can be parceled out to a number of processors that collectively work on common data. HPCC operates the &#039;&#039;&#039;NUMA&#039;&#039;&#039; node of Arrow, named &#039;&#039;&#039;Appel&#039;&#039;&#039;. This node does not have GPUs. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039; Infrastructure systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
o	The Master Head Node (&#039;&#039;&#039;MHN/Arrow&#039;&#039;&#039;) is a redundant login node from which all jobs on all servers start. This server is not directly accessible from outside the CSI campus. Note that the main server and its login nodes share the same name, Arrow, so users can access the Arrow login nodes using the name Arrow or MHN.  &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Chizen&#039;&#039;&#039; is a redundant gateway server which provides access to the protected HPCC domain.&lt;br /&gt;
&lt;br /&gt;
o	 &#039;&#039;&#039;Cea&#039;&#039;&#039; is a file transfer node allowing transfer of files between users’ computers and the /scratch space or /global/u/&amp;lt;userid&amp;gt;. &#039;&#039;&#039;Cea&#039;&#039;&#039; is accessible directly (not only via &#039;&#039;&#039;Chizen&#039;&#039;&#039;), but allows only a limited set of shell commands.  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Table 1&#039;&#039;&#039; below provides a quick summary of the attributes of each of the sub-clusters of the main HPC Center cluster, called Arrow.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Master Head Node&lt;br /&gt;
!Sub System&lt;br /&gt;
!Tier&lt;br /&gt;
!Type&lt;br /&gt;
!Type of Jobs&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores&lt;br /&gt;
!GPUs&lt;br /&gt;
!Mem/node&lt;br /&gt;
!Mem/core&lt;br /&gt;
!Chip Type&lt;br /&gt;
!GPU Type and Interface&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;17&amp;quot; |&#039;&#039;&#039;&amp;lt;big&amp;gt;Arrow&amp;lt;/big&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Penzias&lt;br /&gt;
| rowspan=&amp;quot;10&amp;quot; |Advanced&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Hybrid Cluster&lt;br /&gt;
|Sequential &amp;amp; Parallel jobs w/wo GPU&lt;br /&gt;
|66&lt;br /&gt;
|16 &lt;br /&gt;
|2&lt;br /&gt;
|64 GB&lt;br /&gt;
|4 GB&lt;br /&gt;
|SB, EP 2.20 GHz&lt;br /&gt;
|K20m GPU, PCIe v2&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Sequential &amp;amp; Parallel jobs&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |1&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|1500 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|36&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|32 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Appel&lt;br /&gt;
|NUMA&lt;br /&gt;
|Massive Parallel, sequential, OpenMP&lt;br /&gt;
|1&lt;br /&gt;
|384&lt;br /&gt;
| -&lt;br /&gt;
|11 TB&lt;br /&gt;
|28 GB&lt;br /&gt;
|IB, 3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Cryo&lt;br /&gt;
|SMP&lt;br /&gt;
|Sequential and Parallel jobs, with GPU&lt;br /&gt;
|1&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|1500 GB&lt;br /&gt;
|37 GB&lt;br /&gt;
|SL, 2.40 GHz&lt;br /&gt;
|V100 (32GB) GPU, SXM&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Blue Moon&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Hybrid Cluster&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Sequential and Parallel jobs w/wo GPU&lt;br /&gt;
|24&lt;br /&gt;
|32&lt;br /&gt;
| -&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |192 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |6 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SL, 2.10 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|32 &lt;br /&gt;
|2&lt;br /&gt;
|V100(16GB) GPU, PCIe&lt;br /&gt;
|-&lt;br /&gt;
|Karle&lt;br /&gt;
|SMP&lt;br /&gt;
|Visualization, MATLAB/Mathematica&lt;br /&gt;
|1&lt;br /&gt;
|36*&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Chizen&lt;br /&gt;
|Gateway&lt;br /&gt;
|No jobs allowed&lt;br /&gt;
| colspan=&amp;quot;7&amp;quot; | -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CFD&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot; |Parallel, Seq, OpenMP &lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|768 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 4.8 GHz&lt;br /&gt;
|A40, PCIe, v4 &lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |PHYS&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|640 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4 GHz&lt;br /&gt;
|L40, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CHEM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.0 GHz&lt;br /&gt;
|A100/40, SXM&lt;br /&gt;
|-&lt;br /&gt;
|ASRC&lt;br /&gt;
|Condo&lt;br /&gt;
|SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|}&lt;br /&gt;
Note: SB = Intel(R) Sandy Bridge, HL = Intel(R) Haswell, IB = Intel(R) Ivy Bridge, SL = Intel(R) Xeon(R) Gold, ER = AMD(R) EPYC ROME, EM = AMD(R) EPYC MILAN, EG = AMD(R) EPYC GENOA   &lt;br /&gt;
&lt;br /&gt;
== Recovery of  operational costs ==&lt;br /&gt;
CUNY-HPCC is a not-for-profit core research facility at CUNY. Our mission is to support all types of research that require advanced computational resources. CUNY-HPCC operations are not for profit and are NOT directly or indirectly sponsored by CUNY or the College of Staten Island (CSI). Consequently, CUNY-HPCC applies a cost recovery model recapturing only &#039;&#039;&#039;&amp;lt;u&amp;gt;operational costs, with no profit for HPCC&amp;lt;/u&amp;gt;&#039;&#039;&#039;. The recovered costs are calculated using actual documented operational expenses and are break-even for all CUNY users. The methodology follows the CUNY-RF-approved methodology used in other CUNY research facilities. The costs are reviewed, and consequently updated, twice a year. The cost recovery charging schema is based on the &#039;&#039;&#039;&amp;lt;u&amp;gt;unit-hour&amp;lt;/u&amp;gt;&#039;&#039;&#039;. A unit can be either a CPU unit or a GPU unit. The definitions of these are given in the table below:&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Definitions of unit-hour&lt;br /&gt;
!Type of resource&lt;br /&gt;
!Unit-hour &lt;br /&gt;
!For V100, A30, A40 or L40&lt;br /&gt;
!For A100&lt;br /&gt;
|-&lt;br /&gt;
|CPU unit &lt;br /&gt;
|1 cpu core/hour&lt;br /&gt;
| --&lt;br /&gt;
| --&lt;br /&gt;
|-&lt;br /&gt;
|GPU unit &lt;br /&gt;
|(4 cpu cores + 1 GPU thread )/hour&lt;br /&gt;
|4 cpu cores + 1 GPU&lt;br /&gt;
|4 cpu cores and 1/7 A100 &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== HPCC access plans  ===&lt;br /&gt;
a.     &#039;&#039;&#039;Minimum access (MAP):&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Minimum access is designed to provide wide support for research activities in any college, to promote collaboration between colleges, to help establish a new research project, and/or to serve as a testbed for new studies. MAP accounts operate under a strict fair share policy, so the actual waiting time for a job in a queue depends on the resources used by that account in previous cycles. In addition, all jobs have strict time limitations; therefore long jobs must use checkpoints.&lt;br /&gt;
&lt;br /&gt;
The MAP has 3 tiers:&lt;br /&gt;
&lt;br /&gt;
·     A: Basic tier fee is $5000 per year. It is designed to provide support for users from colleges with low level of research activities. The fee covers infrastructure expenses associated with 1-2 users from these colleges. &lt;br /&gt;
&lt;br /&gt;
·     B:  Medium tier fee is $15,000 per year. The fee covers infrastructure expenses of up to 12 users from these colleges. In addition, every account under medium tier gets free 11520 CPU hours and free 1440 GPU hours upon opening. &lt;br /&gt;
&lt;br /&gt;
·      C: Advanced tier is $25,000 per year. The fee covers infrastructure expenses for all users from these colleges. In addition every new account from this tier gets free 11520 CPU hours and free 1440 GPU hours upon opening.  &lt;br /&gt;
&lt;br /&gt;
MAP users are charged per CPU/GPU hour at the low rate of &#039;&#039;&#039;&amp;lt;u&amp;gt;$0.015 per CPU hour and $0.09 per GPU hour&amp;lt;/u&amp;gt;.&#039;&#039;&#039; The examples in the table below show the resulting hourly cost for several typical job configurations.   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for MAP users&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.015/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.24/hour&lt;br /&gt;
|-&lt;br /&gt;
| 4 cores + 1 GPU &lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.15/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.33/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2  &lt;br /&gt;
|$0.42/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2  &lt;br /&gt;
|$0.66/hour&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$1.32/hour&lt;br /&gt;
|}&lt;br /&gt;
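&lt;br /&gt;
For orientation, the fees in the table above are consistent with charging each allocated CPU core at the CPU rate and each GPU at the GPU rate. For example, a job using 16 cores and 2 GPUs costs 16 x $0.015 + 2 x $0.09 = $0.42 per hour, and a job using 40 cores and 8 GPUs costs 40 x $0.015 + 8 x $0.09 = $1.32 per hour.&lt;br /&gt;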
&lt;br /&gt;
&lt;br /&gt;
b.     &#039;&#039;&#039;Computing on demand (CODP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The computing on demand plan (CODP) is open to all users from CUNY colleges that do not participate in the MAP plan but want to use the HPCC resources. CODP accounts operate under a strict fair share policy, so the actual waiting time for a job in a queue depends on the resources previously used. In addition, all jobs have time limitations, so long jobs must use checkpoints. Users in CODP are charged for the time (CPU and GPU) per hour. The current rates are &#039;&#039;&#039;$0.018 per CPU hour and $0.11 per GPU hour.&#039;&#039;&#039; Unlike MAP, new CODP accounts do not come with free time. Invoices are generated and sent to users (PIs only) at the end of each month. The examples in the following table explain the fee structure:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for CODP plan&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.018/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.288/hour&lt;br /&gt;
|-&lt;br /&gt;
|4 cores + 1 GPU&lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.293/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.334/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 1 GPU&lt;br /&gt;
|32&lt;br /&gt;
|1  &lt;br /&gt;
|$0.666/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU &lt;br /&gt;
|32&lt;br /&gt;
|2&lt;br /&gt;
|$0.756/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
c.  &#039;&#039;&#039;Leasing node(s) (LNP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The leasing node plan allows users to lease node(s) for the duration of a project. The minimum lease time is 30 days (one month), but leases of any length are possible. A discount of 10% is given to users whose lease is longer than 90 days. Discounts cannot be combined. Unlike MAP and CODP, LNP users do not compete for resources and have full access to the leased resources 24/7.   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Lease node(s) fees for MAP users&lt;br /&gt;
|Job (MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/30 days&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$172.80&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$264.96&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2  &lt;br /&gt;
|$302.40&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$475.20&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$760.0&lt;br /&gt;
|-&lt;br /&gt;
|64 cores + 8 GPU&lt;br /&gt;
|64&lt;br /&gt;
|8&lt;br /&gt;
|$950.40&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for lease a node(s) for &amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; non &amp;lt;/span&amp;gt; -MAP users&lt;br /&gt;
|Job (non-MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/month&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$249.82&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$497.64&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1 &lt;br /&gt;
|$443.23&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$886.64&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40 &lt;br /&gt;
|8&lt;br /&gt;
|$1399.68&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
d.     &#039;&#039;&#039;Condo Ownership (COP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Condo describes a model in which user(s) own a node/server managed by HPCC. Only full-time faculty can own a condo node. Condo nodes are fully integrated into the HPCC infrastructure. The owners pay only HPCC’s infrastructure support operational fee, which includes only a proportional part of the licenses and materials needed for day-to-day operations. The fees are reviewed twice a year and currently are $0.003 per CPU hour and $0.02 per GPU hour. Condo owners can “borrow” (upon agreement), free of charge, any node(s) from the condo stack and can also lease (for a higher fee – see below) their own nodes to non-condo users. The minimum lease time is 30 days. The fees collected from non-condo users offset the payments of the owner.   &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Condo owners costs per year&lt;br /&gt;
|Type of condo node&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/year&lt;br /&gt;
|-&lt;br /&gt;
|Large hybrid SXM&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|$4518.92 &lt;br /&gt;
|-&lt;br /&gt;
|Small hybrid&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|$1540.54&lt;br /&gt;
|-&lt;br /&gt;
|Medium compute&lt;br /&gt;
|96&lt;br /&gt;
|0&lt;br /&gt;
|$2464.86&lt;br /&gt;
|-&lt;br /&gt;
|Large compute&lt;br /&gt;
|128&lt;br /&gt;
|0&lt;br /&gt;
|$3286.49&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Condo owners can contract their node(s) to other non-condo users. The renting period is unlimited, with a minimum length of 30 days. The table below shows the payments with which non-condo users recompense the condo owners. These fees are accumulated in the owners’ account(s) and offset the owners’ obligations. A discount of 10% is applied for leases longer than 90 days.    &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Type of nodes and lease fees for condo nodes&lt;br /&gt;
!Type of node&lt;br /&gt;
!Renters cost/month&lt;br /&gt;
!Long term (90+ days) rent cost/month&lt;br /&gt;
!CPU/node&lt;br /&gt;
!CPU type&lt;br /&gt;
!GPU/node &lt;br /&gt;
!GPU type&lt;br /&gt;
!GPU interface&lt;br /&gt;
|-&lt;br /&gt;
|Large Hybrid&lt;br /&gt;
|$602.52&lt;br /&gt;
|$564.86&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.2 GHz&lt;br /&gt;
|8&lt;br /&gt;
|A100/80&lt;br /&gt;
|SXM&lt;br /&gt;
|-&lt;br /&gt;
|Small Hybrid&lt;br /&gt;
|$205.41&lt;br /&gt;
|$192.57&lt;br /&gt;
|48&lt;br /&gt;
|EPYC, 2.8 GHz&lt;br /&gt;
|2&lt;br /&gt;
|A40, A30, L40&lt;br /&gt;
|PCIe v4&lt;br /&gt;
|-&lt;br /&gt;
|Medium Non GPU&lt;br /&gt;
|$328.65&lt;br /&gt;
|$308.11&lt;br /&gt;
|96&lt;br /&gt;
|EPYC, 4.11GHz&lt;br /&gt;
|0&lt;br /&gt;
|None&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|Large Non GPU&lt;br /&gt;
|$438.20&lt;br /&gt;
|$410.81&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.0 GHz&lt;br /&gt;
|0&lt;br /&gt;
|None &lt;br /&gt;
|NA&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Free time ===&lt;br /&gt;
Any new project &#039;&#039;&#039;from CUNY colleges/centers that participate in MAP-B or MAP-C&#039;&#039;&#039; is entitled to &#039;&#039;&#039;11520 free CPU hours and 1440 free GPU hours.&#039;&#039;&#039; Users under &#039;&#039;&#039;MAP-A&#039;&#039;&#039; are not entitled to free time. The free compute hours are intended to help establish a project and are therefore shared among all members of the project; they can be used either by the PI or by any number of the project&#039;s members. It is important to note that &amp;lt;u&amp;gt;free time is per project, not per user account, so any project can have free time only once. Collaborators external to CUNY are not normally eligible for free time.&amp;lt;/u&amp;gt; Additional hours beyond the free time are charged at MAP plan rates. &#039;&#039;&#039;&amp;lt;u&amp;gt;Please contact the CUNY-HPCC director for further details.&amp;lt;/u&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Support for research grants ==&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;u&amp;gt;All proposals dated Jan 1st 2026 (&amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; 01/01/26 &amp;lt;/span&amp;gt;) or later&amp;lt;/u&amp;gt;&#039;&#039;&#039; that require computational resources &#039;&#039;&#039;&amp;lt;u&amp;gt;must include a budget for cost recovery fees at CUNY-HPCC.&amp;lt;/u&amp;gt;&#039;&#039;&#039;  For a project, the PI can choose to: &lt;br /&gt;
&lt;br /&gt;
* lease the node(s). This is a useful option for well-defined projects and those with a high computational component requiring 100% availability of the computational resource. &lt;br /&gt;
* use &amp;quot;on-demand&amp;quot; resources. This is a flexible option, well suited to experimental projects or to exploring new areas of study. The downside is that resources are shared among all users under the fair share policy, so immediate access to a resource cannot be guaranteed. &lt;br /&gt;
* participate in the CONDO tier. This is the most beneficial option in terms of availability of resources and level of support. It best fits the focused research of group(s) (e.g. materials science). &lt;br /&gt;
&lt;br /&gt;
In all cases the PI can use the appropriate rates listed above to establish a correct budget for the proposal. The PI should &#039;&#039;&#039;&amp;lt;u&amp;gt;contact the Director of CUNY-HPCC, Dr. Alexander Tzanov&amp;lt;/u&amp;gt;&#039;&#039;&#039; (alexander.tzanov@csi.cuny.edu) to discuss the project&#039;s computational requirements, including optimal and most economical computational workflows, suitable hardware, shared or owned resources, CUNY-HPCC support options, and any other matter concerning a correct and optimal computational budget for the proposal.    &lt;br /&gt;
&lt;br /&gt;
== Partitions and jobs ==&lt;br /&gt;
The only way to submit job(s) to HPCC servers is through the SLURM batch system. Any job, regardless of its type (interactive, batch, serial, parallel, etc.), must be submitted via SLURM. The latter allocates the requested resources on the proper server and starts the job(s) according to a predefined strict fair share policy. Computational resources (CPU cores, memory, GPUs) are organized in &#039;&#039;&#039;partitions&#039;&#039;&#039;. Users are granted permission to use one or another partition and the corresponding QOS key. The table below shows the limitations of the partitions (in progress); an example batch script follows the partition list below.&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Partition&lt;br /&gt;
!Max cores/job&lt;br /&gt;
!Max jobs/user&lt;br /&gt;
!Total cores/group&lt;br /&gt;
!Time limits&lt;br /&gt;
!Tier&lt;br /&gt;
!&lt;br /&gt;
!GPU types&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|partnsf&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|K20m, V100/16, A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partchem&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A100/80, A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partcfd&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partsym&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partasrc&lt;br /&gt;
|48&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabD&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|V100/16,A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabN&lt;br /&gt;
|384&lt;br /&gt;
|50&lt;br /&gt;
|384&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|None&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partphys&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|L40&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;partnsf&#039;&#039;&#039; is the main partition with assigned resources across all sub-servers. Users may submit sequential, thread parallel or distributed parallel jobs with or without GPU.&lt;br /&gt;
* &#039;&#039;&#039;partchem&#039;&#039;&#039;  is CONDO partition.  &lt;br /&gt;
* &#039;&#039;&#039;partphys&#039;&#039;&#039;   is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partsym&#039;&#039;&#039;    is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partasrc&#039;&#039;&#039;   is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partmatlabD&#039;&#039;&#039; partition allows running MATLAB&#039;s Distributed Parallel Server across the main cluster. &lt;br /&gt;
* &#039;&#039;&#039;partmatlabN&#039;&#039;&#039; partition provides access to the large MATLAB node with 384 cores and 11 TB of shared memory. It is useful for running parallel MATLAB jobs with the Parallel Toolbox.&lt;br /&gt;
* &#039;&#039;&#039;partdev&#039;&#039;&#039; is dedicated to development. All HPCC users have access to this partition, with assigned resources of one computational node with 16 cores, 64 GB of memory and 2 GPUs (K20m). This partition has a time limit of 4 hours.&lt;br /&gt;
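&lt;br /&gt;
The sketch below is a minimal example (not an official template) of a SLURM batch script requesting 16 cores and one GPU in the &#039;&#039;&#039;partnsf&#039;&#039;&#039; partition. The QOS name, the executable name and the paths are placeholders assumed for illustration; please adapt them to your account and consult the brief SLURM manual distributed with new accounts:&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash&lt;br /&gt;
  #SBATCH --job-name=example          # job name shown by squeue&lt;br /&gt;
  #SBATCH --partition=partnsf         # main partition (see table above)&lt;br /&gt;
  #SBATCH --qos=&amp;lt;your_qos&amp;gt;            # QOS key granted with your account (placeholder)&lt;br /&gt;
  #SBATCH --ntasks=16                 # 16 CPU cores&lt;br /&gt;
  #SBATCH --gres=gpu:1                # one GPU&lt;br /&gt;
  #SBATCH --time=24:00:00             # must stay within the partition time limit&lt;br /&gt;
  cd /scratch/$USER/myproject         # run from scratch, then stage results back (placeholder path)&lt;br /&gt;
  srun ./my_program input.dat         # placeholder executable&lt;br /&gt;
&lt;br /&gt;
Such a script would be submitted from the Arrow login node with &#039;&#039;sbatch myjob.slurm&#039;&#039;, its state can be checked with &#039;&#039;squeue -u $USER&#039;&#039;, and short tests can instead target the &#039;&#039;&#039;partdev&#039;&#039;&#039; partition, which is limited to 4 hours.&lt;br /&gt;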
&lt;br /&gt;
== Hours of Operation ==&lt;br /&gt;
In order to maximize the use of resources, HPCC applies a “rolling” maintenance scheme across all systems. When downtime is needed, HPCC will notify all users a week or more in advance (unless an emergency situation occurs). Typically, the fourth Tuesday morning of each month, from 8:00 AM to 12:00 PM, is reserved (but not always used) for scheduled maintenance; please plan accordingly. Unplanned maintenance to remedy system-related problems may be scheduled as needed outside of the above-mentioned days. Reasonable attempts will be made to inform users running on those systems when these needs arise. Note that users are strongly encouraged to use checkpoints in their jobs.&lt;br /&gt;
&lt;br /&gt;
== User Support ==&lt;br /&gt;
Users are strongly encouraged to read this Wiki carefully before submitting ticket(s) for help. In particular, the sections on compiling and running parallel programs, and the section on the SLURM batch queueing system will give you the essential knowledge needed to use the CUNY HPCC systems.  We have strived to maintain the most uniform user applications environment possible across the Center&#039;s systems to ease the transfer of applications&lt;br /&gt;
and run scripts among them.  &lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY&lt;br /&gt;
community in parallel programming techniques, HPC computing architecture, and the essentials of using our&lt;br /&gt;
systems. Please follow our mailings on the subject and feel free to inquire about such courses. We regularly schedule training visits and classes at the various CUNY campuses; please let us know if such a training visit is of interest. In the past, topics have included an overview of parallel programming, GPU programming and architecture, using the evolutionary biology software at the HPC Center, the SLURM queueing system at the CUNY HPC Center, mixed GPU-MPI and OpenMP programming, etc. Staff have also presented guest lectures in formal classes throughout the CUNY campuses.&lt;br /&gt;
&lt;br /&gt;
If you have problems accessing your account and cannot log in to the ticketing service, please send an email to:&lt;br /&gt;
&lt;br /&gt;
  [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu] &lt;br /&gt;
&lt;br /&gt;
== Warnings and modes of operation ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. hpchelp@csi.cuny.edu is for questions and account help communication &#039;&#039;&#039;only&#039;&#039;&#039; and does not accept tickets unless the ticketing system is not operational. For tickets, please use the ticketing system mentioned above. This ensures that the staff member with the most appropriate skill set and job-related responsibility will respond to your questions. During the business week you should expect a response within 48 hours, quite often even a same-day response. During the weekend you may not get any response. &lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;&#039;E-mails to hpchelp@csi.cuny.edu must have a valid CUNY e-mail as the reply address.&#039;&#039;&#039; Messages originating from public mailers (Google, Hotmail, etc.) are filtered out.&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;&#039;Do not send questions to individual CUNY HPC Center staff members directly.&#039;&#039;&#039;  These will be returned to the sender with a polite request to submit a ticket or email the Helpline.  This applies to replies to initial questions as well.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff are focused on providing high-quality support to the user community, but compared&lt;br /&gt;
to other HPC centers of similar size &#039;&#039;&#039;our staff is extremely lean&#039;&#039;&#039;.  Please make full use of the tools that we have provided (especially&lt;br /&gt;
the Wiki), and feel free to offer suggestions for improved service.  We hope and expect your experience in using&lt;br /&gt;
our systems will be predictably good and productive.&lt;br /&gt;
&lt;br /&gt;
== User Manual ==&lt;br /&gt;
&lt;br /&gt;
The old version of the user manual provides PBS rather than SLURM batch scripts as examples. CUNY-HPCC currently uses the SLURM scheduler, so users must rely only on the updated brief SLURM manual distributed with new accounts, or ask CUNY-HPCC for a copy.&lt;/div&gt;</summary>
		<author><name>Alex</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=965</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=965"/>
		<updated>2026-03-09T16:25:24Z</updated>

		<summary type="html">&lt;p&gt;Alex: /* CUNY-HPCC services */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
[[Image:hpcc-panorama3.png]]&lt;br /&gt;
&lt;br /&gt;
== Mission of CUNY-HPCC ==&lt;br /&gt;
The City University of New York (CUNY) High Performance Computing Center (HPCC) is a core research and education facility for the University. It is located on the campus of the College of Staten Island, 2800 Victory Boulevard, Staten Island, New York 10314. The core mission of CUNY-HPCC is to advance education and boost scientific research and discovery at the University by managing state-of-the-art computing infrastructure and by providing comprehensive research support services, including domain-specific expertise in various aspects of computationally intensive research. CUNY is a member of the Empire AI (EAI) consortium, so CUNY-HPCC is a stepping stone for CUNY researchers toward EAI advanced facilities.  &lt;br /&gt;
&lt;br /&gt;
== CUNY-HPCC services ==&lt;br /&gt;
CUNY-HPCC provides a professionally maintained, modern computational environment and architectures, advanced storage, and fast interconnects. CUNY-HPCC:     &lt;br /&gt;
&lt;br /&gt;
:*Supports research computing at CUNY for faculty, their collaborators at other universities, and their public and private sector partners, as well as CUNY students and research staff.&lt;br /&gt;
:*Provides state-of-the-art computing resources and comprehensive research support services, including expertise and full support for users with an allocation on EMPIRE-AI. &lt;br /&gt;
:*Creates opportunities for the CUNY research community to develop new partnerships with the government and private sectors.&lt;br /&gt;
:*Leverages HPCC capabilities to acquire additional research resources for CUNY faculty and graduate students in existing and major new programs.&lt;br /&gt;
&lt;br /&gt;
==Organization of systems and data storage (architecture)==&lt;br /&gt;
&lt;br /&gt;
All user data and project data are kept on the Parallel File System Storage (PFSS), which is mounted only on the login node(s) of all servers. It holds both the user directories and a specific partition called &#039;&#039;&#039;/scratch&#039;&#039;&#039; (see below). The main features of the &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition are&#039;&#039;&#039;: 1.&#039;&#039;&#039; it is mounted on all computational nodes and on all login nodes; &#039;&#039;&#039;2.&#039;&#039;&#039; it is fast; &#039;&#039;&#039;3.&#039;&#039;&#039; the &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition is temporary space, but is &#039;&#039;&#039;not a home directory&#039;&#039;&#039; for accounts, nor can it be used for long-term data preservation. Users must use the &amp;quot;staging&amp;quot; procedure described below to ensure preservation of their data, codes, and parameter files. The figure below is a schematic of the environment.   &lt;br /&gt;
&lt;br /&gt;
Upon registering with the HPCC, every user will get two directories:&lt;br /&gt;
&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – this is temporary workspace on the HPC systems. Currently, scratch resides on the same file system as /global/u.&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – space for the “home directory”, i.e., storage space on the DSMS for programs, scripts, and data.&lt;br /&gt;
&lt;br /&gt;
:•	In some instances a user will also have use of disk space on the DSMS in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;projectid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; (iRODS). &lt;br /&gt;
[[File:HPCC_structure.png|center|frameless|900x900px]]&lt;br /&gt;
The &#039;&#039;&#039;/global/u/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory has a quota (see below for details), while the &#039;&#039;&#039;/scratch/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory does not. However, the &#039;&#039;&#039;/scratch&#039;&#039;&#039; space is cleaned up following the rules described below. There are no guarantees of any kind that files in &#039;&#039;&#039;/scratch&#039;&#039;&#039; will be preserved during hardware crashes or clean-up.  Access to all HPCC resources is provided by the bastion host called &#039;&amp;lt;nowiki/&amp;gt;&#039;&#039;&#039;&#039;&#039;chizen&#039;&#039;&#039;&#039;&#039;.  The Data Transfer Node called &#039;&#039;&#039;Cea&#039;&#039;&#039; allows file transfer from/to remote sites directly to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; or to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
==HPC systems==&lt;br /&gt;
&lt;br /&gt;
The HPC Center operates a variety of architectures in order to support complex and demanding workflows. All computational resources of different types are united into a single hybrid cluster called Arrow. The latter deploys symmetric multiprocessor (SMP) nodes with and without GPUs, a distributed shared memory (NUMA) node, fat (large memory) nodes, and advanced SMP nodes with multiple GPUs. The number of GPUs per node varies between 2 and 8, as do the GPU interface and GPU family. Thus the basic GPU nodes hold two Tesla K20m GPUs (attached through a PCIe interface), while the most advanced ones support eight Ampere A100 GPUs connected via the SXM interface.   &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Overview of Computational architectures&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SMP&#039;&#039;&#039; servers have several processors (working under a single operating system) which &amp;quot;share everything&amp;quot;: all CPU cores access a common memory block via a shared bus or data path. SMP servers support all combinations of memory vs. CPU (up to the limits of the particular computer). SMP servers are commonly used to run sequential or thread-parallel (e.g. OpenMP) jobs, and they may or may not have GPUs.  &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;cluster&#039;&#039;&#039; is defined as a single system comprising a set of servers interconnected with a high performance network. Specific software coordinates programs on and/or across these servers in order to perform computationally intensive tasks. The most common cluster type consists of several identical SMP servers connected via a fast interconnect. Each SMP member of the cluster is called a &#039;&#039;&#039;node&#039;&#039;&#039;. All nodes run independent copies of the same operating system (OS). Some or all of the nodes may incorporate GPUs.  &lt;br /&gt;
&lt;br /&gt;
Hybrid clusters combine nodes of different architectures. For instance, the main CUNY-HPCC machine is a hybrid cluster called &#039;&#039;&#039;Arrow&#039;&#039;&#039;. Sixty-two (62) of its nodes are identical GPU-enabled SMP servers, each with 2 K20m GPUs; 3 are SMP servers with extended memory (fat nodes); one node is a distributed shared memory node (NUMA, see below); and 2 are fat SMP servers especially designed to support 8 NVIDIA GPUs per node, connected via the SXM interface. In addition, HPCC operates the cluster &#039;&#039;&#039;Herbert&#039;&#039;&#039;, dedicated only to education. &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;distributed shared memory&#039;&#039;&#039; computer is a tightly coupled server in which the memory is physically distributed but logically unified into a single block. The system resembles an SMP, but the possible number of CPU cores and amount of memory go far beyond the limitations of an SMP. Because the memory is distributed, the access times across the address space are non-uniform; this architecture is therefore called Non-Uniform Memory Access (NUMA) architecture. Similarly to SMP, &#039;&#039;&#039;NUMA&#039;&#039;&#039; systems are typically used for applications such as data mining and decision support systems, in which processing can be parceled out to a number of processors that collectively work on common data. HPCC operates the &#039;&#039;&#039;NUMA&#039;&#039;&#039; node of Arrow, named &#039;&#039;&#039;Appel&#039;&#039;&#039;. This node does not have GPUs. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039; Infrastructure systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
o	The Master Head Node (&#039;&#039;&#039;MHN/Arrow&#039;&#039;&#039;) is a redundant login node from which all jobs on all servers start. This server is not directly accessible from outside the CSI campus. Note that the main server and its login nodes share the same name, Arrow, so users can access the Arrow login nodes using the name Arrow or MHN.  &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Chizen&#039;&#039;&#039; is a redundant gateway server which provides access to the protected HPCC domain.&lt;br /&gt;
&lt;br /&gt;
o	 &#039;&#039;&#039;Cea&#039;&#039;&#039; is a file transfer node allowing transfer of files between users’ computers and the /scratch space or /global/u/&amp;lt;userid&amp;gt;. &#039;&#039;&#039;Cea&#039;&#039;&#039; is accessible directly (not only via &#039;&#039;&#039;Chizen&#039;&#039;&#039;), but allows only a limited set of shell commands.  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Table 1&#039;&#039;&#039; below provides a quick summary of the attributes of each of the sub-clusters of the main HPC Center cluster, called Arrow.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Master Head Node&lt;br /&gt;
!Sub System&lt;br /&gt;
!Tier&lt;br /&gt;
!Type&lt;br /&gt;
!Type of Jobs&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores&lt;br /&gt;
!GPUs&lt;br /&gt;
!Mem/node&lt;br /&gt;
!Mem/core&lt;br /&gt;
!Chip Type&lt;br /&gt;
!GPU Type and Interface&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;17&amp;quot; |&#039;&#039;&#039;&amp;lt;big&amp;gt;Arrow&amp;lt;/big&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Penzias&lt;br /&gt;
| rowspan=&amp;quot;10&amp;quot; |Advanced&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Hybrid Cluster&lt;br /&gt;
|Sequential &amp;amp; Parallel jobs w/wo GPU&lt;br /&gt;
|66&lt;br /&gt;
|16 &lt;br /&gt;
|2&lt;br /&gt;
|64 GB&lt;br /&gt;
|4 GB&lt;br /&gt;
|SB, EP 2.20 GHz&lt;br /&gt;
|K20m GPU, PCIe v2&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Sequential &amp;amp; Parallel jobs&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |1&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|1500 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|36&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|32 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Appel&lt;br /&gt;
|NUMA&lt;br /&gt;
|Massive Parallel, sequential, OpenMP&lt;br /&gt;
|1&lt;br /&gt;
|384&lt;br /&gt;
| -&lt;br /&gt;
|11 TB&lt;br /&gt;
|28 GB&lt;br /&gt;
|IB, 3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Cryo&lt;br /&gt;
|SMP&lt;br /&gt;
|Sequential and Parallel jobs, with GPU&lt;br /&gt;
|1&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|1500 GB&lt;br /&gt;
|37 GB&lt;br /&gt;
|SL, 2.40 GHz&lt;br /&gt;
|V100 (32GB) GPU, SXM&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Blue Moon&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Hybrid Cluster&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Sequential and Parallel jobs w/wo GPU&lt;br /&gt;
|24&lt;br /&gt;
|32&lt;br /&gt;
| -&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |192 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |6 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SL, 2.10 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|32 &lt;br /&gt;
|2&lt;br /&gt;
|V100(16GB) GPU, PCIe&lt;br /&gt;
|-&lt;br /&gt;
|Karle&lt;br /&gt;
|SMP&lt;br /&gt;
|Visualization, MATLAB/Mathematica&lt;br /&gt;
|1&lt;br /&gt;
|36*&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Chizen&lt;br /&gt;
|Gateway&lt;br /&gt;
|No jobs allowed&lt;br /&gt;
| colspan=&amp;quot;7&amp;quot; | -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CFD&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot; |Parallel, Seq, OpenMP &lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|768 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 4.8 GHz&lt;br /&gt;
|A40, PCIe, v4 &lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |PHYS&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|640 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4 GHz&lt;br /&gt;
|L40, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CHEM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.0 GHz&lt;br /&gt;
|A100/40, SXM&lt;br /&gt;
|-&lt;br /&gt;
|ASRC&lt;br /&gt;
|Condo&lt;br /&gt;
|SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|}&lt;br /&gt;
Note: SB = Intel(R) Sandy Bridge, HL = Intel(R) Haswell, IB = Intel(R) Ivy Bridge, SL = Intel(R) Xeon(R) Gold, ER = AMD(R) EPYC ROME, EM = AMD(R) EPYC MILAN, EG = AMD(R) EPYC GENOA   &lt;br /&gt;
&lt;br /&gt;
== Recovery of  operational costs ==&lt;br /&gt;
CUNY-HPCC is a not-for-profit core research facility at CUNY. Our mission is to support all types of research that require advanced computational resources. CUNY-HPCC operations are not for profit and are NOT directly or indirectly sponsored by CUNY or the College of Staten Island (CSI). Consequently, CUNY-HPCC applies a cost recovery model recapturing only &#039;&#039;&#039;&amp;lt;u&amp;gt;operational costs, with no profit for HPCC&amp;lt;/u&amp;gt;&#039;&#039;&#039;. The recovered costs are calculated using actual documented operational expenses and are break-even for all CUNY users. The methodology follows the CUNY-RF-approved methodology used in other CUNY research facilities. The costs are reviewed, and consequently updated, twice a year. The cost recovery charging schema is based on the &#039;&#039;&#039;&amp;lt;u&amp;gt;unit-hour&amp;lt;/u&amp;gt;&#039;&#039;&#039;. A unit can be either a CPU unit or a GPU unit. The definitions of these are given in the table below:&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Definitions of unit-hour&lt;br /&gt;
!Type of resource&lt;br /&gt;
!Unit-hour &lt;br /&gt;
!For V100, A30, A40 or L40&lt;br /&gt;
!For A100&lt;br /&gt;
|-&lt;br /&gt;
|CPU unit &lt;br /&gt;
|1 cpu core/hour&lt;br /&gt;
| --&lt;br /&gt;
| --&lt;br /&gt;
|-&lt;br /&gt;
|GPU unit &lt;br /&gt;
|(4 cpu cores + 1 GPU thread )/hour&lt;br /&gt;
|4 cpu cores + 1 GPU&lt;br /&gt;
|4 cpu cores and 1/7 A100 &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== HPCC access plans  ===&lt;br /&gt;
a.     &#039;&#039;&#039;Minimum access (MAP):&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Minimum access is designed to provide wide support for research activities in any college, to promote collaboration between colleges, to help establish a new research project, and/or to serve as a testbed for new studies. MAP accounts operate under a strict fair share policy, so the actual waiting time for a job in a queue depends on the resources used by that account in previous cycles. In addition, all jobs have strict time limitations; therefore long jobs must use checkpoints.&lt;br /&gt;
&lt;br /&gt;
The MAP has 3 tiers:&lt;br /&gt;
&lt;br /&gt;
·     A: Basic tier fee is $5000 per year. It is designed to provide support for users from colleges with low level of research activities. The fee covers infrastructure expenses associated with 1-2 users from these colleges. &lt;br /&gt;
&lt;br /&gt;
·     B:  Medium tier fee is $15,000 per year. The fee covers infrastructure expenses of up to 12 users from these colleges. In addition, every account under medium tier gets free 11520 CPU hours and free 1440 GPU hours upon opening. &lt;br /&gt;
&lt;br /&gt;
·      C: Advanced tier is $25,000 per year. The fee covers infrastructure expenses for all users from these colleges. In addition every new account from this tier gets free 11520 CPU hours and free 1440 GPU hours upon opening.  &lt;br /&gt;
&lt;br /&gt;
MAP users are charged per CPU/GPU hour at the low rate of &#039;&#039;&#039;&amp;lt;u&amp;gt;$0.015 per CPU hour and $0.09 per GPU hour&amp;lt;/u&amp;gt;.&#039;&#039;&#039; The examples in the table below show the resulting hourly cost for several typical job configurations.   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for MAP users&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.015/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.24/hour&lt;br /&gt;
|-&lt;br /&gt;
| 4 cores + 1 GPU &lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.15/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.33/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2  &lt;br /&gt;
|$0.42/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2  &lt;br /&gt;
|$0.66/hour&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$1.32/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
b.     &#039;&#039;&#039;Computing on demand (CODP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The computing on demand plan (CODP) is open to all users from CUNY colleges that do not participate in the MAP plan but want to use the HPCC resources. CODP accounts operate under a strict fair share policy, so the actual waiting time for a job in a queue depends on the resources previously used. In addition, all jobs have time limitations, so long jobs must use checkpoints. Users in CODP are charged for the time (CPU and GPU) per hour. The current rates are &#039;&#039;&#039;$0.018 per CPU hour and $0.11 per GPU hour.&#039;&#039;&#039; Unlike MAP, new CODP accounts do not come with free time. Invoices are generated and sent to users (PIs only) at the end of each month. The examples in the following table explain the fee structure:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for CODP plan&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.018/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.288/hour&lt;br /&gt;
|-&lt;br /&gt;
|4 cores + 1 GPU&lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.293/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.334/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 1 GPU&lt;br /&gt;
|32&lt;br /&gt;
|1  &lt;br /&gt;
|$0.666/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU &lt;br /&gt;
|32&lt;br /&gt;
|2&lt;br /&gt;
|$0.756/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
c.  &#039;&#039;&#039;Leasing node(s) (LNP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The leasing node plan allows users to lease node(s) for the duration of the project. The minimum lease time is 30 days (one month), but leases of any length are possible. A 10% discount is given to users whose lease is longer than 90 days. Discounts cannot be combined. Unlike MAP and CODP, LNP users do not compete for resources and have full access to the leased resources 24/7.   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Lease node(s) fees for MAP users&lt;br /&gt;
|Job (MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/30 days&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$172.80&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$264.96&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2&lt;br /&gt;
|$302.40&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$475.20&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$760.00&lt;br /&gt;
|-&lt;br /&gt;
|64 cores + 8 GPU&lt;br /&gt;
|64&lt;br /&gt;
|8&lt;br /&gt;
|$950.40&lt;br /&gt;
|}&lt;br /&gt;
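As a rough cross-check (an informal sketch, not an official formula): for several configurations the 30-day fee corresponds to the hourly MAP rate multiplied by 720 hours; for example, 16 cores + 2 GPU at $0.42/hour gives 0.42 x 720 = $302.40 per 30 days.&lt;br /&gt;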
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Node lease fees for &amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt;non&amp;lt;/span&amp;gt;-MAP users&lt;br /&gt;
|Job (non-MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/month&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$249.82&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$497.64&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1&lt;br /&gt;
|$443.23&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$886.64&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40 &lt;br /&gt;
|8&lt;br /&gt;
|$1399.68&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
d.     &#039;&#039;&#039;Condo Ownership (COP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Condo describes a model in which user(s) own a node/server managed by HPCC. Only full-time faculty can own a condo node. Condo nodes are fully integrated into the HPCC infrastructure. The owners pay only HPCC’s infrastructure support operational fee, which covers only a proportional part of the licenses and materials needed for day-to-day operations. The fees are reviewed twice a year and are currently $0.003 per CPU hour and $0.02 per GPU hour. Condo owners can “borrow” (upon agreement, free of charge) any node(s) from the condo stack and can also lease (for a higher fee – see below) their own nodes to non-condo users. The minimum lease time is 30 days. The fees collected from non-condo users offset the payments of the owner.   &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Condo owners costs per year&lt;br /&gt;
|Type of condo node&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/year&lt;br /&gt;
|-&lt;br /&gt;
|Large hybrid SXM&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|$4518.92 &lt;br /&gt;
|-&lt;br /&gt;
|Small hybrid&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|$1540.54&lt;br /&gt;
|-&lt;br /&gt;
|Medium compute&lt;br /&gt;
|96&lt;br /&gt;
|0&lt;br /&gt;
|$2464.86&lt;br /&gt;
|-&lt;br /&gt;
|Large compute&lt;br /&gt;
|128&lt;br /&gt;
|0&lt;br /&gt;
|$3286.49&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Condo owners can rent their node(s) out to other non-condo users. The rental period is unlimited, with a minimum length of 30 days. The table below shows the payments that non-condo users make to the condo owners. These fees are accumulated in the owner’s account(s) and offset the owner’s obligations. A 10% discount is applied for leases longer than 90 days.    &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Type of nodes and lease fees for condo nodes&lt;br /&gt;
!Type of node&lt;br /&gt;
!Renters cost/month&lt;br /&gt;
!Long term (90+ days) rent cost/month&lt;br /&gt;
!CPU/node&lt;br /&gt;
!CPU type&lt;br /&gt;
!GPU/node &lt;br /&gt;
!GPU type&lt;br /&gt;
!GPU interface&lt;br /&gt;
|-&lt;br /&gt;
|Large Hybrid&lt;br /&gt;
|$602.52&lt;br /&gt;
|$564.86&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.2 GHz&lt;br /&gt;
|8&lt;br /&gt;
|A100/80&lt;br /&gt;
|SXM&lt;br /&gt;
|-&lt;br /&gt;
|Small Hybrid&lt;br /&gt;
|$205.41&lt;br /&gt;
|$192.57&lt;br /&gt;
|48&lt;br /&gt;
|EPYC, 2.8 GHz&lt;br /&gt;
|2&lt;br /&gt;
|A40, A30, L40&lt;br /&gt;
|PCIe v4&lt;br /&gt;
|-&lt;br /&gt;
|Medium Non GPU&lt;br /&gt;
|$328.65&lt;br /&gt;
|$308.11&lt;br /&gt;
|96&lt;br /&gt;
|EPYC, 4.11GHz&lt;br /&gt;
|0&lt;br /&gt;
|None&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|Large Non GPU&lt;br /&gt;
|$438.20&lt;br /&gt;
|$410.81&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.0 GHz&lt;br /&gt;
|0&lt;br /&gt;
|None &lt;br /&gt;
|NA&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Free time ===&lt;br /&gt;
Any new project &#039;&#039;&#039;from CUNY colleges/centers that participate in MAP-B or MAP-C&#039;&#039;&#039; is entitled to &#039;&#039;&#039;11,520 free CPU hours and 1,440 free GPU hours.&#039;&#039;&#039; Users under &#039;&#039;&#039;MAP-A&#039;&#039;&#039; are not entitled to free time. The free compute hours are intended to help establish a project and are therefore shared by all members of the project; they can be used either by the PI or by any number of the project&#039;s members. It is important to note that &amp;lt;u&amp;gt;free time is per project, not per user account, so any project can receive free time only once. External collaborators outside CUNY are not normally eligible for free time.&amp;lt;/u&amp;gt; Additional hours beyond the free time are charged at MAP plan rates. &#039;&#039;&#039;&amp;lt;u&amp;gt;Please contact the CUNY-HPCC director for further details.&amp;lt;/u&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
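For a rough sense of scale (a back-of-the-envelope sketch using the MAP rates above): 11,520 CPU hours correspond to 11,520 x $0.015 = $172.80 of CPU time (for example, 16 cores running continuously for 30 days), and 1,440 GPU hours correspond to 1,440 x $0.09 = $129.60 of GPU time (for example, 2 GPUs for 30 days).&lt;br /&gt;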
&lt;br /&gt;
== Support for research grants ==&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;u&amp;gt;All proposals dated January 1st, 2026 (&amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; 01/01/26 &amp;lt;/span&amp;gt;) or later&amp;lt;/u&amp;gt;&#039;&#039;&#039; that require computational resources &#039;&#039;&#039;&amp;lt;u&amp;gt;must include a budget for cost recovery fees at CUNY-HPCC.&amp;lt;/u&amp;gt;&#039;&#039;&#039; For a project, the PI can choose among the following options: &lt;br /&gt;
&lt;br /&gt;
* Lease node(s). This is a useful option for well-defined projects and for those with a large computational component requiring 100% availability of the computational resource. &lt;br /&gt;
* Use &amp;quot;on-demand&amp;quot; resources. This is a flexible option well suited to experimental projects or to exploring new areas of study. The drawback is that resources are shared among all users under the fair share policy, so immediate access to a resource cannot be guaranteed. &lt;br /&gt;
* Participate in the CONDO tier. This is the most beneficial option in terms of availability of resources and level of support. It best fits the focused research of a group or groups (e.g. materials science). &lt;br /&gt;
&lt;br /&gt;
In all cases the PI can use the appropriate rates listed above to establish a correct budget for the proposal. The PI should &#039;&#039;&#039;&amp;lt;u&amp;gt;contact the Director of CUNY-HPCC, Dr. Alexander Tzanov&amp;lt;/u&amp;gt;&#039;&#039;&#039; (alexander.tzanov@csi.cuny.edu) to discuss the project&#039;s computational requirements, including optimal and most economical computational workflows, suitable hardware, shared or owned resources, CUNY-HPCC support options, and any other matter concerning a correct and optimal computational budget for the proposal.    &lt;br /&gt;
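For illustration only (a sketch using the rates above, not an official quote): a proposal that needs a dedicated 16-core + 2-GPU node for one year under the MAP lease plan would budget roughly 12 x $302.40 = $3,628.80 per year, while the same configuration charged hourly at MAP rates (16 x $0.015 + 2 x $0.09 = $0.42/hour) comes to about 0.42 x 8,760 = $3,679.20 for a full year of continuous use.&lt;br /&gt;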
&lt;br /&gt;
== Partitions and jobs ==&lt;br /&gt;
The only way to submit job(s) to the HPCC servers is through the SLURM batch system. Any job, regardless of its type (interactive, batch, serial, parallel, etc.), must be submitted via SLURM. The latter allocates the requested resources on the proper server and starts the job(s) according to a predefined strict fair share policy. Computational resources (cpu-cores, memory, GPU) are organized in &#039;&#039;&#039;partitions&#039;&#039;&#039;. Users are granted permission to use one or another partition and the corresponding QOS key. The table below shows the partitions and their limitations (in progress).&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Partition&lt;br /&gt;
!Max cores/job&lt;br /&gt;
!Max jobs/user&lt;br /&gt;
!Total cores/group&lt;br /&gt;
!Time limits&lt;br /&gt;
!Tier&lt;br /&gt;
!&lt;br /&gt;
!GPU types&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|partnsf&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|K20m, V100/16, A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partchem&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A100/80, A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partcfd&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partsym&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partasrc&lt;br /&gt;
|48&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabD&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|V100/16,A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabN&lt;br /&gt;
|384&lt;br /&gt;
|50&lt;br /&gt;
|384&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|None&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partphys&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|L40&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;partnsf&#039;&#039;&#039; is the main partition, with resources assigned across all sub-servers. Users may submit sequential, thread-parallel, or distributed-parallel jobs with or without GPUs.&lt;br /&gt;
* &#039;&#039;&#039;partchem&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partphys&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partsym&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partasrc&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partmatlabD&#039;&#039;&#039; allows running MATLAB&#039;s Distributed Parallel Server across the main cluster.&lt;br /&gt;
* &#039;&#039;&#039;partmatlabN&#039;&#039;&#039; gives access to the large MATLAB node with 384 cores and 11 TB of shared memory. It is useful for running parallel MATLAB jobs with the Parallel Computing Toolbox.&lt;br /&gt;
* &#039;&#039;&#039;partdev&#039;&#039;&#039; is dedicated to development. All HPCC users have access to this partition, with assigned resources of one computational node with 16 cores, 64 GB of memory, and 2 GPUs (K20m). This partition has a time limit of 4 hours.&lt;br /&gt;
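To make the submission workflow concrete, a minimal SLURM batch script is sketched below. It is only an illustration: the partition, QOS key, paths and resource amounts are placeholders and must be replaced with the values assigned to your account (see the brief SLURM manual distributed with new accounts).&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --job-name=example_job      # descriptive job name&lt;br /&gt;
 #SBATCH --partition=partnsf         # partition from the table above&lt;br /&gt;
 #SBATCH --qos=your_qos_key          # QOS key assigned to your account (placeholder)&lt;br /&gt;
 #SBATCH --ntasks=16                 # 16 cpu cores&lt;br /&gt;
 #SBATCH --gres=gpu:1                # request one GPU; omit this line for cpu-only jobs&lt;br /&gt;
 #SBATCH --time=24:00:00             # wall time; must respect the partition time limit&lt;br /&gt;
 cd /scratch/userid/myrun            # run from scratch space, not from the home directory&lt;br /&gt;
 srun ./my_application               # launch the application under SLURM&lt;br /&gt;
Such a script would be submitted with &#039;&#039;sbatch jobscript.sh&#039;&#039; and monitored with &#039;&#039;squeue&#039;&#039;.&lt;br /&gt;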
&lt;br /&gt;
== Hours of Operation ==&lt;br /&gt;
In order to maximize the use of resources, HPCC applies a “rolling” maintenance scheme across all systems. When downtime is needed, HPCC will notify all users a week or more in advance (unless an emergency occurs). The fourth Tuesday morning of the month, from 8:00 AM to 12:00 PM, is normally reserved (but not always used) for scheduled maintenance; please plan accordingly. Unplanned maintenance to remedy system-related problems may be scheduled as needed outside the above-mentioned days. Reasonable attempts will be made to inform users running on the affected systems when these needs arise. Note that users are strongly encouraged to use checkpoints in their jobs.&lt;br /&gt;
&lt;br /&gt;
== User Support ==&lt;br /&gt;
Users are strongly encouraged to read this Wiki carefully before submitting ticket(s) for help. In particular, the sections on compiling and running parallel programs, and the section on the SLURM batch queueing system will give you the essential knowledge needed to use the CUNY HPCC systems.  We have strived to maintain the most uniform user applications environment possible across the Center&#039;s systems to ease the transfer of applications&lt;br /&gt;
and run scripts among them.  &lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY&lt;br /&gt;
community in parallel programming techniques, HPC computing architecture, and the essentials of using our&lt;br /&gt;
systems. Please follow our mailings on the subject and feel free to inquire about such courses. We regularly schedule training visits and classes at the various CUNY campuses. Please let us know if such a training visit is of interest. In the past, topics have included an overview of parallel programming, GPU programming and architecture, using the evolutionary biology software at the HPC Center, the SLURM queueing system at the CUNY HPC Center, mixed GPU-MPI and OpenMP programming, etc. Staff have also presented guest lectures in formal classes throughout the CUNY campuses.    &lt;br /&gt;
&lt;br /&gt;
If you have problems accessing your account and cannot login to the ticketing service, please send an email to:&lt;br /&gt;
&lt;br /&gt;
  [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu] &lt;br /&gt;
&lt;br /&gt;
== Warnings and modes of operation ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. hpchelp@csi.cuny.edu is for questions and account help &#039;&#039;&#039;only&#039;&#039;&#039; and does not accept tickets unless the ticketing system is not operational. For tickets, please use the ticketing system mentioned above. This ensures that the staff member with the most appropriate skill set and job-related responsibility will respond to your questions. During the business week you should expect a response within 48 hours, quite often even a same-day response. During the weekend you may not get any response. &lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;&#039;E-mails to hpchelp@csi.cuny.edu must have a valid CUNY e-mail as the reply address.&#039;&#039;&#039; Messages originating from public mail providers (Gmail, Hotmail, etc.) are filtered out.&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;&#039;Do not send questions to individual CUNY HPC Center staff members directly.&#039;&#039;&#039;  These will be returned to the sender with a polite request to submit a ticket or email the Helpline.  This applies to replies to initial questions as well.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff is focused on providing high quality support to its user community, but compared&lt;br /&gt;
to other HPC Centers of similar size &#039;&#039;&#039;our staff is extremely lean&#039;&#039;&#039;.  Please make full use of the tools that we have provided (especially&lt;br /&gt;
the Wiki), and feel free to offer suggestions for improved service.  We hope and expect your experience in using&lt;br /&gt;
our systems will be predictably good and productive.&lt;br /&gt;
&lt;br /&gt;
== User Manual ==&lt;br /&gt;
&lt;br /&gt;
The old version of the user manual provides PBS, not SLURM, batch scripts as examples. CUNY-HPCC currently uses the SLURM scheduler, so users must rely only on the updated brief SLURM manual distributed with new accounts, or ask CUNY-HPCC for a copy of it.&lt;/div&gt;</summary>
		<author><name>Alex</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=964</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=964"/>
		<updated>2026-03-09T16:23:58Z</updated>

		<summary type="html">&lt;p&gt;Alex: /* CUNY-HPCC services */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
[[Image:hpcc-panorama3.png]]&lt;br /&gt;
&lt;br /&gt;
== Mission of CUNY-HPCC ==&lt;br /&gt;
The City University of New York (CUNY) High Performance Computing Center (HPCC) is a core research and education facility for the University. It is located on the campus of the College of Staten Island, 2800 Victory Boulevard, Staten Island, New York 10314. The core mission of CUNY-HPCC is to advance education and to boost scientific research and discovery at the University by managing state-of-the-art computing infrastructure and by providing comprehensive research support services, including domain-specific expertise in many aspects of computationally intensive research. CUNY is a member of the Empire AI (EAI) consortium, so CUNY-HPCC also serves as a stepping stone for CUNY researchers toward EAI&#039;s advanced facilities.  &lt;br /&gt;
&lt;br /&gt;
== CUNY-HPCC services ==&lt;br /&gt;
CUNY-HPCC provides a professionally maintained, modern computational environment with diverse architectures, advanced storage, and fast interconnects. CUNY-HPCC:     &lt;br /&gt;
&lt;br /&gt;
:*Supports the scientific computing needs of CUNY faculty, their collaborators at other universities, their public and private sector partners, and CUNY students and research staff.&lt;br /&gt;
:*Provides state-of-the-art computing resources and comprehensive research support services, including expertise and full support for users with allocations on EMPIRE-AI. &lt;br /&gt;
:*Creates opportunities for the CUNY research community to develop new partnerships with the government and private sectors.&lt;br /&gt;
:*Leverages the HPC Center&#039;s capabilities to acquire additional research resources for CUNY faculty and graduate students in existing and major new programs. &lt;br /&gt;
&lt;br /&gt;
==Organization of systems and data storage (architecture)==&lt;br /&gt;
&lt;br /&gt;
All user and project data are kept on the Parallel File System Storage (PFSS). It holds the user home directories, which are mounted only on the login node(s) of all servers, and a dedicated partition called &#039;&#039;&#039;/scratch&#039;&#039;&#039; (see below). The main features of the &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition are: &#039;&#039;&#039;1.&#039;&#039;&#039; it is mounted on all computational nodes and on all login nodes; &#039;&#039;&#039;2.&#039;&#039;&#039; it is fast; &#039;&#039;&#039;3.&#039;&#039;&#039; it is temporary space and is &#039;&#039;&#039;not a home directory&#039;&#039;&#039; for accounts, nor can it be used for long-term data preservation. Users must use the &amp;quot;staging&amp;quot; procedure described below to ensure preservation of their data, codes, and parameter files. The figure below is a schematic of the environment.   &lt;br /&gt;
&lt;br /&gt;
Upon  registering with HPCC every user will get 2 directories:&lt;br /&gt;
&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – this is temporary workspace on the HPC systems. Currently scratch resides on the same file system as global/u.&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – space for “home directory”, i.e., storage space on the DSMS for program, scripts, and data&lt;br /&gt;
&lt;br /&gt;
:•	In some instances a user will also have use of disk space on the DSMS in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;projectid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; (IRods). &lt;br /&gt;
[[File:HPCC_structure.png|center|frameless|900x900px]]&lt;br /&gt;
The &#039;&#039;&#039;/global/u/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory has a quota (see below for details), while &#039;&#039;&#039;/scratch/&amp;lt;userid&amp;gt;&#039;&#039;&#039; does not. However, the &#039;&#039;&#039;/scratch&#039;&#039;&#039; space is cleaned up following the rules described below. There are no guarantees of any kind that files in &#039;&#039;&#039;/scratch&#039;&#039;&#039; will be preserved in the event of hardware failures or during cleanup. Access to all HPCC resources is provided through a bastion host called &#039;&#039;&#039;chizen&#039;&#039;&#039;. The Data Transfer Node, called &#039;&#039;&#039;Cea&#039;&#039;&#039;, allows file transfer from/to remote sites directly to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; or to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
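For illustration, a minimal sketch of staging data through the &#039;&#039;&#039;Cea&#039;&#039;&#039; data transfer node is shown below; the hostname &#039;&#039;cea.csi.cuny.edu&#039;&#039; is only an assumed example, so please use the address provided with your account.&lt;br /&gt;
 # copy an input file from your workstation to your scratch space (run on your workstation)&lt;br /&gt;
 scp input.dat userid@cea.csi.cuny.edu:/scratch/userid/&lt;br /&gt;
 # copy results back from your home area on the PFSS to the current local directory&lt;br /&gt;
 scp userid@cea.csi.cuny.edu:/global/u/userid/results.tar.gz .&lt;br /&gt;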
&lt;br /&gt;
==HPC systems==&lt;br /&gt;
&lt;br /&gt;
The HPC Center operates a variety of architectures in order to support complex and demanding workflows. All computational resources are united into a single hybrid cluster called Arrow. The latter deploys symmetric multiprocessor (SMP) nodes with and without GPUs, a distributed shared memory (NUMA) node, fat (large memory) nodes, and advanced SMP nodes with multiple GPUs. The number of GPUs per node varies between 2 and 8, as do the GPU interface and GPU family. Thus the basic GPU nodes hold two Tesla K20m cards (attached through a PCIe interface), while the most advanced ones support eight Ampere A100 GPUs connected via an SXM interface.   &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Overview of Computational architectures&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SMP&#039;&#039;&#039; servers have several processors (working under a single operating system) which &amp;quot;share everything&amp;quot;: all cpu-cores access a common memory block via a shared bus or data path. SMP servers support any combination of memory and cpu (up to the limits of the particular computer). SMP servers are commonly used to run sequential or thread-parallel (e.g. OpenMP) jobs, and they may or may not have GPUs.  &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;cluster&#039;&#039;&#039; is defined as a single system comprising a set of servers interconnected with a high performance network. Specific software coordinates programs on and/or across these servers in order to perform computationally intensive tasks. The most common cluster type consists of several identical SMP servers connected via a fast interconnect. Each SMP member of the cluster is called a &#039;&#039;&#039;node&#039;&#039;&#039;. All nodes run independent copies of the same operating system (OS). Some or all of the nodes may incorporate GPUs.  &lt;br /&gt;
&lt;br /&gt;
Hybrid clusters combine nodes of different architectures. For instance, the main CUNY-HPCC machine is a hybrid cluster called &#039;&#039;&#039;Arrow&#039;&#039;&#039;. Sixty-two (62) of its nodes are identical GPU-enabled SMP servers, each with 2 K20m GPUs; 3 are SMP servers with extended memory (fat nodes); one node is a distributed shared memory node (NUMA, see below); and 2 are fat SMP servers especially designed to support 8 NVIDIA GPUs per node, connected via the SXM interface. In addition, HPCC operates the cluster &#039;&#039;&#039;Herbert&#039;&#039;&#039;, dedicated only to education. &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;distributed shared memory&#039;&#039;&#039; computer is a tightly coupled server in which the memory is physically distributed but logically unified as a single block. The system resembles an SMP, but the possible number of cpu cores and amount of memory are far beyond the limitations of an SMP. Because the memory is distributed, access times across the address space are non-uniform; thus this architecture is called Non-Uniform Memory Access (NUMA). Similarly to SMP, &#039;&#039;&#039;NUMA&#039;&#039;&#039; systems are typically used for applications such as data mining and decision support systems in which processing can be parceled out to a number of processors that collectively work on common data. HPCC operates the &#039;&#039;&#039;NUMA&#039;&#039;&#039; node of Arrow, named &#039;&#039;&#039;Appel&#039;&#039;&#039;. This node does not have GPUs. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039; Infrastructure systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
o	The Master Head Node (&#039;&#039;&#039;MHN/Arrow&#039;&#039;&#039;) is a redundant login node from which all jobs on all servers start. This server is not directly accessible from outside the CSI campus. Note that the main server and its login nodes share the same name, Arrow; thus users can access the Arrow login nodes using either the name Arrow or MHN.  &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Chizen&#039;&#039;&#039; is a redundant gateway server which provides access to the protected HPCC domain.&lt;br /&gt;
&lt;br /&gt;
o	 &#039;&#039;&#039;Cea&#039;&#039;&#039; is a file transfer node that allows transfer of files between users’ computers and the /scratch space or /global/u/&amp;lt;userid&amp;gt;. &#039;&#039;&#039;Cea&#039;&#039;&#039; is accessible directly (not only via &#039;&#039;&#039;Chizen&#039;&#039;&#039;), but allows only a limited set of shell commands.  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Table 1&#039;&#039;&#039; below provides a quick summary of the attributes of each of the sub-clusters of the main HPC Center cluster, called Arrow.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Master Head Node&lt;br /&gt;
!Sub System&lt;br /&gt;
!Tier&lt;br /&gt;
!Type&lt;br /&gt;
!Type of Jobs&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores&lt;br /&gt;
!GPUs&lt;br /&gt;
!Mem/node&lt;br /&gt;
!Mem/core&lt;br /&gt;
!Chip Type&lt;br /&gt;
!GPU Type and Interface&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;17&amp;quot; |&#039;&#039;&#039;&amp;lt;big&amp;gt;Arrow&amp;lt;/big&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Penzias&lt;br /&gt;
| rowspan=&amp;quot;10&amp;quot; |Advanced&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Hybrid Cluster&lt;br /&gt;
|Sequential &amp;amp; Parallel jobs w/wo GPU&lt;br /&gt;
|66&lt;br /&gt;
|16 &lt;br /&gt;
|2&lt;br /&gt;
|64 GB&lt;br /&gt;
|4 GB&lt;br /&gt;
|SB, EP 2.20 GHz&lt;br /&gt;
|K20m GPU, PCIe v2&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Sequential &amp;amp; Parallel jobs&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |1&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|1500 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|36&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|32 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Appel&lt;br /&gt;
|NUMA&lt;br /&gt;
|Massive Parallel, sequential, OpenMP&lt;br /&gt;
|1&lt;br /&gt;
|384&lt;br /&gt;
| -&lt;br /&gt;
|11 TB&lt;br /&gt;
|28 GB&lt;br /&gt;
|IB, 3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Cryo&lt;br /&gt;
|SMP&lt;br /&gt;
|Sequential and Parallel jobs, with GPU&lt;br /&gt;
|1&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|1500 GB&lt;br /&gt;
|37 GB&lt;br /&gt;
|SL, 2.40 GHz&lt;br /&gt;
|V100 (32GB) GPU, SXM&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Blue Moon&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Hybrid Cluster&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Sequential and Parallel jobs w/wo GPU&lt;br /&gt;
|24&lt;br /&gt;
|32&lt;br /&gt;
| -&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |192 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |6 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SL, 2.10 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|32 &lt;br /&gt;
|2&lt;br /&gt;
|V100(16GB) GPU, PCIe&lt;br /&gt;
|-&lt;br /&gt;
|Karle&lt;br /&gt;
|SMP&lt;br /&gt;
|Visualization, MATLAB/Mathematica&lt;br /&gt;
|1&lt;br /&gt;
|36*&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Chizen&lt;br /&gt;
|Gateway&lt;br /&gt;
|No jobs allowed&lt;br /&gt;
| colspan=&amp;quot;7&amp;quot; | -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CFD&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot; |Parallel, Seq, OpenMP &lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|768 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 4.8 GHz&lt;br /&gt;
|A40, PCIe, v4 &lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |PHYS&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|640 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4 GHz&lt;br /&gt;
|L40, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CHEM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.0 GHz&lt;br /&gt;
|A100/40, SXM&lt;br /&gt;
|-&lt;br /&gt;
|ASRC&lt;br /&gt;
|Condo&lt;br /&gt;
|SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|}&lt;br /&gt;
Note: SB = Intel(R) Sandy Bridge, HL = Intel (R) Haswell, IB = Intel (R) Ivy Bridge, SL = Intel (R) Xeon(R) Gold, ER  = AMD(R) EPYC ROMA, EM = AMD(R) EPYC MILAN, EG = AMD (R) EPYC GENOA   &lt;br /&gt;
&lt;br /&gt;
== Recovery of  operational costs ==&lt;br /&gt;
CUNY-HPCC is a not-for-profit core research facility at CUNY. Our mission is to support all types of research that require advanced computational resources. CUNY-HPCC operations are not for profit and are NOT directly or indirectly sponsored by CUNY or the College of Staten Island (CSI). Consequently, CUNY-HPCC applies a cost recovery model that recaptures only &#039;&#039;&#039;&amp;lt;u&amp;gt;operational costs, with no profit for HPCC&amp;lt;/u&amp;gt;&#039;&#039;&#039;. The recovered costs are calculated from actual documented operational expenses and are break-even for all CUNY users. The methodology follows the CUNY-RF approved methodology used in other CUNY research facilities. The costs are reviewed and, if necessary, updated twice a year. The cost recovery charging scheme is based on the &#039;&#039;&#039;&amp;lt;u&amp;gt;unit-hour&amp;lt;/u&amp;gt;&#039;&#039;&#039;. A unit can be either a CPU unit or a GPU unit. The definitions are given in the table below:&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Definitions of unit-hour&lt;br /&gt;
!Type of resource&lt;br /&gt;
!Unit-hour &lt;br /&gt;
!For V100, A30, A40 or L40&lt;br /&gt;
!For A100&lt;br /&gt;
|-&lt;br /&gt;
|CPU unit &lt;br /&gt;
|1 cpu core/hour&lt;br /&gt;
| --&lt;br /&gt;
| --&lt;br /&gt;
|-&lt;br /&gt;
|GPU unit &lt;br /&gt;
|(4 cpu cores + 1 GPU thread )/hour&lt;br /&gt;
|4 cpu cores + 1 GPU&lt;br /&gt;
|4 cpu cores and 1/7 A100 &lt;br /&gt;
|}&lt;br /&gt;
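For example (an informal reading of the definitions above): a job that runs for 10 hours on 4 cpu cores plus one A40 consumes 10 GPU unit-hours, while 10 hours on a single cpu core with no GPU consumes 10 CPU unit-hours; per the table, A100 usage is accounted in 1/7-card slices.&lt;br /&gt;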
&lt;br /&gt;
=== HPCC access plans  ===&lt;br /&gt;
a.     &#039;&#039;&#039;Minimum access (MAP):&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Minimum access is designed to provide wide support for research activities in any college, to promote collaboration between colleges, to help establish a new research project, and/or to serve as a testbed for new studies. MAP accounts operate under a strict fair share policy, so the actual waiting time for a job in a queue depends on the resources used by that account in previous cycles. In addition, all jobs have strict time limits; therefore long jobs must use checkpoints.&lt;br /&gt;
&lt;br /&gt;
The MAP has 3 tiers:&lt;br /&gt;
&lt;br /&gt;
·     A: The Basic tier fee is $5,000 per year. It is designed to support users from colleges with a low level of research activity. The fee covers the infrastructure expenses associated with 1-2 users from these colleges. &lt;br /&gt;
&lt;br /&gt;
·     B: The Medium tier fee is $15,000 per year. The fee covers the infrastructure expenses of up to 12 users from these colleges. In addition, every account under the Medium tier gets 11,520 free CPU hours and 1,440 free GPU hours upon opening. &lt;br /&gt;
&lt;br /&gt;
·     C: The Advanced tier fee is $25,000 per year. The fee covers the infrastructure expenses for all users from these colleges. In addition, every new account from this tier gets 11,520 free CPU hours and 1,440 free GPU hours upon opening. &lt;br /&gt;
&lt;br /&gt;
MAP users are charged per CPU/GPU hour at the low rate of &#039;&#039;&#039;&amp;lt;u&amp;gt;$0.015 per cpu hour and $0.09 per GPU hour&amp;lt;/u&amp;gt;.&#039;&#039;&#039;   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for MAP users&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.015/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.24/hour&lt;br /&gt;
|-&lt;br /&gt;
| 4 cores + 1 GPU &lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.15/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.33/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2  &lt;br /&gt;
|$0.42/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2  &lt;br /&gt;
|$0.66/hour&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$1.32/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
b.     &#039;&#039;&#039;Computing on demand (CODP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The computing on demand plan (CODP) is open to users from all CUNY colleges who do not participate in the MAP plan but want to use the HPCC resources. CODP accounts operate under a strict fair share policy, so the actual waiting time for a job in a queue depends on the resources previously used. In addition, all jobs have time limits, so long jobs must use checkpoints. Users in CODP are charged for the time (CPU and GPU) per hour. The current rates are &#039;&#039;&#039;$0.018 per cpu hour and $0.11 per GPU hour.&#039;&#039;&#039; Unlike MAP, new CODP accounts do not come with free time. Invoices are generated and sent to users (PI only) at the end of each month. The examples in the following table explain the fee structure:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for CODP plan&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.018/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.288/hour&lt;br /&gt;
|-&lt;br /&gt;
|4 cores + 1 GPU&lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.293/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.334/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 1 GPU&lt;br /&gt;
|32&lt;br /&gt;
|1  &lt;br /&gt;
|$0.666/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU &lt;br /&gt;
|32&lt;br /&gt;
|2&lt;br /&gt;
|$0.756/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
c.  &#039;&#039;&#039;Leasing node(s) (LNP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The leasing node plan allows users to lease node(s) for the duration of the project. The minimum lease time is 30 days (one month), but leases of any length are possible. A 10% discount is given to users whose lease is longer than 90 days. Discounts cannot be combined. Unlike MAP and CODP, LNP users do not compete for resources and have full access to the leased resources 24/7.   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Lease node(s) fees for MAP users&lt;br /&gt;
|Job (MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/30 days&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$172.80&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$264.96&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2&lt;br /&gt;
|$302.40&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$475.20&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$760.00&lt;br /&gt;
|-&lt;br /&gt;
|64 cores + 8 GPU&lt;br /&gt;
|64&lt;br /&gt;
|8&lt;br /&gt;
|$950.40&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Node lease fees for &amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt;non&amp;lt;/span&amp;gt;-MAP users&lt;br /&gt;
|Job (non-MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/month&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$249.82&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$497.64&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1&lt;br /&gt;
|$443.23&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$886.64&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40 &lt;br /&gt;
|8&lt;br /&gt;
|$1399.68&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
d.     &#039;&#039;&#039;Condo Ownership (COP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Condo describes a model in which user(s) own a node/server managed by HPCC. Only full-time faculty can own a condo node. Condo nodes are fully integrated into the HPCC infrastructure. The owners pay only HPCC’s infrastructure support operational fee, which covers only a proportional part of the licenses and materials needed for day-to-day operations. The fees are reviewed twice a year and are currently $0.003 per CPU hour and $0.02 per GPU hour. Condo owners can “borrow” (upon agreement, free of charge) any node(s) from the condo stack and can also lease (for a higher fee – see below) their own nodes to non-condo users. The minimum lease time is 30 days. The fees collected from non-condo users offset the payments of the owner.   &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Condo owners costs per year&lt;br /&gt;
|Type of condo node&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/year&lt;br /&gt;
|-&lt;br /&gt;
|Large hybrid SXM&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|$4518.92 &lt;br /&gt;
|-&lt;br /&gt;
|Small hybrid&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|$1540.54&lt;br /&gt;
|-&lt;br /&gt;
|Medium compute&lt;br /&gt;
|96&lt;br /&gt;
|0&lt;br /&gt;
|$2464.86&lt;br /&gt;
|-&lt;br /&gt;
|Large compute&lt;br /&gt;
|128&lt;br /&gt;
|0&lt;br /&gt;
|$3286.49&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Condo owners can rent their node(s) out to other non-condo users. The rental period is unlimited, with a minimum length of 30 days. The table below shows the payments that non-condo users make to the condo owners. These fees are accumulated in the owner’s account(s) and offset the owner’s obligations. A 10% discount is applied for leases longer than 90 days.    &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Type of nodes and lease fees for condo nodes&lt;br /&gt;
!Type of node&lt;br /&gt;
!Renters cost/month&lt;br /&gt;
!Long term (90+ days) rent cost/month&lt;br /&gt;
!CPU/node&lt;br /&gt;
!CPU type&lt;br /&gt;
!GPU/node &lt;br /&gt;
!GPU type&lt;br /&gt;
!GPU interface&lt;br /&gt;
|-&lt;br /&gt;
|Large Hybrid&lt;br /&gt;
|$602.52&lt;br /&gt;
|$564.86&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.2 GHz&lt;br /&gt;
|8&lt;br /&gt;
|A100/80&lt;br /&gt;
|SXM&lt;br /&gt;
|-&lt;br /&gt;
|Small Hybrid&lt;br /&gt;
|$205.41&lt;br /&gt;
|$192.57&lt;br /&gt;
|48&lt;br /&gt;
|EPYC, 2.8 GHz&lt;br /&gt;
|2&lt;br /&gt;
|A40, A30, L40&lt;br /&gt;
|PCIe v4&lt;br /&gt;
|-&lt;br /&gt;
|Medium Non GPU&lt;br /&gt;
|$328.65&lt;br /&gt;
|$308.11&lt;br /&gt;
|96&lt;br /&gt;
|EPYC, 4.11GHz&lt;br /&gt;
|0&lt;br /&gt;
|None&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|Large Non GPU&lt;br /&gt;
|$438.20&lt;br /&gt;
|$410.81&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.0 GHz&lt;br /&gt;
|0&lt;br /&gt;
|None &lt;br /&gt;
|NA&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Free time ===&lt;br /&gt;
Any new project &#039;&#039;&#039;from CUNY colleges/centers that participate in MAP-B or MAP-C&#039;&#039;&#039; is entitled to &#039;&#039;&#039;11,520 free CPU hours and 1,440 free GPU hours.&#039;&#039;&#039; Users under &#039;&#039;&#039;MAP-A&#039;&#039;&#039; are not entitled to free time. The free compute hours are intended to help establish a project and are therefore shared by all members of the project; they can be used either by the PI or by any number of the project&#039;s members. It is important to note that &amp;lt;u&amp;gt;free time is per project, not per user account, so any project can receive free time only once. External collaborators outside CUNY are not normally eligible for free time.&amp;lt;/u&amp;gt; Additional hours beyond the free time are charged at MAP plan rates. &#039;&#039;&#039;&amp;lt;u&amp;gt;Please contact the CUNY-HPCC director for further details.&amp;lt;/u&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Support for research grants ==&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;u&amp;gt;All proposals dated January 1st, 2026 (&amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; 01/01/26 &amp;lt;/span&amp;gt;) or later&amp;lt;/u&amp;gt;&#039;&#039;&#039; that require computational resources &#039;&#039;&#039;&amp;lt;u&amp;gt;must include a budget for cost recovery fees at CUNY-HPCC.&amp;lt;/u&amp;gt;&#039;&#039;&#039; For a project, the PI can choose among the following options: &lt;br /&gt;
&lt;br /&gt;
* Lease node(s). This is a useful option for well-defined projects and for those with a large computational component requiring 100% availability of the computational resource. &lt;br /&gt;
* Use &amp;quot;on-demand&amp;quot; resources. This is a flexible option well suited to experimental projects or to exploring new areas of study. The drawback is that resources are shared among all users under the fair share policy, so immediate access to a resource cannot be guaranteed. &lt;br /&gt;
* Participate in the CONDO tier. This is the most beneficial option in terms of availability of resources and level of support. It best fits the focused research of a group or groups (e.g. materials science). &lt;br /&gt;
&lt;br /&gt;
In all cases the PI can use the appropriate rates listed above to establish a correct budget for the proposal. The PI should &#039;&#039;&#039;&amp;lt;u&amp;gt;contact the Director of CUNY-HPCC, Dr. Alexander Tzanov&amp;lt;/u&amp;gt;&#039;&#039;&#039; (alexander.tzanov@csi.cuny.edu) to discuss the project&#039;s computational requirements, including optimal and most economical computational workflows, suitable hardware, shared or owned resources, CUNY-HPCC support options, and any other matter concerning a correct and optimal computational budget for the proposal.    &lt;br /&gt;
&lt;br /&gt;
== Partitions and jobs ==&lt;br /&gt;
The only way to submit job(s) to the HPCC servers is through the SLURM batch system. Any job, regardless of its type (interactive, batch, serial, parallel, etc.), must be submitted via SLURM. The latter allocates the requested resources on the proper server and starts the job(s) according to a predefined strict fair share policy. Computational resources (cpu-cores, memory, GPU) are organized in &#039;&#039;&#039;partitions&#039;&#039;&#039;. Users are granted permission to use one or another partition and the corresponding QOS key. The table below shows the partitions and their limitations (in progress).&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Partition&lt;br /&gt;
!Max cores/job&lt;br /&gt;
!Max jobs/user&lt;br /&gt;
!Total cores/group&lt;br /&gt;
!Time limits&lt;br /&gt;
!Tier&lt;br /&gt;
!&lt;br /&gt;
!GPU types&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|partnsf&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|K20m, V100/16, A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partchem&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A100/80, A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partcfd&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partsym&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partasrc&lt;br /&gt;
|48&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabD&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|V100/16,A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabN&lt;br /&gt;
|384&lt;br /&gt;
|50&lt;br /&gt;
|384&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|None&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partphys&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|L40&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;partnsf&#039;&#039;&#039; is the main partition, with resources assigned across all sub-servers. Users may submit sequential, thread-parallel, or distributed-parallel jobs with or without GPUs.&lt;br /&gt;
* &#039;&#039;&#039;partchem&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partphys&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partsym&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partasrc&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partmatlabD&#039;&#039;&#039; allows running MATLAB&#039;s Distributed Parallel Server across the main cluster.&lt;br /&gt;
* &#039;&#039;&#039;partmatlabN&#039;&#039;&#039; gives access to the large MATLAB node with 384 cores and 11 TB of shared memory. It is useful for running parallel MATLAB jobs with the Parallel Computing Toolbox.&lt;br /&gt;
* &#039;&#039;&#039;partdev&#039;&#039;&#039; is dedicated to development. All HPCC users have access to this partition, with assigned resources of one computational node with 16 cores, 64 GB of memory, and 2 GPUs (K20m). This partition has a time limit of 4 hours.&lt;br /&gt;
&lt;br /&gt;
== Hours of Operation ==&lt;br /&gt;
In order to maximize the use of resources, HPCC applies a “rolling” maintenance scheme across all systems. When downtime is needed, HPCC will notify all users a week or more in advance (unless an emergency occurs). The fourth Tuesday morning of the month, from 8:00 AM to 12:00 PM, is normally reserved (but not always used) for scheduled maintenance; please plan accordingly. Unplanned maintenance to remedy system-related problems may be scheduled as needed outside the above-mentioned days. Reasonable attempts will be made to inform users running on the affected systems when these needs arise. Note that users are strongly encouraged to use checkpoints in their jobs.&lt;br /&gt;
&lt;br /&gt;
== User Support ==&lt;br /&gt;
Users are strongly encouraged to read this Wiki carefully before submitting ticket(s) for help. In particular, the sections on compiling and running parallel programs, and the section on the SLURM batch queueing system will give you the essential knowledge needed to use the CUNY HPCC systems.  We have strived to maintain the most uniform user applications environment possible across the Center&#039;s systems to ease the transfer of applications&lt;br /&gt;
and run scripts among them.  &lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY&lt;br /&gt;
community in parallel programming techniques, HPC computing architecture, and the essentials of using our&lt;br /&gt;
systems. Please follow our mailings on the subject and feel free to inquire about such courses. We regularly schedule training visits and classes at the various CUNY campuses. Please let us know if such a training visit is of interest. In the past, topics have included an overview of parallel programming, GPU programming and architecture, using the evolutionary biology software at the HPC Center, the SLURM queueing system at the CUNY HPC Center, mixed GPU-MPI and OpenMP programming, etc. Staff have also presented guest lectures in formal classes throughout the CUNY campuses.    &lt;br /&gt;
&lt;br /&gt;
If you have problems accessing your account and cannot login to the ticketing service, please send an email to:&lt;br /&gt;
&lt;br /&gt;
  [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu] &lt;br /&gt;
&lt;br /&gt;
== Warnings and modes of operation ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. hpchelp@csi.cuny.edu is for questions and account help &#039;&#039;&#039;only&#039;&#039;&#039; and does not accept tickets unless the ticketing system is not operational. For tickets, please use the ticketing system mentioned above. This ensures that the staff member with the most appropriate skill set and job-related responsibility will respond to your questions. During the business week you should expect a response within 48 hours, quite often even a same-day response. During the weekend you may not get any response. &lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;&#039;E-mails to hpchelp@csi.cuny.edu must have a valid CUNY e-mail as the reply address.&#039;&#039;&#039; Messages originating from public mail providers (Gmail, Hotmail, etc.) are filtered out.&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;&#039;Do not send questions to individual CUNY HPC Center staff members directly.&#039;&#039;&#039;  These will be returned to the sender with a polite request to submit a ticket or email the Helpline.  This applies to replies to initial questions as well.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff is focused on providing high quality support to its user community, but compared&lt;br /&gt;
to other HPC Centers of similar size &#039;&#039;&#039;our staff is extremely lean&#039;&#039;&#039;.  Please make full use of the tools that we have provided (especially&lt;br /&gt;
the Wiki), and feel free to offer suggestions for improved service.  We hope and expect your experience in using&lt;br /&gt;
our systems will be predictably good and productive.&lt;br /&gt;
&lt;br /&gt;
== User Manual ==&lt;br /&gt;
&lt;br /&gt;
The old version of the user manual provides PBS, not SLURM, batch scripts as examples. CUNY-HPCC currently uses the SLURM scheduler, so users must rely only on the updated brief SLURM manual distributed with new accounts, or ask CUNY-HPCC for a copy of it.&lt;/div&gt;</summary>
		<author><name>Alex</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=963</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=963"/>
		<updated>2026-03-09T16:23:32Z</updated>

		<summary type="html">&lt;p&gt;Alex: /* Mission of CUNY-HPCC */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
[[Image:hpcc-panorama3.png]]&lt;br /&gt;
&lt;br /&gt;
== Mission of CUNY-HPCC ==&lt;br /&gt;
The City University of New York (CUNY) High Performance Computing Center (HPCC) is a core research and education facility for the University. It is located on the campus of the College of Staten Island, 2800 Victory Boulevard, Staten Island, New York 10314. The core mission of CUNY-HPCC is to advance education and to boost scientific research and discovery at the University by managing state-of-the-art computing infrastructure and by providing comprehensive research support services, including domain-specific expertise in many aspects of computationally intensive research. CUNY is a member of the Empire AI (EAI) consortium, so CUNY-HPCC also serves as a stepping stone for CUNY researchers toward EAI&#039;s advanced facilities.  &lt;br /&gt;
&lt;br /&gt;
== CUNY-HPCC services ==&lt;br /&gt;
CUNY-HPCC provides a professionally maintained, modern computational environment with diverse architectures, advanced storage, and fast interconnects.    &lt;br /&gt;
&lt;br /&gt;
:*Supports the scientific computing needs of CUNY faculty, their collaborators at other universities, their public and private sector partners, and CUNY students and research staff.&lt;br /&gt;
:*Provides state-of-the-art computing resources and comprehensive research support services, including expertise and full support for users with allocations on EMPIRE-AI. &lt;br /&gt;
:*Creates opportunities for the CUNY research community to develop new partnerships with the government and private sectors.&lt;br /&gt;
:*Leverages the HPC Center&#039;s capabilities to acquire additional research resources for CUNY faculty and graduate students in existing and major new programs. &lt;br /&gt;
&lt;br /&gt;
==Organization of systems and data storage (architecture)==&lt;br /&gt;
&lt;br /&gt;
All user and project data are kept on the Parallel File System Storage (PFSS). It holds the user home directories, which are mounted only on the login node(s) of all servers, and a dedicated partition called &#039;&#039;&#039;/scratch&#039;&#039;&#039; (see below). The main features of the &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition are: &#039;&#039;&#039;1.&#039;&#039;&#039; it is mounted on all computational nodes and on all login nodes; &#039;&#039;&#039;2.&#039;&#039;&#039; it is fast; &#039;&#039;&#039;3.&#039;&#039;&#039; it is temporary space and is &#039;&#039;&#039;not a home directory&#039;&#039;&#039; for accounts, nor can it be used for long-term data preservation. Users must use the &amp;quot;staging&amp;quot; procedure described below to ensure preservation of their data, codes, and parameter files. The figure below is a schematic of the environment.   &lt;br /&gt;
&lt;br /&gt;
Upon  registering with HPCC every user will get 2 directories:&lt;br /&gt;
&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – this is temporary workspace on the HPC systems. Currently scratch resides on the same file system as global/u.&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – space for “home directory”, i.e., storage space on the DSMS for program, scripts, and data&lt;br /&gt;
&lt;br /&gt;
:•	In some instances a user will also have use of disk space on the DSMS in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;projectid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; (IRods). &lt;br /&gt;
[[File:HPCC_structure.png|center|frameless|900x900px]]&lt;br /&gt;
The &#039;&#039;&#039;/global/u/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory has a quota (see below for details), while &#039;&#039;&#039;/scratch/&amp;lt;userid&amp;gt;&#039;&#039;&#039; does not. However, the &#039;&#039;&#039;/scratch&#039;&#039;&#039; space is cleaned up following the rules described below, and there are no guarantees of any kind that files in &#039;&#039;&#039;/scratch&#039;&#039;&#039; will survive hardware crashes or clean-up. Access to all HPCC resources is provided through a bastion host called &#039;&#039;&#039;Chizen&#039;&#039;&#039;. The Data Transfer Node, called &#039;&#039;&#039;Cea&#039;&#039;&#039;, allows file transfer from/to remote sites directly to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; or to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;.&lt;br /&gt;
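&lt;br /&gt;
For example, a typical staging session might look like the sketch below. It is only an illustration: the user name, project directory and file names are placeholders, and &amp;quot;cea&amp;quot; stands for the address of the Data Transfer Node supplied with your account.&lt;br /&gt;
&lt;pre&gt;
# 1. From your own computer, push the input data through Cea into your scratch area:
scp input.dat myuserid@cea:/scratch/myuserid/project1/

# 2. On a login node, after the job has finished, copy the results from /scratch
#    to /global/u so they survive the periodic scratch clean-up:
cp -r /scratch/$USER/project1/results /global/u/$USER/project1/

# 3. From your own computer, pull the preserved results back through Cea:
scp -r myuserid@cea:/global/u/myuserid/project1/results ./
&lt;/pre&gt;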
&lt;br /&gt;
==HPC systems==&lt;br /&gt;
&lt;br /&gt;
The HPC Center operates a variety of architectures in order to support complex and demanding workflows. All of these computational resources are united into a single hybrid cluster called Arrow. The cluster deploys symmetric multiprocessor (SMP) nodes with and without GPUs, a distributed shared memory (NUMA) node, fat (large-memory) nodes, and advanced SMP nodes with multiple GPUs. The number of GPUs per node varies between 2 and 8, as do the GPU family and the GPU interface. For example, the basic GPU nodes hold two Tesla K20m cards (plugged in through a PCIe interface), while the most advanced ones carry eight Ampere A100 GPUs connected via the SXM interface.   &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Overview of Computational architectures&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SMP&#039;&#039;&#039; servers have several processors (working under a single operating system) which &amp;quot;share everything&amp;quot;: all CPU cores access a common memory block via a shared bus or data path. SMP servers support any combination of memory vs. CPU (up to the limits of the particular computer). They are commonly used to run sequential or thread-parallel (e.g. OpenMP) jobs, and they may or may not have GPUs.  &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;cluster&#039;&#039;&#039; is a single system comprising a set of servers interconnected by a high-performance network. Specific software coordinates programs on and across these servers in order to perform computationally intensive tasks. The most common cluster type consists of several identical SMP servers connected via a fast interconnect. Each SMP member of the cluster is called a &#039;&#039;&#039;node&#039;&#039;&#039;. All nodes run independent copies of the same operating system (OS). Some or all of the nodes may incorporate GPUs.  &lt;br /&gt;
&lt;br /&gt;
Hybrid clusters combine nodes of different architectures. For instance, the main CUNY-HPCC machine is a hybrid cluster called &#039;&#039;&#039;Arrow&#039;&#039;&#039;. Sixty-two (62) of its nodes are identical GPU-enabled SMP servers, each with 2 x K20m GPUs; 3 are SMP nodes with extended memory (fat nodes); one is a distributed shared memory node (NUMA, see below); and 2 are fat SMP servers especially designed to support 8 NVIDIA GPUs per node, connected via the SXM interface. In addition, the HPCC operates the cluster &#039;&#039;&#039;Herbert&#039;&#039;&#039;, dedicated only to education. &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;distributed shared memory&#039;&#039;&#039; computer is a tightly coupled server in which the memory is physically distributed but logically unified into a single block. The system resembles an SMP, but the possible number of CPU cores and amount of memory are far beyond the limits of an SMP. Because the memory is distributed, access times across the address space are non-uniform; hence this architecture is called Non-Uniform Memory Access (NUMA). Similarly to SMP, &#039;&#039;&#039;NUMA&#039;&#039;&#039; systems are typically used for applications such as data mining and decision support systems, in which processing can be parceled out to a number of processors that collectively work on common data. The HPCC operates one &#039;&#039;&#039;NUMA&#039;&#039;&#039; node on Arrow, named &#039;&#039;&#039;Appel&#039;&#039;&#039;. This node does not have GPUs. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039; Infrastructure systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
o	The Master Head Node (&#039;&#039;&#039;MHN/Arrow&#039;&#039;&#039;) is a redundant login node from which all jobs on all servers start. This server is not directly accessible from outside the CSI campus. Note that the main server and its login nodes share the same name, Arrow, so users can reach the Arrow login nodes under either name, Arrow or MHN.  &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Chizen&#039;&#039;&#039; is a redundant gateway server which provides access to the protected HPCC domain.&lt;br /&gt;
&lt;br /&gt;
o	 &#039;&#039;&#039;Cea&#039;&#039;&#039; is a file transfer node that allows transfer of files between users’ computers and the /scratch space or /global/u/&amp;lt;userid&amp;gt;. &#039;&#039;&#039;Cea&#039;&#039;&#039; is accessible directly (not only via &#039;&#039;&#039;Chizen&#039;&#039;&#039;), but allows only a limited set of shell commands.  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Table 1&#039;&#039;&#039; below provides a quick summary of the attributes of each of the sub-clusters of the main HPC Center cluster, Arrow.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Master Head Node&lt;br /&gt;
!Sub System&lt;br /&gt;
!Tier&lt;br /&gt;
!Type&lt;br /&gt;
!Type of Jobs&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores&lt;br /&gt;
!GPUs&lt;br /&gt;
!Mem/node&lt;br /&gt;
!Mem/core&lt;br /&gt;
!Chip Type&lt;br /&gt;
!GPU Type and Interface&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;17&amp;quot; |&#039;&#039;&#039;&amp;lt;big&amp;gt;Arrow&amp;lt;/big&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Penzias&lt;br /&gt;
| rowspan=&amp;quot;10&amp;quot; |Advanced&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Hybrid Cluster&lt;br /&gt;
|Sequential &amp;amp; Parallel jobs w/wo GPU&lt;br /&gt;
|66&lt;br /&gt;
|16 &lt;br /&gt;
|2&lt;br /&gt;
|64 GB&lt;br /&gt;
|4 GB&lt;br /&gt;
|SB, EP 2.20 GHz&lt;br /&gt;
|K20m GPU, PCIe v2&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Sequential &amp;amp; Parallel jobs&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |1&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|1500 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|36&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|32 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Appel&lt;br /&gt;
|NUMA&lt;br /&gt;
|Massive Parallel, sequential, OpenMP&lt;br /&gt;
|1&lt;br /&gt;
|384&lt;br /&gt;
| -&lt;br /&gt;
|11 TB&lt;br /&gt;
|28 GB&lt;br /&gt;
|IB, 3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Cryo&lt;br /&gt;
|SMP&lt;br /&gt;
|Sequential and Parallel jobs, with GPU&lt;br /&gt;
|1&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|1500 GB&lt;br /&gt;
|37 GB&lt;br /&gt;
|SL, 2.40 GHz&lt;br /&gt;
|V100 (32GB) GPU, SXM&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Blue Moon&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Hybrid Cluster&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Sequential and Parallel jobs w/wo GPU&lt;br /&gt;
|24&lt;br /&gt;
|32&lt;br /&gt;
| -&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |192 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |6 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SL, 2.10 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|32 &lt;br /&gt;
|2&lt;br /&gt;
|V100(16GB) GPU, PCIe&lt;br /&gt;
|-&lt;br /&gt;
|Karle&lt;br /&gt;
|SMP&lt;br /&gt;
|Visualization, MATLAB/Mathematica&lt;br /&gt;
|1&lt;br /&gt;
|36*&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Chizen&lt;br /&gt;
|Gateway&lt;br /&gt;
|No jobs allowed&lt;br /&gt;
| colspan=&amp;quot;7&amp;quot; | -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CFD&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot; |Parallel, Seq, OpenMP &lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|768 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 4.8 GHz&lt;br /&gt;
|A40, PCIe, v4 &lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |PHYS&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|640 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4 GHz&lt;br /&gt;
|L40, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CHEM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.0 GHz&lt;br /&gt;
|A100/40, SXM&lt;br /&gt;
|-&lt;br /&gt;
|ASRC&lt;br /&gt;
|Condo&lt;br /&gt;
|SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|}&lt;br /&gt;
Note: SB = Intel(R) Sandy Bridge, HL = Intel(R) Haswell, IB = Intel(R) Ivy Bridge, SL = Intel(R) Xeon(R) Gold, ER = AMD(R) EPYC ROME, EM = AMD(R) EPYC MILAN, EG = AMD(R) EPYC GENOA   &lt;br /&gt;
&lt;br /&gt;
== Recovery of  operational costs ==&lt;br /&gt;
CUNY-HPCC is a not-for-profit core research facility at CUNY. Our mission is to support all types of research that require advanced computational resources. CUNY-HPCC operations are not for profit and are NOT directly or indirectly sponsored by CUNY or the College of Staten Island (CSI). Consequently, CUNY-HPCC applies a cost recovery model that recaptures only &#039;&#039;&#039;&amp;lt;u&amp;gt;operational costs, with no profit for the HPCC&amp;lt;/u&amp;gt;&#039;&#039;&#039;. The recovered costs are calculated from actual documented operational expenses and are break-even for all CUNY users. The methodology is approved by CUNY-RF and is used in other CUNY research facilities. The costs are reviewed, and updated as needed, twice a year. The cost recovery charging scheme is based on the &#039;&#039;&#039;&amp;lt;u&amp;gt;unit-hour&amp;lt;/u&amp;gt;&#039;&#039;&#039;. A unit can be either a CPU unit or a GPU unit; the definitions are given in the table below:&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Definitions of unit-hour&lt;br /&gt;
!Type of resource&lt;br /&gt;
!Unit-hour &lt;br /&gt;
!For V100, A30, A40 or L40&lt;br /&gt;
!For A100&lt;br /&gt;
|-&lt;br /&gt;
|CPU unit &lt;br /&gt;
|1 cpu core/hour&lt;br /&gt;
| --&lt;br /&gt;
| --&lt;br /&gt;
|-&lt;br /&gt;
|GPU unit &lt;br /&gt;
|(4 cpu cores + 1 GPU thread )/hour&lt;br /&gt;
|4 cpu cores + 1 GPU&lt;br /&gt;
|4 cpu cores and 1/7 A100 &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== HPCC access plans  ===&lt;br /&gt;
a.     &#039;&#039;&#039;Minimum access (MAP):&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Minimum access is designed to provide wide support for research activities in any college, to promote collaboration between colleges, to help establish new research projects, and/or to serve as a testbed for new studies. MAP accounts operate under a strict fair-share policy, so the actual waiting time of a job in a queue depends on the resources used by that account in previous cycles. In addition, all jobs have strict time limits, so long jobs must use checkpoints.&lt;br /&gt;
&lt;br /&gt;
The MAP has 3 tiers:&lt;br /&gt;
&lt;br /&gt;
·     A: The Basic tier fee is $5,000 per year. It is designed to support users from colleges with a low level of research activity. The fee covers the infrastructure expenses associated with 1-2 users from these colleges. &lt;br /&gt;
&lt;br /&gt;
·     B: The Medium tier fee is $15,000 per year. The fee covers the infrastructure expenses of up to 12 users from these colleges. In addition, every account under the Medium tier gets 11,520 free CPU hours and 1,440 free GPU hours upon opening. &lt;br /&gt;
&lt;br /&gt;
·      C: The Advanced tier fee is $25,000 per year. The fee covers the infrastructure expenses for all users from these colleges. In addition, every new account from this tier gets 11,520 free CPU hours and 1,440 free GPU hours upon opening.  &lt;br /&gt;
&lt;br /&gt;
MAP users are charged per CPU/GPU hour at the low rates of &#039;&#039;&#039;&amp;lt;u&amp;gt;$0.015 per CPU hour and $0.09 per GPU hour&amp;lt;/u&amp;gt;&#039;&#039;&#039;. The table entries below follow directly from these rates: the hourly cost of a job is (CPU cores) x $0.015 + (GPUs) x $0.09.   &lt;br /&gt;
&lt;br /&gt;
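The short shell sketch below recomputes the fee examples in the following table from these two rates; the helper function name is illustrative only.&lt;br /&gt;
&lt;pre&gt;
#!/bin/bash
# MAP cost-recovery rates published above
CPU_RATE=0.015   # dollars per CPU core per hour
GPU_RATE=0.09    # dollars per GPU per hour

map_cost() {     # usage: map_cost CORES GPUS
    awk -v c="$1" -v g="$2" -v cr="$CPU_RATE" -v gr="$GPU_RATE" \
        'BEGIN { printf "%d cores + %d GPU: $%.3f/hour\n", c, g, c*cr + g*gr }'
}

map_cost  1 0   # $0.015/hour
map_cost 16 0   # $0.240/hour
map_cost  4 1   # $0.150/hour
map_cost 16 2   # $0.420/hour
map_cost 40 8   # $1.320/hour
&lt;/pre&gt;
The same formula with the CODP rates ($0.018 and $0.11) gives only a rough estimate for on-demand jobs; the published CODP table in the next section is binding.&lt;br /&gt;
&lt;br /&gt;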
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for MAP users&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.015/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.24/hour&lt;br /&gt;
|-&lt;br /&gt;
| 4 cores + 1 GPU &lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.15/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.33/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2  &lt;br /&gt;
|$0.42/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2  &lt;br /&gt;
|$0.66/hour&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$1.32/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
b.     &#039;&#039;&#039;Computing on demand (CODP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The Computing on Demand plan (CODP) is open to all users from CUNY colleges that do not participate in the MAP plan but want to use HPCC resources. CODP accounts operate under a strict fair-share policy, so the actual waiting time of a job in a queue depends on the resources previously used. In addition, all jobs have time limits, so long jobs must use checkpoints. CODP users are charged for the time (CPU and GPU) per hour. The current rates are &#039;&#039;&#039;$0.018 per CPU hour and $0.11 per GPU hour.&#039;&#039;&#039; Unlike MAP, new CODP accounts do not come with free time. Invoices are generated and sent to users (PIs only) at the end of each month. The examples in the following table illustrate the fee structure:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for CODP plan&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.018/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.288/hour&lt;br /&gt;
|-&lt;br /&gt;
|4 cores + 1 GPU&lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.293/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.334/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 1 GPU&lt;br /&gt;
|32&lt;br /&gt;
|1  &lt;br /&gt;
|$0.666/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU &lt;br /&gt;
|32&lt;br /&gt;
|2&lt;br /&gt;
|$0.756/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
c.  &#039;&#039;&#039;Leasing node(s) (LNP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The Leasing Node plan allows users to lease node(s) for the duration of a project. The minimum lease time is 30 days (one month), but leases of any length are possible. A 10% discount is given to users whose lease is longer than 90 days; discounts cannot be combined. Unlike MAP and CODP users, LNP users do not compete for resources and have full access to the leased resources 24/7.   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Lease node(s) fees for MAP users&lt;br /&gt;
|Job (MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/30 days&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$172.80&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$264.96&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2  &lt;br /&gt;
|$302.40&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$475.20&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$760.0&lt;br /&gt;
|-&lt;br /&gt;
|64 cores + 8 GPU&lt;br /&gt;
|64&lt;br /&gt;
|8&lt;br /&gt;
|$950.40&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for lease a node(s) for &amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; non &amp;lt;/span&amp;gt; -MAP users&lt;br /&gt;
|Job (non-MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/month&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$249.82&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$497.64&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1 &lt;br /&gt;
|$443.23&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$886.64&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40 &lt;br /&gt;
|8&lt;br /&gt;
|$1399.68&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
d.     &#039;&#039;&#039;Condo Ownership (COP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Condo describes a model in which user(s) own a node/server managed by the HPCC. Only full-time faculty can own a condo node. Condo nodes are fully integrated into the HPCC infrastructure. The owners pay only the HPCC’s infrastructure support operational fee, which covers only a proportional part of the licenses and materials needed for day-to-day operations. The fees are reviewed twice a year and currently are $0.003 per CPU hour and $0.02 per GPU hour. Condo owners can “borrow” (upon agreement) free of charge any node(s) from the condo stack, and can also lease their own nodes (for a higher fee – see below) to non-condo users. The minimum lease term is 30 days. The fees collected from non-condo users offset the owner’s payments.   &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Condo owners costs per year&lt;br /&gt;
|Type of condo node&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/year&lt;br /&gt;
|-&lt;br /&gt;
|Large hybrid SXM&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|$4518.92 &lt;br /&gt;
|-&lt;br /&gt;
|Small hybrid&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|$1540.54&lt;br /&gt;
|-&lt;br /&gt;
|Medium compute&lt;br /&gt;
|96&lt;br /&gt;
|0&lt;br /&gt;
|$2464.86&lt;br /&gt;
|-&lt;br /&gt;
|Large compute&lt;br /&gt;
|128&lt;br /&gt;
|0&lt;br /&gt;
|$3286.49&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Condo owners can lease their node(s) to non-condo users. The lease period is unlimited, with a minimum length of 30 days. The table below shows the payments that non-condo users make to condo owners. These fees accumulate in the owner’s account(s) and offset the owner’s dues. A 10% discount applies to leases longer than 90 days.    &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Type of nodes and lease fees for condo nodes&lt;br /&gt;
!Type of node&lt;br /&gt;
!Renters cost/month&lt;br /&gt;
!Long term (90+ days) rent cost/month&lt;br /&gt;
!CPU/node&lt;br /&gt;
!CPU type&lt;br /&gt;
!GPU/node &lt;br /&gt;
!GPU type&lt;br /&gt;
!GPU interface&lt;br /&gt;
|-&lt;br /&gt;
|Large Hybrid&lt;br /&gt;
|$602.52&lt;br /&gt;
|$564.86&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.2 GHz&lt;br /&gt;
|8&lt;br /&gt;
|A100/80&lt;br /&gt;
|SXM&lt;br /&gt;
|-&lt;br /&gt;
|Small Hybrid&lt;br /&gt;
|$205.41&lt;br /&gt;
|$192.57&lt;br /&gt;
|48&lt;br /&gt;
|EPYC, 2.8 GHz&lt;br /&gt;
|2&lt;br /&gt;
|A40, A30, L40&lt;br /&gt;
|PCIe v4&lt;br /&gt;
|-&lt;br /&gt;
|Medium Non GPU&lt;br /&gt;
|$328.65&lt;br /&gt;
|$308.11&lt;br /&gt;
|96&lt;br /&gt;
|EPYC, 4.11GHz&lt;br /&gt;
|0&lt;br /&gt;
|None&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|Large Non GPU&lt;br /&gt;
|$438.20&lt;br /&gt;
|$410.81&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.0 GHz&lt;br /&gt;
|0&lt;br /&gt;
|None &lt;br /&gt;
|NA&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Free time ===&lt;br /&gt;
Any new project &#039;&#039;&#039;from CUNY colleges/centers that participate in MAP-B or MAP-C&#039;&#039;&#039; is entitled to &#039;&#039;&#039;11,520 free CPU hours and 1,440 free GPU hours.&#039;&#039;&#039; Users under &#039;&#039;&#039;MAP-A&#039;&#039;&#039; are not entitled to free time. The free compute hours are intended to help establish a project and are therefore shared among all members of the project: they can be used by the PI or by any number of the project&#039;s members. It is important to note that &amp;lt;u&amp;gt;free time is granted per project, not per user account, so any project can receive free time only once. Collaborators external to CUNY are not normally eligible for free time.&amp;lt;/u&amp;gt; Additional hours beyond the free time are charged at MAP rates. &#039;&#039;&#039;&amp;lt;u&amp;gt;Please contact the CUNY-HPCC director for further details.&amp;lt;/u&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
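For orientation, at the MAP rates listed above this free allocation corresponds to 11,520 x $0.015 = $172.80 of CPU time plus 1,440 x $0.09 = $129.60 of GPU time.&lt;br /&gt;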
&lt;br /&gt;
== Support for research grants ==&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;u&amp;gt;All proposals dated January 1st, 2026 (&amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; 01/01/26 &amp;lt;/span&amp;gt;) or later&amp;lt;/u&amp;gt;&#039;&#039;&#039; that require computational resources &#039;&#039;&#039;&amp;lt;u&amp;gt;must include a budget for cost recovery fees at CUNY-HPCC.&amp;lt;/u&amp;gt;&#039;&#039;&#039;  For a given project the PI can choose one of the following options: &lt;br /&gt;
&lt;br /&gt;
* lease the node(s). This is a useful option for well-defined projects and for those with a large computational component requiring 100% availability of the computational resource. &lt;br /&gt;
* use &amp;quot;on-demand&amp;quot; resources. This is a flexible option, well suited to experimental projects or to exploring new areas of study. The downside is that the resources are shared among all users under the fair-share policy, so immediate access to a resource cannot be guaranteed. &lt;br /&gt;
* participate in the CONDO tier. This is the most beneficial option in terms of availability of resources and level of support. It best fits the focused research of a group (e.g. materials science). &lt;br /&gt;
&lt;br /&gt;
In all cases the PI can use the appropriate rates listed above to establish a correct budget for the proposal. The PI should &#039;&#039;&#039;&amp;lt;u&amp;gt;contact the Director of CUNY-HPCC, Dr. Alexander Tzanov&amp;lt;/u&amp;gt;&#039;&#039;&#039; (alexander.tzanov@csi.cuny.edu) to discuss the project&#039;s computational requirements, including the most economical computational workflows, suitable hardware, shared versus owned resources, CUNY-HPCC support options, and any other matter concerning a correct and optimal computational budget for the proposal.    &lt;br /&gt;
&lt;br /&gt;
== Partitions and jobs ==&lt;br /&gt;
The only way to submit job(s) to the HPCC servers is through the SLURM batch system. Every job, regardless of its type (interactive, batch, serial, parallel, etc.), must be submitted via SLURM, which allocates the requested resources on the proper server and starts the job(s) according to a predefined strict fair-share policy. Computational resources (CPU cores, memory, GPUs) are organized into &#039;&#039;&#039;partitions&#039;&#039;&#039;. Users are granted permission to use one or more partitions together with the corresponding QOS key. The table below shows the limitations of the partitions (in progress); a minimal batch-script example is given after the partition list that follows the table.&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Partition&lt;br /&gt;
!Max cores/job&lt;br /&gt;
!Max jobs/user&lt;br /&gt;
!Total cores/group&lt;br /&gt;
!Time limits&lt;br /&gt;
!Tier&lt;br /&gt;
!&lt;br /&gt;
!GPU types&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|partnsf&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|K20m, V100/16, A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partchem&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A100/80, A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partcfd&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partsym&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partasrc&lt;br /&gt;
|48&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabD&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|V100/16,A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabN&lt;br /&gt;
|384&lt;br /&gt;
|50&lt;br /&gt;
|384&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|None&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partphys&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|L40&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;partnsf&#039;&#039;&#039; is the main partition with assigned resources across all sub-servers. Users may submit sequential, thread parallel or distributed parallel jobs with or without GPU.&lt;br /&gt;
* &#039;&#039;&#039;partchem&#039;&#039;&#039;  is CONDO partition.  &lt;br /&gt;
* &#039;&#039;&#039;partphys&#039;&#039;&#039;   is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partsym&#039;&#039;&#039;    is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partasrc&#039;&#039;&#039;   is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partmatlabD&#039;&#039;&#039; partition allows running MATLAB&#039;s Distributed Parallel Server across the main cluster. &lt;br /&gt;
* &#039;&#039;&#039;partmatlabN&#039;&#039;&#039; partition gives access to the large MATLAB node with 384 cores and 11 TB of shared memory. It is useful for running parallel MATLAB jobs with the Parallel Computing Toolbox.&lt;br /&gt;
* &#039;&#039;&#039;partdev&#039;&#039;&#039; is dedicated to development. All HPCC users have access to this partition, with assigned resources of one computational node with 16 cores, 64 GB of memory and 2 GPUs (K20m). This partition has a time limit of 4 hours.&lt;br /&gt;
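&lt;br /&gt;
A minimal batch script for the main partition might look like the sketch below. The QOS key, time request, directory and program name are placeholders to be replaced with the values supplied with your account; the partition name comes from the table above.&lt;br /&gt;
&lt;pre&gt;
#!/bin/bash
#SBATCH --job-name=myjob
#SBATCH --partition=partnsf      # main partition (see the partition table above)
#SBATCH --qos=my_qos_key         # placeholder: the QOS key granted with your account
#SBATCH --ntasks=16              # 16 CPU cores
#SBATCH --gres=gpu:1             # request one GPU; omit this line for CPU-only jobs
#SBATCH --time=24:00:00          # must stay within the partition time limit
#SBATCH --output=%x_%j.out       # log file named after the job name and job ID

cd /scratch/$USER/project1       # run from /scratch, not from /global/u
./my_program input.dat           # placeholder for the actual executable
&lt;/pre&gt;
The script is submitted from the Arrow login node with &amp;quot;sbatch myjob.sh&amp;quot;; &amp;quot;squeue -u $USER&amp;quot; shows its status, and &amp;quot;srun&amp;quot; with the same resource options starts an interactive job.&lt;br /&gt;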
&lt;br /&gt;
== Hours of Operation ==&lt;br /&gt;
In order to maximize the use of resources, the HPCC applies a “rolling” maintenance scheme across all systems. When downtime is needed, the HPCC will notify all users a week or more in advance (unless an emergency occurs). The fourth Tuesday morning of each month, from 8:00 AM to 12:00 PM, is normally reserved (but not always used) for scheduled maintenance; please plan accordingly. Unplanned maintenance to remedy system-related problems may be scheduled as needed outside the above-mentioned window. Reasonable attempts will be made to inform users running on the affected systems when such needs arise. Note that users are strongly encouraged to use checkpoints in their jobs.&lt;br /&gt;
&lt;br /&gt;
== User Support ==&lt;br /&gt;
Users are strongly encouraged to read this Wiki carefully before submitting ticket(s) for help. In particular, the sections on compiling and running parallel programs, and the section on the SLURM batch queueing system, will give you the essential knowledge needed to use the CUNY-HPCC systems. We have striven to maintain the most uniform user applications environment possible across the Center&#039;s systems, to ease the transfer of applications and run scripts among them.  &lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY&lt;br /&gt;
community in parallel programming techniques, HPC computing architecture, and the essentials of using our&lt;br /&gt;
systems. Please follow our mailings on the subject and feel free to inquire about such courses. We regularly schedule training visits and classes at the various CUNY campuses; please let us know if such a training visit is of interest. In the past, topics have included an overview of parallel programming, GPU programming and architecture, using the evolutionary biology software at the HPC Center, the SLURM queueing system at the CUNY HPC Center, mixed GPU-MPI and OpenMP programming, etc. Staff have also presented guest lectures in formal classes throughout the CUNY campuses.    &lt;br /&gt;
&lt;br /&gt;
If you have problems accessing your account and cannot login to the ticketing service, please send an email to:&lt;br /&gt;
&lt;br /&gt;
  [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu] &lt;br /&gt;
&lt;br /&gt;
== Warnings and modes of operation ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. hpchelp@csi.cuny.edu is for questions and account-help communication &#039;&#039;&#039;only&#039;&#039;&#039; and does not accept tickets unless the ticketing system is not operational. For tickets please use the ticketing system mentioned above. This ensures that the staff member with the most appropriate skill set and job-related responsibility responds to your question. During the business week you should expect a response within 48 hours, quite often even the same day. During the weekend you may not get any response. &lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;&#039;E-mails to hpchelp@csi.cuny.edu must have a valid CUNY e-mail as the reply address.&#039;&#039;&#039; Messages originating from public mailers (Gmail, Hotmail, etc.) are filtered out.&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;&#039;Do not send questions to individual CUNY HPC Center staff members directly.&#039;&#039;&#039;  These will be returned to the sender with a polite request to submit a ticket or email the Helpline.  This applies to replies to initial questions as well.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff members are focused on providing high quality support to its user community, but compared&lt;br /&gt;
to other HPC Centers of similar size &#039;&#039;&#039;our staff is extremely lean&#039;&#039;&#039;.  Please make full use of the tools that we have provided (especially&lt;br /&gt;
the Wiki), and feel free to offer suggestions for improved service.  We hope and expect your experience in using&lt;br /&gt;
our systems will be predictably good and productive.&lt;br /&gt;
&lt;br /&gt;
== User Manual ==&lt;br /&gt;
&lt;br /&gt;
The old version of the user manual provides PBS, not SLURM, batch scripts as examples. CUNY-HPCC currently uses the SLURM scheduler, so users must rely only on the updated brief SLURM manual distributed with new accounts, or ask CUNY-HPCC for a copy of it.&lt;/div&gt;</summary>
		<author><name>Alex</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=962</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=962"/>
		<updated>2026-03-09T16:21:38Z</updated>

		<summary type="html">&lt;p&gt;Alex: /* Mission of CUNY-HPCC */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
[[Image:hpcc-panorama3.png]]&lt;br /&gt;
&lt;br /&gt;
== Mission of CUNY-HPCC ==&lt;br /&gt;
The City University of New York (CUNY) High Performance Computing Center (HPCC) is a core research and education facility for the University. It is located on the campus of the College of Staten Island, 2800 Victory Boulevard, Staten Island, New York 10314. The core mission of CUNY-HPCC is to advance education and to boost scientific research and discovery at the University by managing state-of-the-art computing infrastructure and by providing comprehensive research support services, including domain-specific expertise in many aspects of computationally intensive research. CUNY is a member of the Empire AI (EAI) consortium, so CUNY-HPCC also serves as a stepping stone for CUNY researchers toward the advanced architectures at EAI.  &lt;br /&gt;
&lt;br /&gt;
== CUNY-HPCC services ==&lt;br /&gt;
CUNY-HPCC provides a professionally maintained, modern computational environment with a variety of architectures, advanced storage, and fast interconnects. The Center:    &lt;br /&gt;
&lt;br /&gt;
:*Supports the scientific computing needs of CUNY faculty, their collaborators at other universities, their public- and private-sector partners, and CUNY students and research staff.&lt;br /&gt;
:*Provides state-of-the-art computing resources and comprehensive research support services, including expertise and full support for users with allocations on Empire AI. &lt;br /&gt;
:*Creates opportunities for the CUNY research community to develop new partnerships with the government and private sectors.&lt;br /&gt;
:*Leverages the HPC Center&#039;s capabilities to acquire additional research resources for CUNY faculty and graduate students in existing and major new programs. &lt;br /&gt;
&lt;br /&gt;
==Organization of systems and data storage (architecture)==&lt;br /&gt;
&lt;br /&gt;
All user data and project data are kept on the Parallel File System Storage (PFSS). The user home directories on the PFSS are mounted only on the login node(s) of the servers; the PFSS also holds a dedicated partition called &#039;&#039;&#039;/scratch&#039;&#039;&#039; (see below). The main features of the &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition are: &#039;&#039;&#039;1.&#039;&#039;&#039; it is mounted on all computational nodes and on all login nodes; &#039;&#039;&#039;2.&#039;&#039;&#039; it is fast; &#039;&#039;&#039;3.&#039;&#039;&#039; it is temporary space, &#039;&#039;&#039;not a home directory&#039;&#039;&#039; for accounts, and it cannot be used for long-term data preservation. Users must use the &amp;quot;staging&amp;quot; procedure described below to ensure preservation of their data, codes and parameter files. The figure below is a schematic of the environment.   &lt;br /&gt;
&lt;br /&gt;
Upon registering with the HPCC, every user receives two directories:&lt;br /&gt;
&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – this is temporary workspace on the HPC systems. Currently scratch resides on the same file system as global/u.&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – space for “home directory”, i.e., storage space on the DSMS for programs, scripts, and data.&lt;br /&gt;
&lt;br /&gt;
:•	In some instances a user will also have use of disk space on the DSMS in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;projectid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; (IRods). &lt;br /&gt;
[[File:HPCC_structure.png|center|frameless|900x900px]]&lt;br /&gt;
The &#039;&#039;&#039;/global/u/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory has a quota (see below for details), while &#039;&#039;&#039;/scratch/&amp;lt;userid&amp;gt;&#039;&#039;&#039; does not. However, the &#039;&#039;&#039;/scratch&#039;&#039;&#039; space is cleaned up following the rules described below, and there are no guarantees of any kind that files in &#039;&#039;&#039;/scratch&#039;&#039;&#039; will survive hardware crashes or clean-up. Access to all HPCC resources is provided through a bastion host called &#039;&#039;&#039;Chizen&#039;&#039;&#039;. The Data Transfer Node, called &#039;&#039;&#039;Cea&#039;&#039;&#039;, allows file transfer from/to remote sites directly to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; or to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
==HPC systems==&lt;br /&gt;
&lt;br /&gt;
The HPC Center operates a variety of architectures in order to support complex and demanding workflows. All of these computational resources are united into a single hybrid cluster called Arrow. The cluster deploys symmetric multiprocessor (SMP) nodes with and without GPUs, a distributed shared memory (NUMA) node, fat (large-memory) nodes, and advanced SMP nodes with multiple GPUs. The number of GPUs per node varies between 2 and 8, as do the GPU family and the GPU interface. For example, the basic GPU nodes hold two Tesla K20m cards (plugged in through a PCIe interface), while the most advanced ones carry eight Ampere A100 GPUs connected via the SXM interface.   &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Overview of Computational architectures&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SMP&#039;&#039;&#039; servers have several processors (working under a single operating system) which &amp;quot;share everything&amp;quot;: all CPU cores access a common memory block via a shared bus or data path. SMP servers support any combination of memory vs. CPU (up to the limits of the particular computer). They are commonly used to run sequential or thread-parallel (e.g. OpenMP) jobs, and they may or may not have GPUs.  &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;cluster&#039;&#039;&#039; is a single system comprising a set of servers interconnected by a high-performance network. Specific software coordinates programs on and across these servers in order to perform computationally intensive tasks. The most common cluster type consists of several identical SMP servers connected via a fast interconnect. Each SMP member of the cluster is called a &#039;&#039;&#039;node&#039;&#039;&#039;. All nodes run independent copies of the same operating system (OS). Some or all of the nodes may incorporate GPUs.  &lt;br /&gt;
&lt;br /&gt;
Hybrid clusters combine nodes of different architectures. For instance, the main CUNY-HPCC machine is a hybrid cluster called &#039;&#039;&#039;Arrow&#039;&#039;&#039;. Sixty-two (62) of its nodes are identical GPU-enabled SMP servers, each with 2 x K20m GPUs; 3 are SMP nodes with extended memory (fat nodes); one is a distributed shared memory node (NUMA, see below); and 2 are fat SMP servers especially designed to support 8 NVIDIA GPUs per node, connected via the SXM interface. In addition, the HPCC operates the cluster &#039;&#039;&#039;Herbert&#039;&#039;&#039;, dedicated only to education. &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;distributed shared memory&#039;&#039;&#039; computer is a tightly coupled server in which the memory is physically distributed but logically unified into a single block. The system resembles an SMP, but the possible number of CPU cores and amount of memory are far beyond the limits of an SMP. Because the memory is distributed, access times across the address space are non-uniform; hence this architecture is called Non-Uniform Memory Access (NUMA). Similarly to SMP, &#039;&#039;&#039;NUMA&#039;&#039;&#039; systems are typically used for applications such as data mining and decision support systems, in which processing can be parceled out to a number of processors that collectively work on common data. The HPCC operates one &#039;&#039;&#039;NUMA&#039;&#039;&#039; node on Arrow, named &#039;&#039;&#039;Appel&#039;&#039;&#039;. This node does not have GPUs. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039; Infrastructure systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
o	The Master Head Node (&#039;&#039;&#039;MHN/Arrow&#039;&#039;&#039;) is a redundant login node from which all jobs on all servers start. This server is not directly accessible from outside the CSI campus. Note that the main server and its login nodes share the same name, Arrow, so users can reach the Arrow login nodes under either name, Arrow or MHN.  &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Chizen&#039;&#039;&#039; is a redundant gateway server which provides access to the protected HPCC domain.&lt;br /&gt;
&lt;br /&gt;
o	 &#039;&#039;&#039;Cea&#039;&#039;&#039; is a file transfer node that allows transfer of files between users’ computers and the /scratch space or /global/u/&amp;lt;userid&amp;gt;. &#039;&#039;&#039;Cea&#039;&#039;&#039; is accessible directly (not only via &#039;&#039;&#039;Chizen&#039;&#039;&#039;), but allows only a limited set of shell commands.  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Table 1&#039;&#039;&#039; below provides a quick summary of the attributes of each of the sub-clusters of the main HPC Center cluster, Arrow.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Master Head Node&lt;br /&gt;
!Sub System&lt;br /&gt;
!Tier&lt;br /&gt;
!Type&lt;br /&gt;
!Type of Jobs&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores&lt;br /&gt;
!GPUs&lt;br /&gt;
!Mem/node&lt;br /&gt;
!Mem/core&lt;br /&gt;
!Chip Type&lt;br /&gt;
!GPU Type and Interface&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;17&amp;quot; |&#039;&#039;&#039;&amp;lt;big&amp;gt;Arrow&amp;lt;/big&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Penzias&lt;br /&gt;
| rowspan=&amp;quot;10&amp;quot; |Advanced&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Hybrid Cluster&lt;br /&gt;
|Sequential &amp;amp; Parallel jobs w/wo GPU&lt;br /&gt;
|66&lt;br /&gt;
|16 &lt;br /&gt;
|2&lt;br /&gt;
|64 GB&lt;br /&gt;
|4 GB&lt;br /&gt;
|SB, EP 2.20 GHz&lt;br /&gt;
|K20m GPU, PCIe v2&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Sequential &amp;amp; Parallel jobs&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |1&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|1500 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|36&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|32 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Appel&lt;br /&gt;
|NUMA&lt;br /&gt;
|Massive Parallel, sequential, OpenMP&lt;br /&gt;
|1&lt;br /&gt;
|384&lt;br /&gt;
| -&lt;br /&gt;
|11 TB&lt;br /&gt;
|28 GB&lt;br /&gt;
|IB, 3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Cryo&lt;br /&gt;
|SMP&lt;br /&gt;
|Sequential and Parallel jobs, with GPU&lt;br /&gt;
|1&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|1500 GB&lt;br /&gt;
|37 GB&lt;br /&gt;
|SL, 2.40 GHz&lt;br /&gt;
|V100 (32GB) GPU, SXM&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Blue Moon&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Hybrid Cluster&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Sequential and Parallel jobs w/wo GPU&lt;br /&gt;
|24&lt;br /&gt;
|32&lt;br /&gt;
| -&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |192 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |6 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SL, 2.10 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|32 &lt;br /&gt;
|2&lt;br /&gt;
|V100(16GB) GPU, PCIe&lt;br /&gt;
|-&lt;br /&gt;
|Karle&lt;br /&gt;
|SMP&lt;br /&gt;
|Visualization, MATLAB/Mathematica&lt;br /&gt;
|1&lt;br /&gt;
|36*&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Chizen&lt;br /&gt;
|Gateway&lt;br /&gt;
|No jobs allowed&lt;br /&gt;
| colspan=&amp;quot;7&amp;quot; | -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CFD&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot; |Parallel, Seq, OpenMP &lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|768 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 4.8 GHz&lt;br /&gt;
|A40, PCIe, v4 &lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |PHYS&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|640 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4 GHz&lt;br /&gt;
|L40, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CHEM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.0 GHz&lt;br /&gt;
|A100/40, SXM&lt;br /&gt;
|-&lt;br /&gt;
|ASRC&lt;br /&gt;
|Condo&lt;br /&gt;
|SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|}&lt;br /&gt;
Note: SB = Intel(R) Sandy Bridge, HL = Intel(R) Haswell, IB = Intel(R) Ivy Bridge, SL = Intel(R) Xeon(R) Gold, ER = AMD(R) EPYC ROME, EM = AMD(R) EPYC MILAN, EG = AMD(R) EPYC GENOA   &lt;br /&gt;
&lt;br /&gt;
== Recovery of  operational costs ==&lt;br /&gt;
CUNY-HPCC is a not-for-profit core research facility at CUNY. Our mission is to support all types of research that require advanced computational resources. CUNY-HPCC operations are not for profit and are NOT directly or indirectly sponsored by CUNY or the College of Staten Island (CSI). Consequently, CUNY-HPCC applies a cost recovery model that recaptures only &#039;&#039;&#039;&amp;lt;u&amp;gt;operational costs, with no profit for the HPCC&amp;lt;/u&amp;gt;&#039;&#039;&#039;. The recovered costs are calculated from actual documented operational expenses and are break-even for all CUNY users. The methodology is approved by CUNY-RF and is used in other CUNY research facilities. The costs are reviewed, and updated as needed, twice a year. The cost recovery charging scheme is based on the &#039;&#039;&#039;&amp;lt;u&amp;gt;unit-hour&amp;lt;/u&amp;gt;&#039;&#039;&#039;. A unit can be either a CPU unit or a GPU unit; the definitions are given in the table below:&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Definitions of unit-hour&lt;br /&gt;
!Type of resource&lt;br /&gt;
!Unit-hour &lt;br /&gt;
!For V100, A30, A40 or L40&lt;br /&gt;
!For A100&lt;br /&gt;
|-&lt;br /&gt;
|CPU unit &lt;br /&gt;
|1 cpu core/hour&lt;br /&gt;
| --&lt;br /&gt;
| --&lt;br /&gt;
|-&lt;br /&gt;
|GPU unit &lt;br /&gt;
|(4 cpu cores + 1 GPU thread )/hour&lt;br /&gt;
|4 cpu cores + 1 GPU&lt;br /&gt;
|4 cpu cores and 1/7 A100 &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== HPCC access plans  ===&lt;br /&gt;
a.     &#039;&#039;&#039;Minimum access (MAP):&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Minimum access is designed to provide wide support for research activities in any college, to promote collaboration between colleges, to help establish new research projects, and/or to serve as a testbed for new studies. MAP accounts operate under a strict fair-share policy, so the actual waiting time of a job in a queue depends on the resources used by that account in previous cycles. In addition, all jobs have strict time limits, so long jobs must use checkpoints.&lt;br /&gt;
&lt;br /&gt;
The MAP has 3 tiers:&lt;br /&gt;
&lt;br /&gt;
·     A: The Basic tier fee is $5,000 per year. It is designed to support users from colleges with a low level of research activity. The fee covers the infrastructure expenses associated with 1-2 users from these colleges. &lt;br /&gt;
&lt;br /&gt;
·     B: The Medium tier fee is $15,000 per year. The fee covers the infrastructure expenses of up to 12 users from these colleges. In addition, every account under the Medium tier gets 11,520 free CPU hours and 1,440 free GPU hours upon opening. &lt;br /&gt;
&lt;br /&gt;
·      C: The Advanced tier fee is $25,000 per year. The fee covers the infrastructure expenses for all users from these colleges. In addition, every new account from this tier gets 11,520 free CPU hours and 1,440 free GPU hours upon opening.  &lt;br /&gt;
&lt;br /&gt;
MAP users are charged per CPU/GPU hour at the low rates of &#039;&#039;&#039;&amp;lt;u&amp;gt;$0.015 per CPU hour and $0.09 per GPU hour&amp;lt;/u&amp;gt;&#039;&#039;&#039;.   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for MAP users&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.015/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.24/hour&lt;br /&gt;
|-&lt;br /&gt;
| 4 cores + 1 GPU &lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.15/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.33/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2  &lt;br /&gt;
|$0.42/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2  &lt;br /&gt;
|$0.66/hour&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$1.32/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
b.     &#039;&#039;&#039;Computing on demand (CODP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The Computing on Demand plan (CODP) is open to all users from CUNY colleges that do not participate in the MAP plan but want to use HPCC resources. CODP accounts operate under a strict fair-share policy, so the actual waiting time of a job in a queue depends on the resources previously used. In addition, all jobs have time limits, so long jobs must use checkpoints. CODP users are charged for the time (CPU and GPU) per hour. The current rates are &#039;&#039;&#039;$0.018 per CPU hour and $0.11 per GPU hour.&#039;&#039;&#039; Unlike MAP, new CODP accounts do not come with free time. Invoices are generated and sent to users (PIs only) at the end of each month. The examples in the following table illustrate the fee structure:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for CODP plan&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.018/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.288/hour&lt;br /&gt;
|-&lt;br /&gt;
|4 cores + 1 GPU&lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.293/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.334/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 1 GPU&lt;br /&gt;
|32&lt;br /&gt;
|1  &lt;br /&gt;
|$0.666/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU &lt;br /&gt;
|32&lt;br /&gt;
|2&lt;br /&gt;
|$0.756/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
c.  &#039;&#039;&#039;Leasing node(s) (LNP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The Leasing Node plan allows users to lease node(s) for the duration of a project. The minimum lease time is 30 days (one month), but leases of any length are possible. A 10% discount is given to users whose lease is longer than 90 days; discounts cannot be combined. Unlike MAP and CODP users, LNP users do not compete for resources and have full access to the leased resources 24/7.   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Lease node(s) fees for MAP users&lt;br /&gt;
|Job (MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/30 days&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$172.80&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$264.96&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2  &lt;br /&gt;
|$302.40&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$475.20&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$760.0&lt;br /&gt;
|-&lt;br /&gt;
|64 cores + 8 GPU&lt;br /&gt;
|64&lt;br /&gt;
|8&lt;br /&gt;
|$950.40&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for lease a node(s) for &amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; non &amp;lt;/span&amp;gt; -MAP users&lt;br /&gt;
|Job (non-MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/month&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$249.82&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$497.64&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1 &lt;br /&gt;
|$443.23&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$886.64&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40 &lt;br /&gt;
|8&lt;br /&gt;
|$1399.68&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
d.     &#039;&#039;&#039;Condo Ownership (COP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Condo describes a model in which user(s) own a node/server managed by the HPCC. Only full-time faculty can own a condo node. Condo nodes are fully integrated into the HPCC infrastructure. The owners pay only the HPCC’s infrastructure support operational fee, which covers only a proportional part of the licenses and materials needed for day-to-day operations. The fees are reviewed twice a year and currently are $0.003 per CPU hour and $0.02 per GPU hour. Condo owners can “borrow” (upon agreement) free of charge any node(s) from the condo stack, and can also lease their own nodes (for a higher fee – see below) to non-condo users. The minimum lease term is 30 days. The fees collected from non-condo users offset the owner’s payments.   &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Condo owners costs per year&lt;br /&gt;
|Type of condo node&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/year&lt;br /&gt;
|-&lt;br /&gt;
|Large hybrid SXM&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|$4518.92 &lt;br /&gt;
|-&lt;br /&gt;
|Small hybrid&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|$1540.54&lt;br /&gt;
|-&lt;br /&gt;
|Medium compute&lt;br /&gt;
|96&lt;br /&gt;
|0&lt;br /&gt;
|$2464.86&lt;br /&gt;
|-&lt;br /&gt;
|Large compute&lt;br /&gt;
|128&lt;br /&gt;
|0&lt;br /&gt;
|$3286.49&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Condo owners can lease their node(s) to other, non-condo users. The rental period is unlimited, with a minimum length of 30 days. The table below shows the payments that non-condo users make to the condo owners. These fees are accumulated in the owner’s account(s) and offset the owner’s obligations. A discount of 10% is applied for leases longer than 90 days.&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Type of nodes and lease fees for condo nodes&lt;br /&gt;
!Type of node&lt;br /&gt;
!Renters cost/month&lt;br /&gt;
!Long term (90+ days) rent cost/month&lt;br /&gt;
!CPU/node&lt;br /&gt;
!CPU type&lt;br /&gt;
!GPU/node &lt;br /&gt;
!GPU type&lt;br /&gt;
!GPU interface&lt;br /&gt;
|-&lt;br /&gt;
|Large Hybrid&lt;br /&gt;
|$602.52&lt;br /&gt;
|$564.86&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.2 GHz&lt;br /&gt;
|8&lt;br /&gt;
|A100/80&lt;br /&gt;
|SXM&lt;br /&gt;
|-&lt;br /&gt;
|Small Hybrid&lt;br /&gt;
|$205.41&lt;br /&gt;
|$192.57&lt;br /&gt;
|48&lt;br /&gt;
|EPYC, 2.8 GHz&lt;br /&gt;
|2&lt;br /&gt;
|A40, A30, L40&lt;br /&gt;
|PCIe v4&lt;br /&gt;
|-&lt;br /&gt;
|Medium Non GPU&lt;br /&gt;
|$328.65&lt;br /&gt;
|$308.11&lt;br /&gt;
|96&lt;br /&gt;
|EPYC, 4.11GHz&lt;br /&gt;
|0&lt;br /&gt;
|None&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|Large Non GPU&lt;br /&gt;
|$438.20&lt;br /&gt;
|$410.81&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.0 GHz&lt;br /&gt;
|0&lt;br /&gt;
|None &lt;br /&gt;
|NA&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Free time ===&lt;br /&gt;
Any new project &#039;&#039;&#039;from CUNY colleges/centers that participate in MAP-B or MAP-C&#039;&#039;&#039; is entitled to &#039;&#039;&#039;11520 free CPU hours and 1440 free GPU hours.&#039;&#039;&#039; Users under &#039;&#039;&#039;MAP-A&#039;&#039;&#039; are not entitled to free time. The free compute hours are intended to help establish a project and are therefore shared by all members of the project; they can be used either by the PI or by any number of the project&#039;s members. It is important to note that &amp;lt;u&amp;gt;free time is per project, not per user account, so any project can receive free time only once. External collaborators of CUNY are not normally eligible for free time.&amp;lt;/u&amp;gt; Additional hours beyond the free time are charged at MAP plan rates. &#039;&#039;&#039;&amp;lt;u&amp;gt;Please contact the CUNY-HPCC director for further details.&amp;lt;/u&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Support for research grants ==&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;u&amp;gt;All proposals dated Jan 1st 2026 (&amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; 01/01/26 &amp;lt;/span&amp;gt;) or later&amp;lt;/u&amp;gt;&#039;&#039;&#039; that require computational resources &#039;&#039;&#039;&amp;lt;u&amp;gt;must include a budget for cost recovery fees at CUNY-HPCC.&amp;lt;/u&amp;gt;&#039;&#039;&#039; For a project the PI can choose among the following options:&lt;br /&gt;
&lt;br /&gt;
* Lease node(s). This is a useful option for well-defined projects and for those with a large computational component requiring 100% availability of the computational resource.&lt;br /&gt;
* Use &amp;quot;on-demand&amp;quot; resources. This is a flexible option, well suited to experimental projects or to exploring new areas of study. The drawback is that resources are shared among all users under the fair share policy, so immediate access to a resource cannot be guaranteed.&lt;br /&gt;
* Participate in the CONDO tier. This is the most beneficial option in terms of availability of resources and level of support. It best fits the focused research of a group or groups (e.g. materials science).&lt;br /&gt;
&lt;br /&gt;
In all cases the PI can use the appropriate rates listed above to establish a correct budget for the proposal. The PI should &#039;&#039;&#039;&amp;lt;u&amp;gt;contact the Director of CUNY-HPCC, Dr. Alexander Tzanov&amp;lt;/u&amp;gt;&#039;&#039;&#039; (alexander.tzanov@csi.cuny.edu) to discuss the project&#039;s computational requirements, including optimal and most economical computational workflows, suitable hardware, shared or owned resources, CUNY-HPCC support options, and any other matter concerning a correct and optimal computational budget for the proposal.&lt;br /&gt;
&lt;br /&gt;
== Partitions and jobs ==&lt;br /&gt;
The only way to submit job(s) to the HPCC servers is through the SLURM batch system. Every job, regardless of its type (interactive, batch, serial, parallel, etc.), must be submitted via SLURM, which allocates the requested resources on the proper server and starts the job(s) according to a predefined, strict fair share policy. Computational resources (CPU cores, memory, GPU) are organized in &#039;&#039;&#039;partitions&#039;&#039;&#039;. Users are granted permission to use one or another partition together with the corresponding QOS key. The table below shows the partitions and their limitations (in progress).&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Partition&lt;br /&gt;
!Max cores/job&lt;br /&gt;
!Max jobs/user&lt;br /&gt;
!Total cores/group&lt;br /&gt;
!Time limits&lt;br /&gt;
!Tier&lt;br /&gt;
!&lt;br /&gt;
!GPU types&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|partnsf&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|K20m, V100/16, A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partchem&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A100/80, A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partcfd&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partsym&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partasrc&lt;br /&gt;
|48&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabD&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|V100/16,A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabN&lt;br /&gt;
|384&lt;br /&gt;
|50&lt;br /&gt;
|384&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|None&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partphys&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|L40&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;partnsf&#039;&#039;&#039; is the main partition, with assigned resources across all sub-servers. Users may submit sequential, thread-parallel or distributed-parallel jobs with or without GPU.&lt;br /&gt;
* &#039;&#039;&#039;partchem&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partphys&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partsym&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partasrc&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partmatlabD&#039;&#039;&#039; allows running MATLAB&#039;s Distributed Parallel Server across the main cluster.&lt;br /&gt;
* &#039;&#039;&#039;partmatlabN&#039;&#039;&#039; provides access to the large MATLAB node with 384 cores and 11 TB of shared memory. It is useful for running parallel MATLAB jobs with the Parallel Toolbox.&lt;br /&gt;
* &#039;&#039;&#039;partdev&#039;&#039;&#039; is dedicated to development. All HPCC users have access to this partition, which has the assigned resources of one computational node with 16 cores, 64 GB of memory and 2 GPUs (K20m). This partition has a time limit of 4 hours.&lt;br /&gt;
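&lt;br /&gt;
As an informal illustration of the limits in the table above, the snippet below encodes the partnsf row and checks a hypothetical request against it; it is only a sketch, not an HPCC-provided tool, and the request values are made up for the example.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Hypothetical sanity check (illustration only, not an HPCC tool).&lt;br /&gt;
# Limits copied from the partnsf row of the table above: 128 cores/job, 240-hour time limit.&lt;br /&gt;
PARTNSF_MAX_CORES = 128&lt;br /&gt;
PARTNSF_MAX_HOURS = 240&lt;br /&gt;
&lt;br /&gt;
def fits_partnsf(cores, hours):&lt;br /&gt;
    return cores &amp;lt;= PARTNSF_MAX_CORES and hours &amp;lt;= PARTNSF_MAX_HOURS&lt;br /&gt;
&lt;br /&gt;
print(fits_partnsf(64, 48))    # True: within both limits&lt;br /&gt;
print(fits_partnsf(256, 48))   # False: exceeds the 128 cores/job limit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;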
&lt;br /&gt;
== Hours of Operation ==&lt;br /&gt;
In order to maximize the use of resources, HPCC applies a “rolling” maintenance scheme across all systems. When downtime is needed, HPCC will notify all users a week or more in advance (unless an emergency occurs). Typically, the morning of the fourth Tuesday of the month, from 8:00 AM to 12:00 PM, is reserved (but not always used) for scheduled maintenance. Please plan accordingly. Unplanned maintenance to remedy system-related problems may be scheduled as needed outside the above-mentioned window. Reasonable attempts will be made to inform users running on the affected systems when such needs arise. Note that users are strongly encouraged to use checkpoints in their jobs.&lt;br /&gt;
&lt;br /&gt;
== User Support ==&lt;br /&gt;
Users are strongly encouraged to read this Wiki carefully before submitting ticket(s) for help. In particular, the sections on compiling and running parallel programs, and the section on the SLURM batch queueing system will give you the essential knowledge needed to use the CUNY HPCC systems.  We have strived to maintain the most uniform user applications environment possible across the Center&#039;s systems to ease the transfer of applications&lt;br /&gt;
and run scripts among them.  &lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY&lt;br /&gt;
community in parallel programming techniques, HPC computing architecture, and the essentials of using our&lt;br /&gt;
systems. Please follow our mailings on the subject and feel free to inquire about such courses. We regularly schedule training visits and classes at the various CUNY campuses; please let us know if such a training visit is of interest. In the past, topics have included an overview of parallel programming, GPU programming and architecture, using the evolutionary biology software at the HPC Center, the SLURM queueing system at the CUNY HPC Center, mixed GPU-MPI and OpenMP programming, etc. Staff have also presented guest lectures in formal classes throughout the CUNY campuses.&lt;br /&gt;
&lt;br /&gt;
If you have problems accessing your account and cannot login to the ticketing service, please send an email to:&lt;br /&gt;
&lt;br /&gt;
  [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu] &lt;br /&gt;
&lt;br /&gt;
== Warnings and modes of operation ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. hpchelp@csi.cuny.edu is for questions and account-help communication &#039;&#039;&#039;only&#039;&#039;&#039; and does not accept tickets unless the ticketing system is not operational. For tickets please use the ticketing system mentioned above; this ensures that the staff member with the most appropriate skill set and job-related responsibility will respond to your questions. During the business week you should expect a response within 48 hours, quite often even the same day. During the weekend you may not get any response.&lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;&#039;E-mails to hpchelp@csi.cuny.edu must have a valid CUNY e-mail as the reply address.&#039;&#039;&#039; Messages originating from public mail providers (Gmail, Hotmail, etc.) are filtered out.&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;&#039;Do not send questions to individual CUNY HPC Center staff members directly.&#039;&#039;&#039;  These will be returned to the sender with a polite request to submit a ticket or email the Helpline.  This applies to replies to initial questions as well.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff is focused on providing high-quality support to its user community, but compared to other HPC centers of similar size &#039;&#039;&#039;our staff is extremely lean&#039;&#039;&#039;. Please make full use of the tools that we have provided (especially the Wiki), and feel free to offer suggestions for improved service. We hope and expect that your experience in using our systems will be predictably good and productive.&lt;br /&gt;
&lt;br /&gt;
== User Manual ==&lt;br /&gt;
&lt;br /&gt;
The old version of the user manual provides PBS rather than SLURM batch scripts as examples. CUNY-HPCC currently uses the SLURM scheduler, so users must rely only on the updated brief SLURM manual distributed with new accounts, or ask CUNY-HPCC for a copy of it.&lt;/div&gt;</summary>
		<author><name>Alex</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=961</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=961"/>
		<updated>2026-03-09T16:21:15Z</updated>

		<summary type="html">&lt;p&gt;Alex: /* Mission of CUNY-HPCC */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
[[Image:hpcc-panorama3.png]]&lt;br /&gt;
&lt;br /&gt;
== Mission of CUNY-HPCC ==&lt;br /&gt;
The City University of New York (CUNY) High Performance Computing Center (HPCC) is a core research and education facility for the University. It is located on the campus of the College of Staten Island, 2800 Victory Boulevard, Staten Island, New York 10314. The core mission of CUNY-HPCC is to advance education and boost scientific research and discovery at the University by managing state-of-the-art computing infrastructure and by providing comprehensive research support services, including domain-specific expertise in various aspects of computationally intensive research. CUNY is a member of the Empire AI (EAI) consortium, so CUNY-HPCC is a stepping stone for CUNY researchers toward the advanced architectures at EAI.&lt;br /&gt;
&lt;br /&gt;
== CUNY-HPCC services ==&lt;br /&gt;
CUNY-HPCC provides a professionally maintained, modern computational environment and architectures, advanced storage and fast interconnects.&lt;br /&gt;
&lt;br /&gt;
:*Support the scientific computing needs of CUNY faculty, their collaborators at other universities, and their public and private sector partners, and CUNY students and research staff.&lt;br /&gt;
:*Provides state-of-the-art computing resources and comprehensive research support services including expertise and full support for users with allocation on EMPIRE-AI. &lt;br /&gt;
:*Creates opportunities for the CUNY research community to develop new partnerships with the government and private sectors.&lt;br /&gt;
:*Leverage the HPC Center capabilities to acquire additional research resources for its faculty and graduate students in existing and major new programs. &lt;br /&gt;
&lt;br /&gt;
==Organization of systems and data storage (architecture)==&lt;br /&gt;
&lt;br /&gt;
All user data and project data are kept on the Parallel File System Storage (PFSS), which is mounted only on the login node(s) of all servers. It holds both the user directories and a specific partition called &#039;&#039;&#039;/scratch&#039;&#039;&#039; (see below). The main features of the &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition are&#039;&#039;&#039;: 1.&#039;&#039;&#039; it is mounted on all computational nodes and on all login nodes; &#039;&#039;&#039;2.&#039;&#039;&#039; it is fast; &#039;&#039;&#039;3.&#039;&#039;&#039; it is temporary space and is &#039;&#039;&#039;not a home directory&#039;&#039;&#039; for accounts, nor can it be used for long-term data preservation. Users must use the &amp;quot;staging&amp;quot; procedure described below to ensure preservation of their data, codes and parameter files. The figure below is a schematic of the environment.&lt;br /&gt;
&lt;br /&gt;
Upon  registering with HPCC every user will get 2 directories:&lt;br /&gt;
&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – this is temporary workspace on the HPC systems. Currently scratch resides on the same file system as global/u.&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – space for “home directory”, i.e., storage space on the DSMS for program, scripts, and data&lt;br /&gt;
&lt;br /&gt;
:•	In some instances a user will also have use of disk space on the DSMS in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;projectid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; (IRods). &lt;br /&gt;
[[File:HPCC_structure.png|center|frameless|900x900px]]&lt;br /&gt;
The &#039;&#039;&#039;/global/u/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory has a quota (see below for details), while &#039;&#039;&#039;/scratch/&amp;lt;userid&amp;gt;&#039;&#039;&#039; does not. However, the &#039;&#039;&#039;/scratch&#039;&#039;&#039; space is cleaned up following the rules described below, and there are no guarantees of any kind that files in &#039;&#039;&#039;/scratch&#039;&#039;&#039; will be preserved during hardware crashes or clean-ups. Access to all HPCC resources is provided by a bastion host called &#039;&amp;lt;nowiki/&amp;gt;&#039;&#039;&#039;&#039;&#039;chizen&#039;&#039;&#039;&#039;&#039;. The Data Transfer Node, called &#039;&#039;&#039;Cea&#039;&#039;&#039;, allows file transfer from/to remote sites directly to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; or to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
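&lt;br /&gt;
Since &#039;&#039;&#039;/scratch&#039;&#039;&#039; is temporary, a typical workflow stages input data from the home area to &#039;&#039;&#039;/scratch&#039;&#039;&#039; before a run and copies the results back afterwards. The sketch below is only a minimal illustration of that idea; the user name and directory layout are assumed values for the example, not real accounts or HPCC-provided paths.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Minimal staging sketch (illustration only; adapt the paths to your own account).&lt;br /&gt;
import shutil&lt;br /&gt;
&lt;br /&gt;
home = &amp;quot;/global/u/myuser/project1&amp;quot;       # assumed example paths&lt;br /&gt;
scratch = &amp;quot;/scratch/myuser/project1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Stage in: copy inputs to the fast, temporary /scratch space before the job runs.&lt;br /&gt;
shutil.copytree(home + &amp;quot;/input&amp;quot;, scratch + &amp;quot;/input&amp;quot;, dirs_exist_ok=True)&lt;br /&gt;
# ... run the job against the data in /scratch ...&lt;br /&gt;
# Stage out: copy results back to the home area, since /scratch is not preserved.&lt;br /&gt;
shutil.copytree(scratch + &amp;quot;/output&amp;quot;, home + &amp;quot;/output&amp;quot;, dirs_exist_ok=True)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;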
&lt;br /&gt;
==HPC systems==&lt;br /&gt;
&lt;br /&gt;
The HPC Center operates a variety of architectures in order to support complex and demanding workflows. All computational resources of different types are united into a single hybrid cluster called Arrow, which deploys symmetric multiprocessor (SMP) nodes with and without GPUs, a distributed shared memory (NUMA) node, fat (large-memory) nodes and advanced SMP nodes with multiple GPUs. The number of GPUs per node varies between 2 and 8, as do the GPU interface and GPU family: the basic GPU nodes hold two Tesla K20m cards (attached through the PCIe interface), while the most advanced ones support eight Ampere A100 GPUs connected via the SXM interface.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Overview of Computational architectures&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SMP&#039;&#039;&#039; servers have several processors (working under a single operating system) which &amp;quot;share everything&amp;quot;: all CPU cores access a common memory block via a shared bus or data path. SMP servers support all combinations of memory vs. CPU (up to the limits of the particular computer). SMP servers are commonly used to run sequential or thread-parallel (e.g. OpenMP) jobs, and they may or may not have GPUs.&lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;cluster&#039;&#039;&#039; is defined as a single system comprising a set of servers interconnected with a high-performance network. Specific software coordinates programs on and/or across them in order to perform computationally intensive tasks. The most common cluster type consists of several identical SMP servers connected via a fast interconnect. Each SMP member of the cluster is called a &#039;&#039;&#039;node&#039;&#039;&#039;. All nodes run independent copies of the same operating system (OS). Some or all of the nodes may incorporate GPUs.&lt;br /&gt;
&lt;br /&gt;
Hybrid clusters combine nodes of different architectures. For instance, the main CUNY-HPCC machine is a hybrid cluster called &#039;&#039;&#039;Arrow&#039;&#039;&#039;. Sixty-two (62) of its nodes are identical GPU-enabled SMP servers, each with 2 K20m GPUs; 3 are SMP servers with extended memory (fat nodes); one is a distributed shared memory node (NUMA, see below); and 2 are fat SMP servers specially designed to support 8 NVIDIA GPUs per node, connected via the SXM interface. In addition, HPCC operates the cluster &#039;&#039;&#039;Herbert&#039;&#039;&#039;, dedicated only to education.&lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;distributed shared memory&#039;&#039;&#039; computer is a tightly coupled server in which the memory is physically distributed but logically unified as a single block. The system resembles an SMP, but the possible number of CPU cores and amount of memory are far beyond the limitations of SMP. Because the memory is distributed, access times across the address space are non-uniform; hence this architecture is called Non-Uniform Memory Access (NUMA) architecture. Similarly to SMP, &#039;&#039;&#039;NUMA&#039;&#039;&#039; systems are typically used for applications such as data mining and decision support, in which processing can be parceled out to a number of processors that collectively work on common data. HPCC operates the &#039;&#039;&#039;NUMA&#039;&#039;&#039; node of Arrow, named &#039;&#039;&#039;Appel&#039;&#039;&#039;. This node does not have GPUs.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039; Infrastructure systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
o	The Master Head Node (&#039;&#039;&#039;MHN/Arrow&#039;&#039;&#039;) is a redundant login node from which all jobs on all servers start. This server is not directly accessible from outside the CSI campus. Note that the main server and its login nodes share the same name, Arrow; thus users can access the Arrow login nodes using either the name Arrow or MHN.&lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Chizen&#039;&#039;&#039; is a redundant gateway server which provides access to protected HPCC domain.&lt;br /&gt;
&lt;br /&gt;
o	 &#039;&#039;&#039;Cea&#039;&#039;&#039; is a file transfer node allowing transfer of files between users’ computers and the /scratch space or /global/u/&amp;lt;userid&amp;gt;. &#039;&#039;&#039;Cea&#039;&#039;&#039; is accessible directly (not only via &#039;&#039;&#039;Chizen&#039;&#039;&#039;), but allows only a limited set of shell commands.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Table 1&#039;&#039;&#039; below provides a quick summary of the attributes of each of the sub-clusters of the main HPC Center system, Arrow.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Master Head Node&lt;br /&gt;
!Sub System&lt;br /&gt;
!Tier&lt;br /&gt;
!Type&lt;br /&gt;
!Type of Jobs&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores&lt;br /&gt;
!GPUs&lt;br /&gt;
!Mem/node&lt;br /&gt;
!Mem/core&lt;br /&gt;
!Chip Type&lt;br /&gt;
!GPU Type and Interface&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;17&amp;quot; |&#039;&#039;&#039;&amp;lt;big&amp;gt;Arrow&amp;lt;/big&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Penzias&lt;br /&gt;
| rowspan=&amp;quot;10&amp;quot; |Advanced&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Hybrid Cluster&lt;br /&gt;
|Sequential &amp;amp; Parallel jobs w/wo GPU&lt;br /&gt;
|66&lt;br /&gt;
|16 &lt;br /&gt;
|2&lt;br /&gt;
|64 GB&lt;br /&gt;
|4 GB&lt;br /&gt;
|SB, EP 2.20 GHz&lt;br /&gt;
|K20m GPU, PCIe v2&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Sequential &amp;amp; Parallel jobs&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |1&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|1500 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|36&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|32 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Appel&lt;br /&gt;
|NUMA&lt;br /&gt;
|Massive Parallel, sequential, OpenMP&lt;br /&gt;
|1&lt;br /&gt;
|384&lt;br /&gt;
| -&lt;br /&gt;
|11 TB&lt;br /&gt;
|28 GB&lt;br /&gt;
|IB, 3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Cryo&lt;br /&gt;
|SMP&lt;br /&gt;
|Sequential and Parallel jobs, with GPU&lt;br /&gt;
|1&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|1500 GB&lt;br /&gt;
|37 GB&lt;br /&gt;
|SL, 2.40 GHz&lt;br /&gt;
|V100 (32GB) GPU, SXM&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Blue Moon&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Hybrid Cluster&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Sequential and Parallel jobs w/wo GPU&lt;br /&gt;
|24&lt;br /&gt;
|32&lt;br /&gt;
| -&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |192 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |6 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SL, 2.10 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|32 &lt;br /&gt;
|2&lt;br /&gt;
|V100(16GB) GPU, PCIe&lt;br /&gt;
|-&lt;br /&gt;
|Karle&lt;br /&gt;
|SMP&lt;br /&gt;
|Visualization, MATLAB/Mathematica&lt;br /&gt;
|1&lt;br /&gt;
|36*&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Chizen&lt;br /&gt;
|Gateway&lt;br /&gt;
|No jobs allowed&lt;br /&gt;
| colspan=&amp;quot;7&amp;quot; | -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CFD&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot; |Parallel, Seq, OpenMP &lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|768 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 4.8 GHz&lt;br /&gt;
|A40, PCIe, v4 &lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |PHYS&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|640 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4 GHz&lt;br /&gt;
|L40, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CHEM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.0 GHz&lt;br /&gt;
|A100/40, SXM&lt;br /&gt;
|-&lt;br /&gt;
|ASRC&lt;br /&gt;
|Condo&lt;br /&gt;
|SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|}&lt;br /&gt;
Note: SB = Intel(R) Sandy Bridge, HL = Intel (R) Haswell, IB = Intel (R) Ivy Bridge, SL = Intel (R) Xeon(R) Gold, ER  = AMD(R) EPYC ROMA, EM = AMD(R) EPYC MILAN, EG = AMD (R) EPYC GENOA   &lt;br /&gt;
&lt;br /&gt;
== Recovery of  operational costs ==&lt;br /&gt;
CUNY-HPCC is a not-for-profit core research facility at CUNY. Our mission is to support all types of research that require advanced computational resources. CUNY-HPCC operations are not for profit and are NOT directly or indirectly sponsored by CUNY or the College of Staten Island (CSI). Consequently, CUNY-HPCC applies a cost recovery model recapturing only &#039;&#039;&#039;&amp;lt;u&amp;gt;operational costs, with no profit for HPCC&amp;lt;/u&amp;gt;&#039;&#039;&#039;. The recovered costs are calculated using actual documented operational expenses and are break-even for all CUNY users. The methodology is approved by CUNY-RF and is the same as that used in other CUNY research facilities. The costs are reviewed and, if needed, updated twice a year. The cost recovery charging scheme is based on the &#039;&#039;&#039;&amp;lt;u&amp;gt;unit-hour&amp;lt;/u&amp;gt;&#039;&#039;&#039;. A unit can be either a CPU unit or a GPU unit; the definitions of these are given in the table below:&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Definitions of unit-hour&lt;br /&gt;
!Type of resource&lt;br /&gt;
!Unit-hour &lt;br /&gt;
!For V100, A30, A40 or L40&lt;br /&gt;
!For A100&lt;br /&gt;
|-&lt;br /&gt;
|CPU unit &lt;br /&gt;
|1 cpu core/hour&lt;br /&gt;
| --&lt;br /&gt;
| --&lt;br /&gt;
|-&lt;br /&gt;
|GPU unit &lt;br /&gt;
|(4 cpu cores + 1 GPU thread )/hour&lt;br /&gt;
|4 cpu cores + 1 GPU&lt;br /&gt;
|4 cpu cores and 1/7 A100 &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== HPCC access plans  ===&lt;br /&gt;
a.     &#039;&#039;&#039;Minimum access (MAP):&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Minimum access is designed to provide wide support for research activities in any college, to promote collaboration between colleges, to help establish a new research project, and/or to serve as a testbed for new studies. MAP accounts operate under a strict fair share policy, so the actual waiting time of a job in a queue depends on the resources used by that account in previous cycles. In addition, all jobs have strict time limits; therefore long jobs must use checkpoints.&lt;br /&gt;
&lt;br /&gt;
The MAP has 3 tiers:&lt;br /&gt;
&lt;br /&gt;
·     A: Basic tier fee is $5000 per year. It is designed to provide support for users from colleges with low level of research activities. The fee covers infrastructure expenses associated with 1-2 users from these colleges. &lt;br /&gt;
&lt;br /&gt;
·     B:  Medium tier fee is $15,000 per year. The fee covers infrastructure expenses of up to 12 users from these colleges. In addition, every account under medium tier gets free 11520 CPU hours and free 1440 GPU hours upon opening. &lt;br /&gt;
&lt;br /&gt;
·      C: Advanced tier is $25,000 per year. The fee covers infrastructure expenses for all users from these colleges. In addition every new account from this tier gets free 11520 CPU hours and free 1440 GPU hours upon opening.  &lt;br /&gt;
&lt;br /&gt;
MAP users are charged per CPU/GPU hour at the low rate of &#039;&#039;&#039;&amp;lt;u&amp;gt;$0.015 per CPU hour and $0.09 per GPU hour&amp;lt;/u&amp;gt;.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for MAP users&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.015/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.24/hour&lt;br /&gt;
|-&lt;br /&gt;
| 4 cores + 1 GPU &lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.15/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.33/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2  &lt;br /&gt;
|$0.42/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2  &lt;br /&gt;
|$0.66/hour&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$1.32/hour&lt;br /&gt;
|}&lt;br /&gt;
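&lt;br /&gt;
As a rough cross-check of the table above, the MAP entries can be reproduced from the posted rates with the simple linear model sketched below (an assumption for illustration, not an official billing formula):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Illustrative sketch: reproduces the MAP table rows from the posted rates.&lt;br /&gt;
CPU_RATE = 0.015   # USD per CPU-core hour (MAP)&lt;br /&gt;
GPU_RATE = 0.09    # USD per GPU hour (MAP)&lt;br /&gt;
&lt;br /&gt;
def map_hourly_cost(cores, gpus):&lt;br /&gt;
    # assumed model: the charge scales linearly with cores and GPUs&lt;br /&gt;
    return cores * CPU_RATE + gpus * GPU_RATE&lt;br /&gt;
&lt;br /&gt;
print(round(map_hourly_cost(16, 2), 3))   # 0.42, matching the 16 cores + 2 GPU row&lt;br /&gt;
print(round(map_hourly_cost(40, 8), 3))   # 1.32, matching the 40 cores + 8 GPU row&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;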
&lt;br /&gt;
&lt;br /&gt;
b.     &#039;&#039;&#039;Computing on demand (CODP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The computing on demand plan (CODP) is open to all users from all CUNY colleges that do not participate in the MAP plan but want to use the HPCC resources. CODP accounts operate under a strict fair share policy, so the actual waiting time of a job in a queue depends on the resources previously used. In addition, all jobs have time limits, so long jobs must use checkpoints. Users in CODP are charged for the time (CPU and GPU) per hour. The current rates are &#039;&#039;&#039;$0.018 per CPU hour and $0.11 per GPU hour.&#039;&#039;&#039; Unlike MAP, new CODP accounts do not come with free time. Invoices are generated and sent to users (PI only) at the end of each month. The examples in the following table explain the fee structure:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for CODP plan&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.018/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.288/hour&lt;br /&gt;
|-&lt;br /&gt;
|4 cores + 1 GPU&lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.293/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.334/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 1 GPU&lt;br /&gt;
|32&lt;br /&gt;
|1  &lt;br /&gt;
|$0.666/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU &lt;br /&gt;
|32&lt;br /&gt;
|2&lt;br /&gt;
|$0.756/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
c.  &#039;&#039;&#039;Leasing node(s) (LNP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The leasing node plan (LNP) allows users to lease node(s) for the duration of a project. The minimum lease term is 30 days (one month), but leases of any length are possible. A discount of 10% is given to users whose lease is longer than 90 days; discounts cannot be combined. Unlike MAP and CODP, LNP users do not compete for resources and have full access to the leased resources 24/7.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Lease node(s) fees for MAP users&lt;br /&gt;
|Job (MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/30 days&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$172.80&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$264.96&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2&lt;br /&gt;
|$302.40&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$475.20&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$760.0&lt;br /&gt;
|-&lt;br /&gt;
|64 cores + 8 GPU&lt;br /&gt;
|64&lt;br /&gt;
|8&lt;br /&gt;
|$950.40&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for lease a node(s) for &amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; non &amp;lt;/span&amp;gt; -MAP users&lt;br /&gt;
|Job (non-MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/month&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$249.82&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$497.64&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2 &lt;br /&gt;
|$443.23&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$886.64&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40 &lt;br /&gt;
|8&lt;br /&gt;
|$1399.68&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
d.     &#039;&#039;&#039;Condo Ownership (COP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Condo describes a model in which user(s) own a node/server managed by HPCC. Only full-time faculty can own a condo node. Condo nodes are fully integrated into the HPCC infrastructure. The owners pay only HPCC’s infrastructure support operational fee, which covers only a proportional part of the licenses and materials needed for day-to-day operations. The fees are reviewed twice a year and currently are $0.003 per CPU hour and $0.02 per GPU hour. Condo owners can “borrow” (upon agreement) any node(s) from the condo stack free of charge, and can also lease their own nodes (for a higher fee, see below) to non-condo users. The minimum lease term is 30 days. The fees collected from non-condo users offset the payments of the owner.&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Condo owners costs per year&lt;br /&gt;
|Type of condo node&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/year&lt;br /&gt;
|-&lt;br /&gt;
|Large hybrid SXM&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|$4518.92 &lt;br /&gt;
|-&lt;br /&gt;
|Small hybrid&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|$1540.54&lt;br /&gt;
|-&lt;br /&gt;
|Medium compute&lt;br /&gt;
|96&lt;br /&gt;
|0&lt;br /&gt;
|$2464.86&lt;br /&gt;
|-&lt;br /&gt;
|Large compute&lt;br /&gt;
|128&lt;br /&gt;
|0&lt;br /&gt;
|$3286.49&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Condo owners can lease their node(s) to other, non-condo users. The rental period is unlimited, with a minimum length of 30 days. The table below shows the payments that non-condo users make to the condo owners. These fees are accumulated in the owner’s account(s) and offset the owner’s obligations. A discount of 10% is applied for leases longer than 90 days.&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Type of nodes and lease fees for condo nodes&lt;br /&gt;
!Type of node&lt;br /&gt;
!Renters cost/month&lt;br /&gt;
!Long term (90+ days) rent cost/month&lt;br /&gt;
!CPU/node&lt;br /&gt;
!CPU type&lt;br /&gt;
!GPU/node &lt;br /&gt;
!GPU type&lt;br /&gt;
!GPU interface&lt;br /&gt;
|-&lt;br /&gt;
|Large Hybrid&lt;br /&gt;
|$602.52&lt;br /&gt;
|$564.86&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.2 GHz&lt;br /&gt;
|8&lt;br /&gt;
|A100/80&lt;br /&gt;
|SXM&lt;br /&gt;
|-&lt;br /&gt;
|Small Hybrid&lt;br /&gt;
|$205.41&lt;br /&gt;
|$192.57&lt;br /&gt;
|48&lt;br /&gt;
|EPYC, 2.8 GHz&lt;br /&gt;
|2&lt;br /&gt;
|A40, A30, L40&lt;br /&gt;
|PCIe v4&lt;br /&gt;
|-&lt;br /&gt;
|Medium Non GPU&lt;br /&gt;
|$328.65&lt;br /&gt;
|$308.11&lt;br /&gt;
|96&lt;br /&gt;
|EPYC, 4.11GHz&lt;br /&gt;
|0&lt;br /&gt;
|None&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|Large Non GPU&lt;br /&gt;
|$438.20&lt;br /&gt;
|$410.81&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.0 GHz&lt;br /&gt;
|0&lt;br /&gt;
|None &lt;br /&gt;
|NA&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Free time ===&lt;br /&gt;
Any new project &#039;&#039;&#039;from CUNY colleges/centers that participate in MAP-B or MAP-C&#039;&#039;&#039; is entitled to &#039;&#039;&#039;11520 free CPU hours and 1440 free GPU hours.&#039;&#039;&#039; Users under &#039;&#039;&#039;MAP-A&#039;&#039;&#039; are not entitled to free time. The free compute hours are intended to help establish a project and are therefore shared by all members of the project; they can be used either by the PI or by any number of the project&#039;s members. It is important to note that &amp;lt;u&amp;gt;free time is per project, not per user account, so any project can receive free time only once. External collaborators of CUNY are not normally eligible for free time.&amp;lt;/u&amp;gt; Additional hours beyond the free time are charged at MAP plan rates. &#039;&#039;&#039;&amp;lt;u&amp;gt;Please contact the CUNY-HPCC director for further details.&amp;lt;/u&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Support for research grants ==&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;u&amp;gt;All proposals dated Jan 1st 2026 (&amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; 01/01/26 &amp;lt;/span&amp;gt;) or later&amp;lt;/u&amp;gt;&#039;&#039;&#039; that require computational resources &#039;&#039;&#039;&amp;lt;u&amp;gt;must include a budget for cost recovery fees at CUNY-HPCC.&amp;lt;/u&amp;gt;&#039;&#039;&#039; For a project the PI can choose among the following options:&lt;br /&gt;
&lt;br /&gt;
* Lease node(s). This is a useful option for well-defined projects and for those with a large computational component requiring 100% availability of the computational resource.&lt;br /&gt;
* Use &amp;quot;on-demand&amp;quot; resources. This is a flexible option, well suited to experimental projects or to exploring new areas of study. The drawback is that resources are shared among all users under the fair share policy, so immediate access to a resource cannot be guaranteed.&lt;br /&gt;
* Participate in the CONDO tier. This is the most beneficial option in terms of availability of resources and level of support. It best fits the focused research of a group or groups (e.g. materials science).&lt;br /&gt;
&lt;br /&gt;
In all cases the PI can use the appropriate rates listed above to establish a correct budget for the proposal. The PI should &#039;&#039;&#039;&amp;lt;u&amp;gt;contact the Director of CUNY-HPCC, Dr. Alexander Tzanov&amp;lt;/u&amp;gt;&#039;&#039;&#039; (alexander.tzanov@csi.cuny.edu) to discuss the project&#039;s computational requirements, including optimal and most economical computational workflows, suitable hardware, shared or owned resources, CUNY-HPCC support options, and any other matter concerning a correct and optimal computational budget for the proposal.&lt;br /&gt;
&lt;br /&gt;
== Partitions and jobs ==&lt;br /&gt;
The only way to submit job(s) to the HPCC servers is through the SLURM batch system. Every job, regardless of its type (interactive, batch, serial, parallel, etc.), must be submitted via SLURM, which allocates the requested resources on the proper server and starts the job(s) according to a predefined, strict fair share policy. Computational resources (CPU cores, memory, GPU) are organized in &#039;&#039;&#039;partitions&#039;&#039;&#039;. Users are granted permission to use one or another partition together with the corresponding QOS key. The table below shows the partitions and their limitations (in progress).&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Partition&lt;br /&gt;
!Max cores/job&lt;br /&gt;
!Max jobs/user&lt;br /&gt;
!Total cores/group&lt;br /&gt;
!Time limits&lt;br /&gt;
!Tier&lt;br /&gt;
!&lt;br /&gt;
!GPU types&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|partnsf&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|K20m, V100/16, A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partchem&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A100/80, A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partcfd&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partsym&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partasrc&lt;br /&gt;
|48&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabD&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|V100/16,A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabN&lt;br /&gt;
|384&lt;br /&gt;
|50&lt;br /&gt;
|384&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|None&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partphys&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|L40&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;partnsf&#039;&#039;&#039; is the main partition, with assigned resources across all sub-servers. Users may submit sequential, thread-parallel or distributed-parallel jobs with or without GPU.&lt;br /&gt;
* &#039;&#039;&#039;partchem&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partphys&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partsym&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partasrc&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partmatlabD&#039;&#039;&#039; allows running MATLAB&#039;s Distributed Parallel Server across the main cluster.&lt;br /&gt;
* &#039;&#039;&#039;partmatlabN&#039;&#039;&#039; provides access to the large MATLAB node with 384 cores and 11 TB of shared memory. It is useful for running parallel MATLAB jobs with the Parallel Toolbox.&lt;br /&gt;
* &#039;&#039;&#039;partdev&#039;&#039;&#039; is dedicated to development. All HPCC users have access to this partition, which has the assigned resources of one computational node with 16 cores, 64 GB of memory and 2 GPUs (K20m). This partition has a time limit of 4 hours.&lt;br /&gt;
&lt;br /&gt;
== Hours of Operation ==&lt;br /&gt;
In order to maximize the use of resources, HPCC applies a “rolling” maintenance scheme across all systems. When downtime is needed, HPCC will notify all users a week or more in advance (unless an emergency occurs). Typically, the morning of the fourth Tuesday of the month, from 8:00 AM to 12:00 PM, is reserved (but not always used) for scheduled maintenance. Please plan accordingly. Unplanned maintenance to remedy system-related problems may be scheduled as needed outside the above-mentioned window. Reasonable attempts will be made to inform users running on the affected systems when such needs arise. Note that users are strongly encouraged to use checkpoints in their jobs.&lt;br /&gt;
&lt;br /&gt;
== User Support ==&lt;br /&gt;
Users are strongly encouraged to read this Wiki carefully before submitting ticket(s) for help. In particular, the sections on compiling and running parallel programs, and the section on the SLURM batch queueing system will give you the essential knowledge needed to use the CUNY HPCC systems.  We have strived to maintain the most uniform user applications environment possible across the Center&#039;s systems to ease the transfer of applications&lt;br /&gt;
and run scripts among them.  &lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY&lt;br /&gt;
community in parallel programming techniques, HPC computing architecture, and the essentials of using our&lt;br /&gt;
systems. Please follow our mailings on the subject and feel free to inquire about such courses. We regularly schedule training visits and classes at the various CUNY campuses; please let us know if such a training visit is of interest. In the past, topics have included an overview of parallel programming, GPU programming and architecture, using the evolutionary biology software at the HPC Center, the SLURM queueing system at the CUNY HPC Center, mixed GPU-MPI and OpenMP programming, etc. Staff have also presented guest lectures in formal classes throughout the CUNY campuses.&lt;br /&gt;
&lt;br /&gt;
If you have problems accessing your account and cannot login to the ticketing service, please send an email to:&lt;br /&gt;
&lt;br /&gt;
  [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu] &lt;br /&gt;
&lt;br /&gt;
== Warnings and modes of operation ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. hpchelp@csi.cuny.edu is for questions and account-help communication &#039;&#039;&#039;only&#039;&#039;&#039; and does not accept tickets unless the ticketing system is not operational. For tickets please use the ticketing system mentioned above; this ensures that the staff member with the most appropriate skill set and job-related responsibility will respond to your questions. During the business week you should expect a response within 48 hours, quite often even the same day. During the weekend you may not get any response.&lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;&#039;E-mails to hpchelp@csi.cuny.edu must have a valid CUNY e-mail as the reply address.&#039;&#039;&#039; Messages originating from public mail providers (Gmail, Hotmail, etc.) are filtered out.&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;&#039;Do not send questions to individual CUNY HPC Center staff members directly.&#039;&#039;&#039;  These will be returned to the sender with a polite request to submit a ticket or email the Helpline.  This applies to replies to initial questions as well.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff is focused on providing high-quality support to its user community, but compared to other HPC centers of similar size &#039;&#039;&#039;our staff is extremely lean&#039;&#039;&#039;. Please make full use of the tools that we have provided (especially the Wiki), and feel free to offer suggestions for improved service. We hope and expect that your experience in using our systems will be predictably good and productive.&lt;br /&gt;
&lt;br /&gt;
== User Manual ==&lt;br /&gt;
&lt;br /&gt;
The old version of the user manual provides PBS rather than SLURM batch scripts as examples. CUNY-HPCC currently uses the SLURM scheduler, so users must rely only on the updated brief SLURM manual distributed with new accounts, or ask CUNY-HPCC for a copy of it.&lt;/div&gt;</summary>
		<author><name>Alex</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=960</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=960"/>
		<updated>2026-03-09T16:20:24Z</updated>

		<summary type="html">&lt;p&gt;Alex: /* Mission of CUNY-HPCC */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
[[Image:hpcc-panorama3.png]]&lt;br /&gt;
&lt;br /&gt;
== Mission of CUNY-HPCC ==&lt;br /&gt;
The City University of New York (CUNY) High Performance Computing Center (HPCC) is a core research and education facility for the University. It is located on the campus of the College of Staten Island, 2800 Victory Boulevard, Staten Island, New York 10314. The core mission of CUNY-HPCC is to advance scientific research and discovery at the University by managing state-of-the-art computing infrastructure and by providing comprehensive research support services, including domain-specific expertise in various aspects of computationally intensive research. CUNY is a member of the Empire AI (EAI) consortium, so CUNY-HPCC is a stepping stone for CUNY researchers toward the advanced architectures at EAI.&lt;br /&gt;
&lt;br /&gt;
== CUNY-HPCC services ==&lt;br /&gt;
CUNY-HPCC provides professionally maintained, modern computational environment and architectures, advanced storage and fast interconnects.    &lt;br /&gt;
&lt;br /&gt;
:*Support the scientific computing needs of CUNY faculty, their collaborators at other universities, and their public and private sector partners, and CUNY students and research staff.&lt;br /&gt;
:*Provides state-of-the-art computing resources and comprehensive research support services including expertise and full support for users with allocation on EMPIRE-AI. &lt;br /&gt;
:*Creates opportunities for the CUNY research community to develop new partnerships with the government and private sectors.&lt;br /&gt;
:*Leverage the HPC Center capabilities to acquire additional research resources for its faculty and graduate students in existing and major new programs. &lt;br /&gt;
&lt;br /&gt;
==Organization of systems and data storage (architecture)==&lt;br /&gt;
&lt;br /&gt;
All user data and project data are kept on  Parallel File System Storage (PFSS) which is mounted only on login node(s) of all servers. It holds both user directories and specific partition  called  &#039;&#039;&#039;/scratch (see below).&#039;&#039;&#039; The main features of &#039;&#039;&#039;/scratch&#039;&#039;&#039;  partition are&#039;&#039;&#039;: 1.&#039;&#039;&#039; It is mounted on all computational nodes and on all login nodes &#039;&#039;&#039;2.&#039;&#039;&#039; It is fast &#039;&#039;&#039;3.&#039;&#039;&#039; &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition is temporary space, but is &#039;&#039;&#039;not  a home directory&#039;&#039;&#039;  for accounts nor can be used for long term data preservation.  Users must use &amp;quot;staging&amp;quot; procedure described below to ensure preservation of their data, codes and parameters files. The figure below is a schematic of the environment.   &lt;br /&gt;
&lt;br /&gt;
Upon  registering with HPCC every user will get 2 directories:&lt;br /&gt;
&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – this is temporary workspace on the HPC systems. Currently scratch resides on the same file system as global/u.&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – space for “home directory”, i.e., storage space on the DSMS for program, scripts, and data&lt;br /&gt;
&lt;br /&gt;
:•	In some instances a user will also have use of disk space on the DSMS in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;projectid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; (IRods). &lt;br /&gt;
[[File:HPCC_structure.png|center|frameless|900x900px]]&lt;br /&gt;
The &#039;&#039;&#039;/global/u/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory has a quota (see below for details), while &#039;&#039;&#039;/scratch/&amp;lt;userid&amp;gt;&#039;&#039;&#039; does not. However, the &#039;&#039;&#039;/scratch&#039;&#039;&#039; space is cleaned up following the rules described below, and there are no guarantees of any kind that files in &#039;&#039;&#039;/scratch&#039;&#039;&#039; will be preserved during hardware crashes or clean-ups. Access to all HPCC resources is provided by a bastion host called &#039;&amp;lt;nowiki/&amp;gt;&#039;&#039;&#039;&#039;&#039;chizen&#039;&#039;&#039;&#039;&#039;. The Data Transfer Node, called &#039;&#039;&#039;Cea&#039;&#039;&#039;, allows file transfer from/to remote sites directly to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; or to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==HPC systems==&lt;br /&gt;
&lt;br /&gt;
The HPC Center operates variety of architectures in order to support complex and demanding workflows.  All computational resources of different types are united into single hybrid cluster called Arrow. The latter deploys symmetric multiprocessor (also referred as SMP) nodes with and without GPU, distributed shared memory (NUMA) node, fat (large memory) nodes and advanced SMP nodes with multiple GPU. The number of GPU per node varies between 2 and 8 as well as employed GPU interface and GPU family. Thus the basic GPU nodes hold  two Tesla K20m (plugged through PCIe interface) while the most advanced ones  support eight Ampere A100 GPU connected via SXM interface.   &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Overview of Computational architectures&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SMP&#039;&#039;&#039; servers have several processors (working under a single operating system) which &amp;quot;share everything&amp;quot;.  Thus  all cpu-cores allocate a common memory block via shared bus or data path. SMP servers support all combinations of memory VS cpu (up to the limits of the particular computer). The SMP servers are commonly used to run sequential or thread parallel (e.g. OpenMP) jobs and they may have or may not have GPU.  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Cluster&#039;&#039;&#039; is defined as a single system comprising set of servers interconnected with high performance network. Specific software coordinates  programs on and/or across those in order to  perform computationally intensive tasks. The most common cluster type is the one that consists of several identical SMP servers connected via fast interconnect.  Each SMP member of the cluster is called a &#039;&#039;&#039;node&#039;&#039;&#039;. All nodes run independent copies of the same operating system (OS). Some or all of the nodes may incorporate GPU.  &lt;br /&gt;
&lt;br /&gt;
Hybrid clusters combine nodes of different architectures. For instance the main CUNY-HPCC machine is a hybrid cluster called &#039;&#039;&#039;Arrow&#039;&#039;&#039;.  Sixty two (62) of its nodes are identical GPU enabled SMP servers each with 2 x GPU K20m, 3 are SMP but with extended memory (fat nodes), one node is distributed shared memory  node (NUMA, see below) and 2 are fat SMP servers especially designed to support 8 NVIDIA GPU per node. The latter are connected via SXM interface. In addition HPCC operates the cluster &#039;&#039;&#039;Herbert&#039;&#039;&#039; dedicated only to education. &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;distributed shared memory&#039;&#039;&#039; computer is a tightly coupled server in which the memory is physically distributed but logically unified as a single block. The system resembles an SMP, but the possible number of CPU cores and amount of memory are far beyond the limitations of SMP.  Because the memory is distributed, access times across the address space are non-uniform; hence this architecture is called Non-Uniform Memory Access (NUMA).  Similarly to SMP, &#039;&#039;&#039;NUMA&#039;&#039;&#039; systems are typically used for applications such as data mining and decision support systems, in which processing can be parceled out to a number of processors that collectively work on common data. HPCC operates the &#039;&#039;&#039;NUMA&#039;&#039;&#039; node of Arrow, named &#039;&#039;&#039;Appel&#039;&#039;&#039;.  This node does not have GPUs.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039; Infrastructure systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
o	Master Head Node (&#039;&#039;&#039;MHN/Arrow&#039;&#039;&#039;) is a redundant login node from which all jobs on all servers start. This server is not directly accessible from outside the CSI campus. Note that the main server and its login nodes share the same name, Arrow; thus users can access the Arrow login nodes using the name Arrow or MHN.&lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Chizen&#039;&#039;&#039; is a redundant gateway server which provides access to the protected HPCC domain.&lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Cea&#039;&#039;&#039; is a file transfer node allowing transfer of files between users’ computers and the /scratch space or /global/u/&amp;lt;userid&amp;gt;. &#039;&#039;&#039;Cea&#039;&#039;&#039; is accessible directly (not only via &#039;&#039;&#039;Chizen&#039;&#039;&#039;), but allows only a limited set of shell commands.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Table 1&#039;&#039;&#039; below provides a quick summary of the attributes of each of the sub-clusters of the main HPC Center machine, called Arrow.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Master Head Node&lt;br /&gt;
!Sub System&lt;br /&gt;
!Tier&lt;br /&gt;
!Type&lt;br /&gt;
!Type of Jobs&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores&lt;br /&gt;
!GPUs&lt;br /&gt;
!Mem/node&lt;br /&gt;
!Mem/core&lt;br /&gt;
!Chip Type&lt;br /&gt;
!GPU Type and Interface&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;17&amp;quot; |&#039;&#039;&#039;&amp;lt;big&amp;gt;Arrow&amp;lt;/big&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Penzias&lt;br /&gt;
| rowspan=&amp;quot;10&amp;quot; |Advanced&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Hybrid Cluster&lt;br /&gt;
|Sequential &amp;amp; Parallel jobs w/wo GPU&lt;br /&gt;
|66&lt;br /&gt;
|16 &lt;br /&gt;
|2&lt;br /&gt;
|64 GB&lt;br /&gt;
|4 GB&lt;br /&gt;
|SB, EP 2.20 GHz&lt;br /&gt;
|K20m GPU, PCIe v2&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Sequential &amp;amp; Parallel jobs&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |1&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|1500 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|36&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|32 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Appel&lt;br /&gt;
|NUMA&lt;br /&gt;
|Massive Parallel, sequential, OpenMP&lt;br /&gt;
|1&lt;br /&gt;
|384&lt;br /&gt;
| -&lt;br /&gt;
|11 TB&lt;br /&gt;
|28 GB&lt;br /&gt;
|IB, 3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Cryo&lt;br /&gt;
|SMP&lt;br /&gt;
|Sequential and Parallel jobs, with GPU&lt;br /&gt;
|1&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|1500 GB&lt;br /&gt;
|37 GB&lt;br /&gt;
|SL, 2.40 GHz&lt;br /&gt;
|V100 (32GB) GPU, SXM&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Blue Moon&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Hybrid Cluster&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Sequential and Parallel jobs w/wo GPU&lt;br /&gt;
|24&lt;br /&gt;
|32&lt;br /&gt;
| -&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |192 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |6 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SL, 2.10 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|32 &lt;br /&gt;
|2&lt;br /&gt;
|V100(16GB) GPU, PCIe&lt;br /&gt;
|-&lt;br /&gt;
|Karle&lt;br /&gt;
|SMP&lt;br /&gt;
|Visualization, MATLAB/Mathematica&lt;br /&gt;
|1&lt;br /&gt;
|36*&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Chizen&lt;br /&gt;
|Gateway&lt;br /&gt;
|No jobs allowed&lt;br /&gt;
| colspan=&amp;quot;7&amp;quot; | -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CFD&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot; |Parallel, Seq, OpenMP &lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|768 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 4.8 GHz&lt;br /&gt;
|A40, PCIe, v4 &lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |PHYS&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|640 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4 GHz&lt;br /&gt;
|L40, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CHEM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.0 GHz&lt;br /&gt;
|A100/40, SXM&lt;br /&gt;
|-&lt;br /&gt;
|ASRC&lt;br /&gt;
|Condo&lt;br /&gt;
|SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|}&lt;br /&gt;
Note: SB = Intel(R) Sandy Bridge, HL = Intel(R) Haswell, IB = Intel(R) Ivy Bridge, SL = Intel(R) Xeon(R) Gold, ER = AMD(R) EPYC Rome, EM = AMD(R) EPYC Milan, EG = AMD(R) EPYC Genoa&lt;br /&gt;
&lt;br /&gt;
== Recovery of  operational costs ==&lt;br /&gt;
CUNY-HPCC is a not-for-profit core research facility at CUNY. Our mission is to support all types of research that require advanced computational resources. CUNY-HPCC operations are not for profit and are NOT directly or indirectly sponsored by CUNY or the College of Staten Island (CSI). Consequently, CUNY-HPCC applies a cost recovery model recapturing only &#039;&#039;&#039;&amp;lt;u&amp;gt;operational costs, with no profit for HPCC&amp;lt;/u&amp;gt;&#039;&#039;&#039;. The recovered costs are calculated using actual documented operational expenses and are break-even for all CUNY users. The methodology is approved by CUNY-RF and is the same methodology used in other CUNY research facilities. The costs are reviewed, and updated accordingly, twice a year. The cost recovery charging schema is based on the &#039;&#039;&#039;&amp;lt;u&amp;gt;unit-hour&amp;lt;/u&amp;gt;&#039;&#039;&#039;. The unit can be either a CPU unit or a GPU unit. The definitions of these are given in the table below:&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Definitions of unit-hour&lt;br /&gt;
!Type of resource&lt;br /&gt;
!Unit-hour &lt;br /&gt;
!For V100, A30, A40 or L40&lt;br /&gt;
!For A100&lt;br /&gt;
|-&lt;br /&gt;
|CPU unit &lt;br /&gt;
|1 cpu core/hour&lt;br /&gt;
| --&lt;br /&gt;
| --&lt;br /&gt;
|-&lt;br /&gt;
|GPU unit &lt;br /&gt;
|(4 cpu cores + 1 GPU thread )/hour&lt;br /&gt;
|4 cpu cores + 1 GPU&lt;br /&gt;
|4 cpu cores and 1/7 A100 &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== HPCC access plans  ===&lt;br /&gt;
a.     &#039;&#039;&#039;Minimum access (MAP):&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Minimum access is designed to provide wide support for research activities in any college, to promote collaboration between colleges, to help establish a new research project, and/or to be testbed for new studies. MAP accounts operate under strict fair share policy so actual waiting time for a job in a que depends on resources used by that account in previous cycles. In addition all jobs have strict time limitations. Therefore long jobs must use check-points.&lt;br /&gt;
&lt;br /&gt;
The MAP has 3 tiers:&lt;br /&gt;
&lt;br /&gt;
·     A: Basic tier fee is $5000 per year. It is designed to provide support for users from colleges with low level of research activities. The fee covers infrastructure expenses associated with 1-2 users from these colleges. &lt;br /&gt;
&lt;br /&gt;
·     B:  Medium tier fee is $15,000 per year. The fee covers infrastructure expenses of up to 12 users from these colleges. In addition, every account under medium tier gets free 11520 CPU hours and free 1440 GPU hours upon opening. &lt;br /&gt;
&lt;br /&gt;
·      C: Advanced tier is $25,000 per year. The fee covers infrastructure expenses for all users from these colleges. In addition every new account from this tier gets free 11520 CPU hours and free 1440 GPU hours upon opening.  &lt;br /&gt;
&lt;br /&gt;
The MAP users get charged per CPU/GPU hour at low rate of &#039;&#039;&#039;&amp;lt;u&amp;gt;$0.015 per cpu hour and $0.09 per GPU hour&amp;lt;/u&amp;gt;.&#039;&#039;&#039;   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for MAP users&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.015/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.24/hour&lt;br /&gt;
|-&lt;br /&gt;
| 4 cores + 1 GPU &lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.15/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.33/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2  &lt;br /&gt;
|$0.42/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2  &lt;br /&gt;
|$0.66/hour&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$1.32/hour&lt;br /&gt;
|}&lt;br /&gt;
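These fees follow directly from the MAP rates above: for example, a job using 4 cores and 1 GPU costs 4 x $0.015 + 1 x $0.09 = $0.15 per hour, and a job using 16 cores and 2 GPU costs 16 x $0.015 + 2 x $0.09 = $0.42 per hour.&lt;br /&gt;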
&lt;br /&gt;
&lt;br /&gt;
b.     &#039;&#039;&#039;Computing on demand (CODP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Computing on demand plan (CODP) is open for all users from all CUNY colleges that do not participate in MAP plan, but want to use the HPCC resources. CODP accounts operate under strict fair share policy, so actual waiting time for a job in a que depends on resources previously used. In addition, all jobs have time limitations, so long jobs must use check-points. The users in CODP are charged for the time (CPU and GPU) per hour. The current rates are &#039;&#039;&#039;$0.018 per cpu hour and $0.11 per GPU hour.&#039;&#039;&#039;  In difference to MAP, the new CODP accounts does not come with free time. The invoices are generated and send to users (PI only) at the end of each month.  The examples in following table explain the fees structure:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for CODP plan&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.018/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.288/hour&lt;br /&gt;
|-&lt;br /&gt;
|4 cores + 1 GPU&lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.293/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.334/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 1 GPU&lt;br /&gt;
|32&lt;br /&gt;
|1  &lt;br /&gt;
|$0.666/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU &lt;br /&gt;
|32&lt;br /&gt;
|2&lt;br /&gt;
|$0.756/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
c.  &#039;&#039;&#039;Leasing node(s) (LNP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Leasing node plan allows the users to lease the node(s) for the duration of the project. The minimum lease time is 30 days (one month), but leases of any length are possible. Discounts of 10% are given to users whose lease is longer than 90 days. Discounts cannot be combined.  In difference to MAP and CODP the LNP users do not compete for resources and have full access to rented resources 24/7.   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Lease node(s) fees for MAP users&lt;br /&gt;
|Job (MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/30 days&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$172.80&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$264.96&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2&lt;br /&gt;
|$302.40&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$475.20&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$760.0&lt;br /&gt;
|-&lt;br /&gt;
|64 cores + 8 GPU&lt;br /&gt;
|64&lt;br /&gt;
|8&lt;br /&gt;
|$950.40&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for lease a node(s) for &amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; non &amp;lt;/span&amp;gt; -MAP users&lt;br /&gt;
|Job (non-MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/month&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$249.82&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$497.64&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2 &lt;br /&gt;
|$443.23&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$886.64&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40 &lt;br /&gt;
|8&lt;br /&gt;
|$1399.68&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
d.     &#039;&#039;&#039;Condo Ownership (COP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Condo describes a model in which user(s) own a node/server managed by HPCC. Only full-time faculty can own a condo node. Condo nodes are fully integrated into the HPCC infrastructure. The owners pay only HPCC’s infrastructure support operational fee, which includes only a proportional part of the licenses and materials needed for day-to-day operations. The fees are reviewed twice a year and are currently $0.003 per CPU hour and $0.02 per GPU hour. Condo owners can “borrow” (upon agreement), free of charge, any node(s) from the condo stack and can also lease (for a higher fee – see below) their own nodes to non-condo users. The minimum lease time is 30 days. The fees collected from non-condo users offset the payments of the owner.&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Condo owners costs per year&lt;br /&gt;
|Type of condo node&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/year&lt;br /&gt;
|-&lt;br /&gt;
|Large hybrid SXM&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|$4518.92 &lt;br /&gt;
|-&lt;br /&gt;
|Small hybrid&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|$1540.54&lt;br /&gt;
|-&lt;br /&gt;
|Medium compute&lt;br /&gt;
|96&lt;br /&gt;
|0&lt;br /&gt;
|$2464.86&lt;br /&gt;
|-&lt;br /&gt;
|Large compute&lt;br /&gt;
|128&lt;br /&gt;
|0&lt;br /&gt;
|$3286.49&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Condo owners can lease their node(s) to other non-condo users. The rental period is unlimited, with a minimum length of 30 days. The table below shows the payments that non-condo users make to the condo owners. These fees are accumulated in the owner’s account(s) and offset the owner’s fees. A discount of 10% is applied for leases longer than 90 days.&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Type of nodes and lease fees for condo nodes&lt;br /&gt;
!Type of node&lt;br /&gt;
!Renters cost/month&lt;br /&gt;
!Long term (90+ days) rent cost/month&lt;br /&gt;
!CPU/node&lt;br /&gt;
!CPU type&lt;br /&gt;
!GPU/node &lt;br /&gt;
!GPU type&lt;br /&gt;
!GPU interface&lt;br /&gt;
|-&lt;br /&gt;
|Large Hybrid&lt;br /&gt;
|$602.52&lt;br /&gt;
|$564.86&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.2 GHz&lt;br /&gt;
|8&lt;br /&gt;
|A100/80&lt;br /&gt;
|SXM&lt;br /&gt;
|-&lt;br /&gt;
|Small Hybrid&lt;br /&gt;
|$205.41&lt;br /&gt;
|$192.57&lt;br /&gt;
|48&lt;br /&gt;
|EPYC, 2.8 GHz&lt;br /&gt;
|2&lt;br /&gt;
|A40, A30, L40&lt;br /&gt;
|PCIe v4&lt;br /&gt;
|-&lt;br /&gt;
|Medium Non GPU&lt;br /&gt;
|$328.65&lt;br /&gt;
|$308.11&lt;br /&gt;
|96&lt;br /&gt;
|EPYC, 4.11GHz&lt;br /&gt;
|0&lt;br /&gt;
|None&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|Large Non GPU&lt;br /&gt;
|$438.20&lt;br /&gt;
|$410.81&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.0 GHz&lt;br /&gt;
|0&lt;br /&gt;
|None &lt;br /&gt;
|NA&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Free time ===&lt;br /&gt;
Any new project &#039;&#039;&#039;from CUNY colleges/centers that participate in MAP-B or MAP-C&#039;&#039;&#039; is entitled to &#039;&#039;&#039;11520 free CPU hours and 1440 free GPU hours.&#039;&#039;&#039; Users under &#039;&#039;&#039;MAP-A&#039;&#039;&#039; are not entitled to free time. The free compute hours are intended to help establish a project and are therefore shared among all members of the project: they can be used either by the PI or by any number of the project&#039;s members. It is important to note that &amp;lt;u&amp;gt;free time is per project, not per user account, so any project can have free time only once. Collaborators external to CUNY are not normally eligible for free time.&amp;lt;/u&amp;gt; Additional hours beyond the free time are charged at MAP plan rates. &#039;&#039;&#039;&amp;lt;u&amp;gt;Please contact the CUNY-HPCC director for further details.&amp;lt;/u&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Support for research grants ==&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;u&amp;gt;All proposals dated Jan 1st, 2026 (&amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; 01/01/26 &amp;lt;/span&amp;gt;) or later&amp;lt;/u&amp;gt;&#039;&#039;&#039; that require computational resources &#039;&#039;&#039;&amp;lt;u&amp;gt;must include a budget for cost recovery fees at CUNY-HPCC.&amp;lt;/u&amp;gt;&#039;&#039;&#039; For a project, the PI can choose to:&lt;br /&gt;
&lt;br /&gt;
* lease node(s). This is a useful option for well-defined projects and for those with a large computational component requiring 100% availability of the computational resource.&lt;br /&gt;
* use &amp;quot;on-demand&amp;quot; resources. This is a flexible option, good for experimental projects or for exploring new areas of study. The downside is that resources are shared among all users under the fair share policy, so immediate access to a resource cannot be guaranteed.&lt;br /&gt;
* participate in the CONDO tier. This is the most beneficial option in terms of availability of resources and level of support. It best fits the focused research of a group or groups (e.g. materials science).&lt;br /&gt;
&lt;br /&gt;
In all cases the PI can use the appropriate rates listed above to establish a correct budget for the proposal.  The PI should &#039;&#039;&#039;&amp;lt;u&amp;gt;contact the Director of CUNY-HPCC, Dr. Alexander Tzanov&amp;lt;/u&amp;gt;&#039;&#039;&#039; (alexander.tzanov@csi.cuny.edu) to discuss the project&#039;s computational requirements, including optimal and most economical computational workflows, suitable hardware, shared or owned resources, CUNY-HPCC support options, and any other matter concerning a correct and optimal computational budget for the proposal.&lt;br /&gt;
&lt;br /&gt;
== Partitions and jobs ==&lt;br /&gt;
The only way to submit job(s) to the HPCC servers is through the SLURM batch system.  Any job, regardless of its type (interactive, batch, serial, parallel, etc.), must be submitted via SLURM, which allocates the requested resources on the proper server and starts the job(s) according to a predefined strict fair share policy. Computational resources (CPU cores, memory, GPUs) are organized in &#039;&#039;&#039;partitions&#039;&#039;&#039;. Users are granted permission to use one or another partition together with the corresponding QOS key. The table below shows the partitions and their limitations (in progress); a minimal submission script is sketched after the partition descriptions below.&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Partition&lt;br /&gt;
!Max cores/job&lt;br /&gt;
!Max jobs/user&lt;br /&gt;
!Total cores/group&lt;br /&gt;
!Time limits&lt;br /&gt;
!Tier&lt;br /&gt;
!&lt;br /&gt;
!GPU types&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|partnsf&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|K20m, V100/16, A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partchem&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A100/80, A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partcfd&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partsym&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partasrc&lt;br /&gt;
|48&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabD&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|V100/16,A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabN&lt;br /&gt;
|384&lt;br /&gt;
|50&lt;br /&gt;
|384&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|None&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partphys&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|L40&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;partnsf&#039;&#039;&#039; is the main partition with assigned resources across all sub-servers. Users may submit sequential, thread parallel or distributed parallel jobs with or without GPU.&lt;br /&gt;
* &#039;&#039;&#039;partchem&#039;&#039;&#039;  is CONDO partition.  &lt;br /&gt;
* &#039;&#039;&#039;partphys&#039;&#039;&#039;   is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partsym&#039;&#039;&#039;    is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partasrc&#039;&#039;&#039;   is CONDO partition&lt;br /&gt;
* &#039;&#039;&#039;partmatlabD&#039;&#039;&#039; allows running MATLAB&#039;s Distributed Parallel Server across the main cluster.&lt;br /&gt;
* &#039;&#039;&#039;partmatlabN&#039;&#039;&#039; gives access to the large MATLAB node with 384 cores and 11 TB of shared memory. It is useful for running parallel MATLAB jobs with the Parallel Computing Toolbox.&lt;br /&gt;
* &#039;&#039;&#039;partdev&#039;&#039;&#039; is dedicated to development. All HPCC users have access to this partition, with assigned resources of one computational node with 16 cores, 64 GB of memory and 2 GPUs (K20m). This partition has a time limit of 4 hours.&lt;br /&gt;
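&lt;br /&gt;
A minimal SLURM batch script is sketched below. The partition, QOS key, resource counts, module and program names are placeholders; adjust them to your allocation and to the partition limits listed above.&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --job-name=example&lt;br /&gt;
 #SBATCH --partition=partnsf         # a partition you are authorized to use&lt;br /&gt;
 #SBATCH --qos=YOUR_QOS              # the QOS key assigned to your account, if required&lt;br /&gt;
 #SBATCH --ntasks=16                 # CPU cores requested&lt;br /&gt;
 #SBATCH --gres=gpu:1                # request one GPU; omit this line for CPU-only jobs&lt;br /&gt;
 #SBATCH --time=24:00:00             # must respect the partition time limit&lt;br /&gt;
 module load my_application          # load your application environment&lt;br /&gt;
 srun ./my_program input.dat&lt;br /&gt;
The script is submitted with &#039;&#039;sbatch myjob.slurm&#039;&#039;, and the queue can be inspected with &#039;&#039;squeue -u USERID&#039;&#039;.&lt;br /&gt;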
&lt;br /&gt;
== Hours of Operation ==&lt;br /&gt;
In order to maximize the use of resources, HPCC applies a “rolling” maintenance scheme across all systems. When downtime is needed, HPCC will notify all users a week or more in advance (unless an emergency situation occurs).  Typically, the fourth Tuesday morning of the month, from 8:00 AM to 12:00 PM, is reserved (but not always used) for scheduled maintenance.  Please plan accordingly.  Unplanned maintenance to remedy system-related problems may be scheduled as needed outside the above-mentioned window. Reasonable attempts will be made to inform users running on the affected systems when such needs arise. Note that users are strongly encouraged to use checkpoints in their jobs.&lt;br /&gt;
&lt;br /&gt;
== User Support ==&lt;br /&gt;
Users are strongly encouraged to read this Wiki carefully before submitting ticket(s) for help. In particular, the sections on compiling and running parallel programs, and the section on the SLURM batch queueing system will give you the essential knowledge needed to use the CUNY HPCC systems.  We have strived to maintain the most uniform user applications environment possible across the Center&#039;s systems to ease the transfer of applications&lt;br /&gt;
and run scripts among them.  &lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY&lt;br /&gt;
community in parallel programming techniques, HPC computing architecture, and the essentials of using our&lt;br /&gt;
systems. Please follow our mailings on the subject and feel free to inquire about such courses.  We regularly schedule training visits and classes at the various CUNY campuses.  Please let us know if such a training visit is of interest.  In the past, topics have included an overview of parallel programming, GPU programming and architecture, using the evolutionary biology software at the HPC Center, the SLURM queueing system at the CUNY HPC Center, mixed GPU-MPI and OpenMP programming, etc.  Staff have also presented guest lectures in formal classes throughout the CUNY campuses.&lt;br /&gt;
&lt;br /&gt;
If you have problems accessing your account and cannot log in to the ticketing service, please send an email to:&lt;br /&gt;
&lt;br /&gt;
  [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu] &lt;br /&gt;
&lt;br /&gt;
== Warnings and modes of operation ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. hpchelp@csi.cuny.edu is for questions and account-help communication &#039;&#039;&#039;only&#039;&#039;&#039; and does not accept tickets unless the ticketing system is not operational. For tickets, please use the ticketing system mentioned above. This ensures that the staff member with the most appropriate skill set and job-related responsibility will respond to your questions. During the business week you should expect a response within 48 hours, often even the same day. During the weekend you may not get any response.&lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;&#039;E-mails to hpchelp@csi.cuny.edu must have a valid CUNY e-mail as the reply address.&#039;&#039;&#039; Messages originating from public mailers (Google, Hotmail, etc.) are filtered out.&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;&#039;Do not send questions to individual CUNY HPC Center staff members directly.&#039;&#039;&#039;  These will be returned to the sender with a polite request to submit a ticket or email the Helpline.  This applies to replies to initial questions as well.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff members are focused on providing high quality support to its user community, but compared&lt;br /&gt;
to other HPC Centers of similar size &#039;&#039;&#039;our staff is extremely lean&#039;&#039;&#039;.  Please make full use of the tools that we have provided (especially&lt;br /&gt;
the Wiki), and feel free to offer suggestions for improved service.  We hope and expect your experience in using&lt;br /&gt;
our systems will be predictably good and productive.&lt;br /&gt;
&lt;br /&gt;
== User Manual ==&lt;br /&gt;
&lt;br /&gt;
The old version of the user manual provides PBS, not SLURM, batch scripts as examples. CUNY-HPCC currently uses the SLURM scheduler, so users must rely only on the updated brief SLURM manual distributed with new accounts, or ask CUNY-HPCC for a copy.&lt;/div&gt;</summary>
		<author><name>Alex</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=959</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=959"/>
		<updated>2026-03-09T16:19:04Z</updated>

		<summary type="html">&lt;p&gt;Alex: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
[[Image:hpcc-panorama3.png]]&lt;br /&gt;
&lt;br /&gt;
== Mission of CUNY-HPCC ==&lt;br /&gt;
The City University of New York (CUNY) High Performance Computing Center (HPCC) is a core research facility for the University. It is located on the campus of the College of Staten Island, 2800 Victory Boulevard, Staten Island, New York 10314. The core mission of CUNY-HPCC is to advance scientific research and discovery at the University by managing state-of-the-art computing infrastructure and by providing comprehensive research support services, including domain-specific expertise in various aspects of computationally intensive research. CUNY is a member of the Empire AI (EAI) consortium, so CUNY-HPCC is a stepping stone for CUNY researchers toward the advanced architectures at EAI.&lt;br /&gt;
&lt;br /&gt;
== CUNY-HPCC services ==&lt;br /&gt;
CUNY-HPCC provides a professionally maintained, modern computational environment and architectures, advanced storage, and fast interconnects.&lt;br /&gt;
&lt;br /&gt;
:*Supports the scientific computing needs of CUNY faculty, their collaborators at other universities, their public and private sector partners, and CUNY students and research staff.&lt;br /&gt;
:*Provides state-of-the-art computing resources and comprehensive research support services, including expertise and full support for users with allocation on EMPIRE-AI.&lt;br /&gt;
:*Creates opportunities for the CUNY research community to develop new partnerships with the government and private sectors.&lt;br /&gt;
:*Leverages the HPC Center capabilities to acquire additional research resources for its faculty and graduate students in existing and major new programs.&lt;br /&gt;
&lt;br /&gt;
==Organization of systems and data storage (architecture)==&lt;br /&gt;
&lt;br /&gt;
All user and project data are kept on the Parallel File System Storage (PFSS), which is mounted only on the login node(s) of all servers. It holds both the user directories and a specific partition called &#039;&#039;&#039;/scratch&#039;&#039;&#039; (see below). The main features of the &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition are&#039;&#039;&#039;: 1.&#039;&#039;&#039; It is mounted on all computational nodes and on all login nodes, &#039;&#039;&#039;2.&#039;&#039;&#039; It is fast, &#039;&#039;&#039;3.&#039;&#039;&#039; &#039;&#039;&#039;/scratch&#039;&#039;&#039; is temporary space; it is &#039;&#039;&#039;not a home directory&#039;&#039;&#039; for accounts, nor can it be used for long-term data preservation. Users must use the &amp;quot;staging&amp;quot; procedure described below to ensure preservation of their data, codes and parameter files. The figure below is a schematic of the environment.&lt;br /&gt;
&lt;br /&gt;
Upon  registering with HPCC every user will get 2 directories:&lt;br /&gt;
&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – this is temporary workspace on the HPC systems. Currently /scratch resides on the same file system as /global/u.&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – “home directory” space, i.e., storage space on the DSMS for programs, scripts, and data.&lt;br /&gt;
&lt;br /&gt;
:•	In some instances a user will also have use of disk space on the DSMS in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;projectid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; (iRODS).&lt;br /&gt;
[[File:HPCC_structure.png|center|frameless|900x900px]]&lt;br /&gt;
The &#039;&#039;&#039;/global/u/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory has a quota (see below for details), while &#039;&#039;&#039;/scratch/&amp;lt;userid&amp;gt;&#039;&#039;&#039; does not. However, the &#039;&#039;&#039;/scratch&#039;&#039;&#039; space is cleaned up following the rules described below. There are no guarantees of any kind that files in &#039;&#039;&#039;/scratch&#039;&#039;&#039; will be preserved during hardware crashes or clean-up.  Access to all HPCC resources is provided through a bastion host called &#039;&#039;&#039;chizen&#039;&#039;&#039;.  The Data Transfer Node, called &#039;&#039;&#039;Cea&#039;&#039;&#039;, allows file transfer between remote sites and &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; or &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==HPC systems==&lt;br /&gt;
&lt;br /&gt;
The HPC Center operates a variety of architectures in order to support complex and demanding workflows.  All computational resources of the different types are united into a single hybrid cluster called Arrow. The latter deploys symmetric multiprocessor (SMP) nodes with and without GPUs, a distributed shared memory (NUMA) node, fat (large memory) nodes, and advanced SMP nodes with multiple GPUs. The number of GPUs per node varies between 2 and 8, as do the GPU interface and GPU family: the basic GPU nodes hold two Tesla K20m GPUs (attached through a PCIe interface), while the most advanced ones support eight Ampere A100 GPUs connected via the SXM interface.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Overview of Computational architectures&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SMP&#039;&#039;&#039; servers have several processors (working under a single operating system) which &amp;quot;share everything&amp;quot;: all CPU cores share a common memory block via a shared bus or data path. SMP servers support any combination of memory vs. CPU (up to the limits of the particular computer). SMP servers are commonly used to run sequential or thread-parallel (e.g. OpenMP) jobs, and they may or may not have GPUs.&lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;cluster&#039;&#039;&#039; is a single system comprising a set of servers interconnected with a high-performance network. Specific software coordinates programs on and across these servers in order to perform computationally intensive tasks. The most common cluster type consists of several identical SMP servers connected via a fast interconnect.  Each SMP member of the cluster is called a &#039;&#039;&#039;node&#039;&#039;&#039;. All nodes run independent copies of the same operating system (OS). Some or all of the nodes may incorporate GPUs.&lt;br /&gt;
&lt;br /&gt;
Hybrid clusters combine nodes of different architectures. For instance, the main CUNY-HPCC machine is a hybrid cluster called &#039;&#039;&#039;Arrow&#039;&#039;&#039;.  Sixty-two (62) of its nodes are identical GPU-enabled SMP servers, each with 2 K20m GPUs; 3 are SMP servers with extended memory (fat nodes); one node is a distributed shared memory node (NUMA, see below); and 2 are fat SMP servers specially designed to support 8 NVIDIA GPUs per node, connected via the SXM interface. In addition, HPCC operates the cluster &#039;&#039;&#039;Herbert&#039;&#039;&#039;, dedicated only to education.&lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;distributed shared memory&#039;&#039;&#039; computer is a tightly coupled server in which the memory is physically distributed but logically unified as a single block. The system resembles an SMP, but the possible number of CPU cores and amount of memory are far beyond the limitations of SMP.  Because the memory is distributed, access times across the address space are non-uniform; hence this architecture is called Non-Uniform Memory Access (NUMA).  Similarly to SMP, &#039;&#039;&#039;NUMA&#039;&#039;&#039; systems are typically used for applications such as data mining and decision support systems, in which processing can be parceled out to a number of processors that collectively work on common data. HPCC operates the &#039;&#039;&#039;NUMA&#039;&#039;&#039; node of Arrow, named &#039;&#039;&#039;Appel&#039;&#039;&#039;.  This node does not have GPUs.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039; Infrastructure systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
o	Master Head Node (&#039;&#039;&#039;MHN/Arrow&#039;&#039;&#039;) is a redundant login node from which all jobs on all servers start. This server is not directly accessible from outside the CSI campus. Note that the main server and its login nodes share the same name, Arrow; thus users can access the Arrow login nodes using the name Arrow or MHN.&lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Chizen&#039;&#039;&#039; is a redundant gateway server which provides access to the protected HPCC domain.&lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Cea&#039;&#039;&#039; is a file transfer node allowing transfer of files between users’ computers and the /scratch space or /global/u/&amp;lt;userid&amp;gt;. &#039;&#039;&#039;Cea&#039;&#039;&#039; is accessible directly (not only via &#039;&#039;&#039;Chizen&#039;&#039;&#039;), but allows only a limited set of shell commands.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Table 1&#039;&#039;&#039; below provides a quick summary of the attributes of each of the sub-clusters of the main HPC Center machine, called Arrow.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Master Head Node&lt;br /&gt;
!Sub System&lt;br /&gt;
!Tier&lt;br /&gt;
!Type&lt;br /&gt;
!Type of Jobs&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores&lt;br /&gt;
!GPUs&lt;br /&gt;
!Mem/node&lt;br /&gt;
!Mem/core&lt;br /&gt;
!Chip Type&lt;br /&gt;
!GPU Type and Interface&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;17&amp;quot; |&#039;&#039;&#039;&amp;lt;big&amp;gt;Arrow&amp;lt;/big&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Penzias&lt;br /&gt;
| rowspan=&amp;quot;10&amp;quot; |Advanced&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Hybrid Cluster&lt;br /&gt;
|Sequential &amp;amp; Parallel jobs w/wo GPU&lt;br /&gt;
|66&lt;br /&gt;
|16 &lt;br /&gt;
|2&lt;br /&gt;
|64 GB&lt;br /&gt;
|4 GB&lt;br /&gt;
|SB, EP 2.20 GHz&lt;br /&gt;
|K20m GPU, PCIe v2&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Sequential &amp;amp; Parallel jobs&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |1&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|1500 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|36&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|32 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Appel&lt;br /&gt;
|NUMA&lt;br /&gt;
|Massive Parallel, sequential, OpenMP&lt;br /&gt;
|1&lt;br /&gt;
|384&lt;br /&gt;
| -&lt;br /&gt;
|11 TB&lt;br /&gt;
|28 GB&lt;br /&gt;
|IB, 3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Cryo&lt;br /&gt;
|SMP&lt;br /&gt;
|Sequential and Parallel jobs, with GPU&lt;br /&gt;
|1&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|1500 GB&lt;br /&gt;
|37 GB&lt;br /&gt;
|SL, 2.40 GHz&lt;br /&gt;
|V100 (32GB) GPU, SXM&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Blue Moon&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Hybrid Cluster&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Sequential and Parallel jobs w/wo GPU&lt;br /&gt;
|24&lt;br /&gt;
|32&lt;br /&gt;
| -&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |192 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |6 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SL, 2.10 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|32 &lt;br /&gt;
|2&lt;br /&gt;
|V100(16GB) GPU, PCIe&lt;br /&gt;
|-&lt;br /&gt;
|Karle&lt;br /&gt;
|SMP&lt;br /&gt;
|Visualization, MATLAB/Mathematica&lt;br /&gt;
|1&lt;br /&gt;
|36*&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Chizen&lt;br /&gt;
|Gateway&lt;br /&gt;
|No jobs allowed&lt;br /&gt;
| colspan=&amp;quot;7&amp;quot; | -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CFD&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot; |Parallel, Seq, OpenMP &lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|768 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 4.8 GHz&lt;br /&gt;
|A40, PCIe, v4 &lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |PHYS&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|640 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4 GHz&lt;br /&gt;
|L40, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CHEM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.0 GHz&lt;br /&gt;
|A100/40, SXM&lt;br /&gt;
|-&lt;br /&gt;
|ASRC&lt;br /&gt;
|Condo&lt;br /&gt;
|SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|}&lt;br /&gt;
Note: SB = Intel(R) Sandy Bridge, HL = Intel(R) Haswell, IB = Intel(R) Ivy Bridge, SL = Intel(R) Xeon(R) Gold, ER = AMD(R) EPYC Rome, EM = AMD(R) EPYC Milan, EG = AMD(R) EPYC Genoa&lt;br /&gt;
&lt;br /&gt;
== Recovery of  operational costs ==&lt;br /&gt;
CUNY-HPCC is a not-for-profit core research facility at CUNY. Our mission is to support all types of research that require advanced computational resources. CUNY-HPCC operations are not for profit and are NOT directly or indirectly sponsored by CUNY or the College of Staten Island (CSI). Consequently, CUNY-HPCC applies a cost recovery model recapturing only &#039;&#039;&#039;&amp;lt;u&amp;gt;operational costs, with no profit for HPCC&amp;lt;/u&amp;gt;&#039;&#039;&#039;. The recovered costs are calculated using actual documented operational expenses and are break-even for all CUNY users. The methodology is approved by CUNY-RF and is the same methodology used in other CUNY research facilities. The costs are reviewed, and updated accordingly, twice a year. The cost recovery charging schema is based on the &#039;&#039;&#039;&amp;lt;u&amp;gt;unit-hour&amp;lt;/u&amp;gt;&#039;&#039;&#039;. The unit can be either a CPU unit or a GPU unit. The definitions of these are given in the table below:&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Definitions of unit-hour&lt;br /&gt;
!Type of resource&lt;br /&gt;
!Unit-hour &lt;br /&gt;
!For V100, A30, A40 or L40&lt;br /&gt;
!For A100&lt;br /&gt;
|-&lt;br /&gt;
|CPU unit &lt;br /&gt;
|1 cpu core/hour&lt;br /&gt;
| --&lt;br /&gt;
| --&lt;br /&gt;
|-&lt;br /&gt;
|GPU unit &lt;br /&gt;
|(4 cpu cores + 1 GPU thread )/hour&lt;br /&gt;
|4 cpu cores + 1 GPU&lt;br /&gt;
|4 cpu cores and 1/7 A100 &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== HPCC access plans  ===&lt;br /&gt;
a.     &#039;&#039;&#039;Minimum access (MAP):&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Minimum access is designed to provide wide support for research activities in any college, to promote collaboration between colleges, to help establish a new research project, and/or to be testbed for new studies. MAP accounts operate under strict fair share policy so actual waiting time for a job in a que depends on resources used by that account in previous cycles. In addition all jobs have strict time limitations. Therefore long jobs must use check-points.&lt;br /&gt;
&lt;br /&gt;
The MAP has 3 tiers:&lt;br /&gt;
&lt;br /&gt;
·     A: Basic tier fee is $5000 per year. It is designed to provide support for users from colleges with low level of research activities. The fee covers infrastructure expenses associated with 1-2 users from these colleges. &lt;br /&gt;
&lt;br /&gt;
·     B:  Medium tier fee is $15,000 per year. The fee covers infrastructure expenses of up to 12 users from these colleges. In addition, every account under medium tier gets free 11520 CPU hours and free 1440 GPU hours upon opening. &lt;br /&gt;
&lt;br /&gt;
·      C: Advanced tier is $25,000 per year. The fee covers infrastructure expenses for all users from these colleges. In addition every new account from this tier gets free 11520 CPU hours and free 1440 GPU hours upon opening.  &lt;br /&gt;
&lt;br /&gt;
The MAP users get charged per CPU/GPU hour at low rate of &#039;&#039;&#039;&amp;lt;u&amp;gt;$0.015 per cpu hour and $0.09 per GPU hour&amp;lt;/u&amp;gt;.&#039;&#039;&#039;   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for MAP users&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.015/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.24/hour&lt;br /&gt;
|-&lt;br /&gt;
| 4 cores + 1 GPU &lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.15/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.33/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2  &lt;br /&gt;
|$0.42/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2  &lt;br /&gt;
|$0.66/hour&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$1.32/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
b.     &#039;&#039;&#039;Computing on demand (CODP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Computing on demand plan (CODP) is open for all users from all CUNY colleges that do not participate in MAP plan, but want to use the HPCC resources. CODP accounts operate under strict fair share policy, so actual waiting time for a job in a que depends on resources previously used. In addition, all jobs have time limitations, so long jobs must use check-points. The users in CODP are charged for the time (CPU and GPU) per hour. The current rates are &#039;&#039;&#039;$0.018 per cpu hour and $0.11 per GPU hour.&#039;&#039;&#039;  In difference to MAP, the new CODP accounts does not come with free time. The invoices are generated and send to users (PI only) at the end of each month.  The examples in following table explain the fees structure:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for CODP plan&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.018/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.288/hour&lt;br /&gt;
|-&lt;br /&gt;
|4 cores + 1 GPU&lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.293/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.334/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 1 GPU&lt;br /&gt;
|32&lt;br /&gt;
|1  &lt;br /&gt;
|$0.666/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU &lt;br /&gt;
|32&lt;br /&gt;
|2&lt;br /&gt;
|$0.756/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
c.  &#039;&#039;&#039;Leasing node(s) (LNP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Leasing node plan allows the users to lease the node(s) for the duration of the project. The minimum lease time is 30 days (one month), but leases of any length are possible. Discounts of 10% are given to users whose lease is longer than 90 days. Discounts cannot be combined.  In difference to MAP and CODP the LNP users do not compete for resources and have full access to rented resources 24/7.   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Lease node(s) fees for MAP users&lt;br /&gt;
|Job (MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/30 days&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$172.80&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$264.96&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2&lt;br /&gt;
|$302.40&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$475.20&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$760.0&lt;br /&gt;
|-&lt;br /&gt;
|64 cores + 8 GPU&lt;br /&gt;
|64&lt;br /&gt;
|8&lt;br /&gt;
|$950.40&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for lease a node(s) for &amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; non &amp;lt;/span&amp;gt; -MAP users&lt;br /&gt;
|Job (non-MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/month&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$249.82&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$497.64&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2 &lt;br /&gt;
|$443.23&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$886.64&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40 &lt;br /&gt;
|8&lt;br /&gt;
|$1399.68&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
d.     &#039;&#039;&#039;Condo Ownership (COP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Condo describes a model in which user(s) own a node/server managed by HPCC. Only full-time faculty can own a condo node. Condo nodes are fully integrated into the HPCC infrastructure. The owners pay only HPCC’s infrastructure support operational fee, which includes only a proportional part of the licenses and materials needed for day-to-day operations. The fees are reviewed twice a year and are currently $0.003 per CPU hour and $0.02 per GPU hour. Condo owners can “borrow” (upon agreement), free of charge, any node(s) from the condo stack and can also lease (for a higher fee – see below) their own nodes to non-condo users. The minimum lease time is 30 days. The fees collected from non-condo users offset the payments of the owner.&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Condo owners costs per year&lt;br /&gt;
|Type of condo node&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/year&lt;br /&gt;
|-&lt;br /&gt;
|Large hybrid SXM&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|$4518.92 &lt;br /&gt;
|-&lt;br /&gt;
|Small hybrid&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|$1540.54&lt;br /&gt;
|-&lt;br /&gt;
|Medium compute&lt;br /&gt;
|96&lt;br /&gt;
|0&lt;br /&gt;
|$2464.86&lt;br /&gt;
|-&lt;br /&gt;
|Large compute&lt;br /&gt;
|128&lt;br /&gt;
|0&lt;br /&gt;
|$3286.49&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Condo owners can lease their node(s) to other non-condo users. The rental period is unlimited, with a minimum length of 30 days. The table below shows the payments that non-condo users make to the condo owners. These fees are accumulated in the owner’s account(s) and offset the owner’s fees. A discount of 10% is applied for leases longer than 90 days.&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Type of nodes and lease fees for condo nodes&lt;br /&gt;
!Type of node&lt;br /&gt;
!Renters cost/month&lt;br /&gt;
!Long term (90+ days) rent cost/month&lt;br /&gt;
!CPU/node&lt;br /&gt;
!CPU type&lt;br /&gt;
!GPU/node &lt;br /&gt;
!GPU type&lt;br /&gt;
!GPU interface&lt;br /&gt;
|-&lt;br /&gt;
|Large Hybrid&lt;br /&gt;
|$602.52&lt;br /&gt;
|$564.86&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.2 GHz&lt;br /&gt;
|8&lt;br /&gt;
|A100/80&lt;br /&gt;
|SXM&lt;br /&gt;
|-&lt;br /&gt;
|Small Hybrid&lt;br /&gt;
|$205.41&lt;br /&gt;
|$192.57&lt;br /&gt;
|48&lt;br /&gt;
|EPYC, 2.8 GHz&lt;br /&gt;
|2&lt;br /&gt;
|A40, A30, L40&lt;br /&gt;
|PCIe v4&lt;br /&gt;
|-&lt;br /&gt;
|Medium Non GPU&lt;br /&gt;
|$328.65&lt;br /&gt;
|$308.11&lt;br /&gt;
|96&lt;br /&gt;
|EPYC, 4.11GHz&lt;br /&gt;
|0&lt;br /&gt;
|None&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|Large Non GPU&lt;br /&gt;
|$438.20&lt;br /&gt;
|$410.81&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.0 GHz&lt;br /&gt;
|0&lt;br /&gt;
|None &lt;br /&gt;
|NA&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Free time ===&lt;br /&gt;
Any new project &#039;&#039;&#039;from CUNY colleges/centers that participate in MAP-B or MAP-C&#039;&#039;&#039; is entitled to &#039;&#039;&#039;11520 free CPU hours and 1440 free GPU hours.&#039;&#039;&#039; Users under &#039;&#039;&#039;MAP-A&#039;&#039;&#039; are not entitled to free time. The free compute hours are intended to help establish a project and are therefore shared among all members of the project: they can be used either by the PI or by any number of the project&#039;s members. It is important to note that &amp;lt;u&amp;gt;free time is per project, not per user account, so any project can have free time only once. Collaborators external to CUNY are not normally eligible for free time.&amp;lt;/u&amp;gt; Additional hours beyond the free time are charged at MAP plan rates. &#039;&#039;&#039;&amp;lt;u&amp;gt;Please contact the CUNY-HPCC director for further details.&amp;lt;/u&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Support for research grants ==&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;u&amp;gt;All proposals dated Jan 1st, 2026 (&amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; 01/01/26 &amp;lt;/span&amp;gt;) or later&amp;lt;/u&amp;gt;&#039;&#039;&#039; that require computational resources &#039;&#039;&#039;&amp;lt;u&amp;gt;must include a budget for cost recovery fees at CUNY-HPCC.&amp;lt;/u&amp;gt;&#039;&#039;&#039; For a project, the PI can choose to:&lt;br /&gt;
&lt;br /&gt;
* lease node(s). This is a useful option for well-defined projects and for those with a large computational component requiring 100% availability of the computational resource.&lt;br /&gt;
* use &amp;quot;on-demand&amp;quot; resources. This is a flexible option, good for experimental projects or for exploring new areas of study. The downside is that resources are shared among all users under the fair share policy, so immediate access to a resource cannot be guaranteed.&lt;br /&gt;
* participate in the CONDO tier. This is the most beneficial option in terms of availability of resources and level of support. It best fits the focused research of a group or groups (e.g. materials science).&lt;br /&gt;
&lt;br /&gt;
In all cases the PI can use the appropriate rates listed above to establish a correct budget for the proposal.  The PI should &#039;&#039;&#039;&amp;lt;u&amp;gt;contact the Director of CUNY-HPCC, Dr. Alexander Tzanov&amp;lt;/u&amp;gt;&#039;&#039;&#039; (alexander.tzanov@csi.cuny.edu) to discuss the project&#039;s computational requirements, including optimal and most economical computational workflows, suitable hardware, shared or owned resources, CUNY-HPCC support options, and any other matter concerning a correct and optimal computational budget for the proposal.&lt;br /&gt;
&lt;br /&gt;
== Partitions and jobs ==&lt;br /&gt;
The only way to submit job(s) to HPCC servers is through the SLURM batch system. Any job, regardless of its type (interactive, batch, serial, parallel, etc.), must be submitted via SLURM, which allocates the requested resources on the proper server and starts the job(s) according to a predefined, strict fair share policy. Computational resources (cpu-cores, memory, GPU) are organized in &#039;&#039;&#039;partitions&#039;&#039;&#039;. Users are granted permission to use one or another partition together with a corresponding QOS key. The table below shows the partitions and their limitations (in progress); a minimal batch script sketch is given after the partition list below.&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Partition&lt;br /&gt;
!Max cores/job&lt;br /&gt;
!Max jobs/user&lt;br /&gt;
!Total cores/group&lt;br /&gt;
!Time limits&lt;br /&gt;
!Tier&lt;br /&gt;
!&lt;br /&gt;
!GPU types&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|partnsf&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|K20m, V100/16, A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partchem&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A100/80, A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partcfd&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partsym&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partasrc&lt;br /&gt;
|48&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabD&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|V100/16,A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabN&lt;br /&gt;
|384&lt;br /&gt;
|50&lt;br /&gt;
|384&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|None&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partphys&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|L40&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;partnsf&#039;&#039;&#039; is the main partition, with assigned resources across all sub-servers. Users may submit sequential, thread-parallel or distributed-parallel jobs, with or without GPU.&lt;br /&gt;
* &#039;&#039;&#039;partchem&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partphys&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partsym&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partasrc&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partmatlabD&#039;&#039;&#039; allows running MATLAB&#039;s Distributed Parallel Server across the main cluster.&lt;br /&gt;
* &#039;&#039;&#039;partmatlabN&#039;&#039;&#039; provides access to the large MATLAB node with 384 cores and 11 TB of shared memory. It is useful for running parallel MATLAB jobs with the Parallel Computing Toolbox.&lt;br /&gt;
* &#039;&#039;&#039;partdev&#039;&#039;&#039; is dedicated to development. All HPCC users have access to this partition, which is assigned the resources of one computational node with 16 cores, 64 GB of memory and 2 GPU (K20m). This partition has a time limit of 4 hours.&lt;br /&gt;
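&lt;br /&gt;
As an illustration only, the sketch below shows the general shape of a SLURM batch script for the partnsf partition. The job name, QOS key and program are placeholders (use the QOS key granted to your account), and the brief SLURM manual distributed with new accounts remains the authoritative reference for site-specific options.&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --job-name=my_job          # placeholder job name&lt;br /&gt;
 #SBATCH --partition=partnsf        # one of the partitions from the table above&lt;br /&gt;
 #SBATCH --qos=my_qos_key           # hypothetical QOS key; use the one assigned to your account&lt;br /&gt;
 #SBATCH --ntasks=16                # number of CPU cores, within the partition limit&lt;br /&gt;
 #SBATCH --gres=gpu:1               # request one GPU; omit this line for CPU-only jobs&lt;br /&gt;
 #SBATCH --time=24:00:00            # wall time, within the partition time limit&lt;br /&gt;
 &lt;br /&gt;
 srun ./my_program                  # placeholder executable&lt;br /&gt;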
&lt;br /&gt;
== Hours of Operation ==&lt;br /&gt;
In order to maximize the use of resources, HPCC applies a “rolling” maintenance scheme across all systems. When downtime is needed, HPCC will notify all users a week or more in advance (unless an emergency situation occurs). The fourth Tuesday morning of each month, from 8:00 AM to 12:00 PM, is normally reserved (but not always used) for scheduled maintenance. Please plan accordingly. Unplanned maintenance to remedy system-related problems may be scheduled as needed outside of the above-mentioned window. Reasonable attempts will be made to inform users running on the affected systems when such needs arise. Note that users are strongly encouraged to use checkpoints in their jobs.&lt;br /&gt;
&lt;br /&gt;
== User Support ==&lt;br /&gt;
Users are strongly encouraged to read this Wiki carefully before submitting ticket(s) for help. In particular, the sections on compiling and running parallel programs, and the section on the SLURM batch queueing system will give you the essential knowledge needed to use the CUNY HPCC systems.  We have strived to maintain the most uniform user applications environment possible across the Center&#039;s systems to ease the transfer of applications&lt;br /&gt;
and run scripts among them.  &lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY&lt;br /&gt;
community in parallel programming techniques, HPC computing architecture, and the essentials of using our&lt;br /&gt;
systems. Please follow our mailings on the subject and feel free to inquire about such courses. We regularly schedule training visits and classes at the various CUNY campuses. Please let us know if such a training visit is of interest. In the past, topics have included an overview of parallel programming, GPU programming and architecture, using the evolutionary biology software at the HPC Center, the SLURM queueing system at the CUNY HPC Center, mixed GPU-MPI and OpenMP programming, etc. Staff have also presented guest lectures in formal classes throughout the CUNY campuses. &lt;br /&gt;
&lt;br /&gt;
If you have problems accessing your account and cannot login to the ticketing service, please send an email to:&lt;br /&gt;
&lt;br /&gt;
  [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu] &lt;br /&gt;
&lt;br /&gt;
== Warnings and modes of operation ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. hpchelp@csi.cuny.edu is for questions and account-related communication &#039;&#039;&#039;only&#039;&#039;&#039; and does not accept tickets unless the ticketing system is not operational. For tickets please use the ticketing system mentioned above. This ensures that the staff member with the most appropriate skill set and job-related responsibility will respond to your questions. During the business week you should expect a response within 48 hours, quite often even the same day. During the weekend you may not get any response. &lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;&#039;E-mails to hpchelp@csi.cuny.edu must have a valid CUNY e-mail as the reply address.&#039;&#039;&#039; Messages originating from public mailers (Gmail, Hotmail, etc.) are filtered out.&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;&#039;Do not send questions to individual CUNY HPC Center staff members directly.&#039;&#039;&#039;  These will be returned to the sender with a polite request to submit a ticket or email the Helpline.  This applies to replies to initial questions as well.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff members are focused on providing high quality support to its user community, but compared&lt;br /&gt;
to other HPC Centers of similar size &#039;&#039;&#039;our staff is extremely lean&#039;&#039;&#039;.  Please make full use of the tools that we have provided (especially&lt;br /&gt;
the Wiki), and feel free to offer suggestions for improved service.  We hope and expect your experience in using&lt;br /&gt;
our systems will be predictably good and productive.&lt;br /&gt;
&lt;br /&gt;
== User Manual ==&lt;br /&gt;
&lt;br /&gt;
The old version of the user manual provides PBS rather than SLURM batch scripts as examples. CUNY-HPCC currently uses the SLURM scheduler, so users must use only the updated brief SLURM manual distributed with new accounts, or ask CUNY-HPCC for a copy of it.&lt;/div&gt;</summary>
		<author><name>Alex</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Administrative_Information&amp;diff=958</id>
		<title>Administrative Information</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Administrative_Information&amp;diff=958"/>
		<updated>2026-03-05T19:25:23Z</updated>

		<summary type="html">&lt;p&gt;Alex: /* Accounts overview */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
==How to get an account==&lt;br /&gt;
&lt;br /&gt;
=== Definitions and procedures ===&lt;br /&gt;
CUNY-HPCC operates on a cost recovery scheme, which requires all accounts to be associated with research project(s) or to be class accounts. Research accounts are sponsored by a Principal Investigator (PI). A &#039;&#039;&#039;Principal Investigator (PI) at CUNY is defined as the lead researcher responsible for the design, execution, and management of a research project, ensuring compliance with regulations and overseeing the project&#039;s financial aspects. The PI is a &amp;lt;u&amp;gt;faculty member or a qualified researcher&amp;lt;/u&amp;gt; who has the authority to apply for research funding and manage the project.&#039;&#039;&#039; The procedure to open an account is as follows: &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 1.&#039;&#039;&#039; Creation of the sponsor account (PI account) - &#039;&#039;&#039;form A or B below.&#039;&#039;&#039; At this step the PI must create an account for themselves and provide information about the project title, funding and duration. Requesting resources is not mandatory. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 2.&#039;&#039;&#039; Upon creating the account the PI will get a unique code, which has to be shared with the members of the group (students and postdocs) who require an account on HPCC. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 3.&#039;&#039;&#039; Members of the research group (lab) and academic collaborators can apply for an account at CUNY-HPCC by using form C, D, E or F. It is mandatory to use the code mentioned in Step 2 (from the CUNY PI) in these forms. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 4.&#039;&#039;&#039; The PI should assign students to their project. &lt;br /&gt;
&lt;br /&gt;
===Accounts overview===&lt;br /&gt;
All users of HPCC resources must register with HPCC for one of the account types described in the table below. A user account is issued to an &#039;&#039;&#039;individual user&#039;&#039;&#039;. Accounts are &#039;&#039;&#039;not to be shared&#039;&#039;&#039;. HPCC &#039;&#039;&#039;&amp;lt;u&amp;gt;will communicate only via CUNY e-mail with users from groups A to E.&amp;lt;/u&amp;gt;&#039;&#039;&#039; HPCC will communicate with users holding account types &#039;&#039;&#039;F&#039;&#039;&#039; and &#039;&#039;&#039;G&#039;&#039;&#039; via the user&#039;s verified work account, CC&#039;ed to the CUNY collaborator (for F only). In addition, if resources are available and at the discretion of the CUNY-HPCC director, researchers external to CUNY can obtain an external research account (type G) at CUNY-HPCC by renting HPC resources and paying the full cost recovery fee in advance. Please contact the HPCC director for details. &lt;br /&gt;
{| class=&amp;quot;wikitable sortable mw-collapsible&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!User accounts for:&lt;br /&gt;
!Type&lt;br /&gt;
!Renewal schedule&lt;br /&gt;
!Renewal cycles&lt;br /&gt;
!Expiration conditions&lt;br /&gt;
!Mandatory conditions&lt;br /&gt;
|-&lt;br /&gt;
|Faculty, Research Staff&lt;br /&gt;
|A&lt;br /&gt;
|Renews every year at the beginning of Fall semester&lt;br /&gt;
|Unlimited&lt;br /&gt;
|Non renewed accounts are disabled in 15 days from renewal date, data in home directory is removed after 90 days. Backup data  have rollover time of 30 days. &lt;br /&gt;
|CUNY EID, Valid CUNY e-mail&lt;br /&gt;
|-&lt;br /&gt;
|Adjunct Faculty &lt;br /&gt;
|B&lt;br /&gt;
|Renews every semester (Fall/Spring) &lt;br /&gt;
|Unlimited&lt;br /&gt;
|Non renewed accounts are disabled in 15 days from renewal date, data in home directory is removed after 90 days. Backup data  have rollover time of 30 days.  &lt;br /&gt;
|CUNY EID, Valid CUNY e-mail.  &lt;br /&gt;
|-&lt;br /&gt;
|Doctoral Graduate Students&lt;br /&gt;
|C&lt;br /&gt;
|Renews every year at the beginning of the Fall semester&lt;br /&gt;
|14&lt;br /&gt;
|Non renewed accounts are disabled in 15 days from renewal date, data in home directory is removed after 90 days. Backup data  have rollover time of 30 days.  &lt;br /&gt;
|CUNY EID, valid CUNY E-mail. For &#039;&#039;&#039;PhD students the first E-mail is their GC E-mail address; the second is their college E-mail.&#039;&#039;&#039; &lt;br /&gt;
|-&lt;br /&gt;
|Master Students&lt;br /&gt;
|D&lt;br /&gt;
|Renews every semester (Fall/Spring)&lt;br /&gt;
|8&lt;br /&gt;
|Non renewed accounts are disabled in 15 days from renewal date, data in home directory is removed after 90 days. Backup data  have rollover time of 30 days.&lt;br /&gt;
|CUNY EID, Valid CUNY e-mail. First E-mail is the college E-mail address. &lt;br /&gt;
|-&lt;br /&gt;
|Undergraduate Students&lt;br /&gt;
|E&lt;br /&gt;
|Renews every semester (Fall/Spring)&lt;br /&gt;
|8&lt;br /&gt;
|Non renewed accounts are disabled in 7 days from renewal date, data and home directory are removed after 15 days. No backup of any data. &lt;br /&gt;
|CUNY EID, Valid CUNY e-mail. First E-mail is the college E-mail address. &lt;br /&gt;
|-&lt;br /&gt;
|Academic Collaborators&lt;br /&gt;
|F&lt;br /&gt;
|Renews once a year (Fall) for the duration of a project&lt;br /&gt;
|Unlimited&lt;br /&gt;
|Non renewed accounts are disabled in 15 days from renewal date, data in home directory is removed after 90 days. Backup data  have rollover time of 30 days.  &lt;br /&gt;
|other institution EID, work e-mail and valid CUNY collaborator E-mail&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Public and Private Sector Partners&lt;br /&gt;
|G&lt;br /&gt;
|No Renewal. Good only for the duration of contract. &lt;br /&gt;
|NA&lt;br /&gt;
|Account expires at the date of expiring of the contract. &lt;br /&gt;
|State/federal ID, verified work e-mail. &#039;&#039;&#039;Advance payment of the full cost for the rented resource.&#039;&#039;&#039;&lt;br /&gt;
|}  &lt;br /&gt;
&lt;br /&gt;
Users who missed renewal by less than 90 days should contact HPCC via e-mail to &#039;&#039;&#039;hpchelp@csi.cuny.edu&#039;&#039;&#039; for account recovery. All users must inform HPCC of changes in their academic status. It is mandatory to provide information (or NA) on all points from the list below. Please do not forget to provide information about past and pending &#039;&#039;&#039;&amp;lt;u&amp;gt;publications&amp;lt;/u&amp;gt;&#039;&#039;&#039; and funded projects, and &amp;lt;u&amp;gt;information about your locally available resources (local servers and workstations/desktops only).&amp;lt;/u&amp;gt; Think carefully about the resources needed and try to estimate them as accurately as possible. Note that &#039;&#039;&#039;by applying for and obtaining an account, the user agrees to the HPCC End User Policy (EUP) and Mandatory Security Requirements for Access (MSRA).&#039;&#039;&#039;&lt;br /&gt;
{| class=&amp;quot;wikitable sortable mw-collapsible&amp;quot;&lt;br /&gt;
|+Required Information for opening of  HPCC account. Please provide information in all fields and/or mark NA when needed. &lt;br /&gt;
!&lt;br /&gt;
! rowspan=&amp;quot;26&amp;quot; |&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;&amp;lt;big&amp;gt;For All CUNY Faculty, Staff And Graduate Students&amp;lt;/big&amp;gt;&#039;&#039;&#039;  &#039;&#039;&#039;&amp;lt;big&amp;gt;(A ,B,C,D)&amp;lt;/big&amp;gt;&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Full name as stated at the CUNY ID card (e.g. John Samuel Doe):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;CUNY EID and valid CUNY e-mail (e.g. John A. Smith  22341356 jsmith@csi.cuny.edu):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|CUNY &#039;&#039;Academic status ( faculty, adjunct faculty, PhD student, MS student, research staff):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;Primary&#039;&#039;&#039; Affiliation within CUNY - campus name and Department ( e.g Hunter College, Biology):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;Secondary&#039;&#039;&#039; CUNY affiliation if any. Provide campus name and Department (e.g. Graduate Center, Biology):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Name, department and college affiliation of PI/Advisor (e.g. John Smith, Biology, Hunter College):&lt;br /&gt;
|-&lt;br /&gt;
|If outside the College of Staten Island, provide a description of the local resources available. &lt;br /&gt;
:Please state NONE if you do not have access to local computational resources. Otherwise provide:&lt;br /&gt;
::College (e.g. Hunter)&lt;br /&gt;
::Type of resource (e.g. Department cluster):&lt;br /&gt;
:::&#039;&#039;- number of nodes:&#039;&#039;&lt;br /&gt;
:::&#039;&#039;- number of cores per node:&#039;&#039;&lt;br /&gt;
:::&#039;&#039;- memory per node:&#039;&#039;&lt;br /&gt;
:::&#039;&#039;- total number of GPU for server:&#039;&#039;&lt;br /&gt;
:::&#039;&#039;- number of GPU per node:&#039;&#039;&lt;br /&gt;
:::&#039;&#039;- type of GPU (list of all types, e.g. 2 x K20m, 4 x K80):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Resources needed for the project:&lt;br /&gt;
::- CPU Cores (e.g 1000):&lt;br /&gt;
::- GPU options (e.g 2 x V100/16 GB):&lt;br /&gt;
::- V100/16 GB -&lt;br /&gt;
::- V100/32 GB -  &lt;br /&gt;
::- L40/48 GB - &lt;br /&gt;
::- A30/24 GB -&lt;br /&gt;
::- A40/24 GB - &lt;br /&gt;
::- A100/40 GB - &lt;br /&gt;
::- A100/80 GB - &lt;br /&gt;
::- Storage Space (above 50 GB)&lt;br /&gt;
::- Backup of data (Y/N):&lt;br /&gt;
::- Archive of data (Y/N):&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;Title of the project:&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Short Description of the  project (up to 100 words): &lt;br /&gt;
|-&lt;br /&gt;
|Funding sources of the project (e.g. NSF grant #, CUNY):&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Consent that HPCC will be cited properly (see our wiki for details) in all your published work &amp;lt;u&amp;gt;including conference presentations, posters and talks.&amp;lt;/u&amp;gt;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Number of refereed publications relevant to the project:&lt;br /&gt;
|-&lt;br /&gt;
|Pending publication relevant to the project: &lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;big&amp;gt;&#039;&#039;&#039;&#039;&#039;For All External (not CUNY) Project Collaborators and Researchers (F,G)&#039;&#039;&#039;&#039;&#039;&amp;lt;/big&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Full name as stated at the state/federal ID or EID from other  Academic Institution (e.g. John Samuel Doe):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Affiliation outside CUNY, if any (e.g. Rutgers University), and valid professional e-mail (e.g. John Doe, Rutgers University, jd@rutgers.edu):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Department at NON CUNY  Academic Institution (e.g. MIS Rutgers):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Non CUNY-email (collaborator/external contact):&lt;br /&gt;
|-&lt;br /&gt;
|Status of the collaborator (Academic: e.g. Professor; Partner: e.g. NVIDIA lab):&lt;br /&gt;
|-&lt;br /&gt;
|Status of the external researcher(s) (e.g. principal architect NVIDIA):&lt;br /&gt;
|-&lt;br /&gt;
|Resources needed (example: Cores 100; Time 10 000 hours Memory per core 8 GB, GPU cores  2 GPU hours 100 Storage 100GB):&lt;br /&gt;
|-&lt;br /&gt;
|Description of available local resources. Please state NONE if you do not have access to local computational resources. Otherwise provide:&lt;br /&gt;
&#039;&#039;type of computational resource (cluster, advanced workstation):&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;- number of nodes:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;- number of cores per node:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;- memory per node:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;- total number of GPU for server:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;- number of GPU per node:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;- type of GPU (list of all types, e.g. 2 x K20m, 4 x K80):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Consent that HPCC will be cited properly (see our wiki for details) in all your published work &amp;lt;u&amp;gt;including conferences and talks.&amp;lt;/u&amp;gt;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&amp;lt;big&amp;gt;&#039;&#039;&#039;For All CUNY Graduate and Undergraduate Classes (E)&#039;&#039;&#039;&amp;lt;/big&amp;gt;&#039;&#039;&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Full name as stated at the CUNY ID card (e.g. John Samuel Doe):&#039;&#039;&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;CUNY EID (e.g. 22341356):&#039;&#039;&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;Valid CUNY e-mail.&#039;&#039;&#039; Public emails are not accepted (e.g. azho@cix.csi.cuny.edu):  &lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|Class ID (e.g. CS 220):&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|Class Section (e.g. 02):&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|College (e.g. Baruch College):&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|Name of the Professor:&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|Term (e.g. Fall 2025): &lt;br /&gt;
!&lt;br /&gt;
|}&lt;br /&gt;
Upon creation, every research user account is provided with a 50 GB home directory (with a maximum of 10000 files on /global/u) mounted as &amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;. If required, a user may request an increase in the size of their home directory, and the HPC Center will endeavor to satisfy reasonable requests. If you expect to have more than 10000 files, please combine small files into a single larger zip archive. Please keep only wrangled information in your space in order to optimize use of the existing storage. &lt;br /&gt;
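&lt;br /&gt;
For example (directory and archive names are placeholders), a directory containing many small files can be combined into a single archive before it is kept in the home directory:&lt;br /&gt;
&lt;br /&gt;
 zip -r results_archive.zip results_dir/    # bundle many small files into one zip file to stay under the file-count limit&lt;br /&gt;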
&lt;br /&gt;
Student class accounts (type E) are provided with a 10 GB home directory. Please note that class accounts and data will be deleted 30 days after the semester ends (unless otherwise agreed upon). Students are responsible for backing up their own data prior to the end of the semester.&lt;br /&gt;
 &lt;br /&gt;
When a user account is established, only the user has read/write access to their files. The user can change their UNIX permissions to allow others in their group to read/write their files.&lt;br /&gt;
&lt;br /&gt;
Please be sure to notify the HPC Center if user accounts need to be removed from or added to a specific research group. Please read the account policies below. Note that accounts are not perpetual: accounts that are not accessed and not active are removed (see below).&lt;br /&gt;
&lt;br /&gt;
=== User accounts policies ===&lt;br /&gt;
CUNY HPCC applies strict security standards in user account management. HPCC uses “account periods”. The account period is &#039;&#039;&#039;one year&#039;&#039;&#039; for accounts of types &#039;&#039;&#039;A, C, E and F&#039;&#039;&#039; and &#039;&#039;&#039;one semester&#039;&#039;&#039; for accounts of types &#039;&#039;&#039;B&#039;&#039;&#039; and &#039;&#039;&#039;D&#039;&#039;&#039;. All accounts are periodically reviewed and inactive accounts are removed. All student accounts expire automatically and are removed after each semester unless the student&#039;s advisor requests an extension of the student&#039;s account. All user accounts in groups &#039;&#039;&#039;A, C, E and F&#039;&#039;&#039; must be renewed once a year by Sept 30th. All user accounts in groups &#039;&#039;&#039;B&#039;&#039;&#039; and &#039;&#039;&#039;D&#039;&#039;&#039; must be renewed within 2 weeks after each semester. Accounts not accessed for one account period and/or not renewed are automatically disabled/locked and will be deleted 60 days after locking. Deletion of an account means unrecoverable removal of all data associated with that account. &lt;br /&gt;
&lt;br /&gt;
===Reset Password ===&lt;br /&gt;
&lt;br /&gt;
Users must use the automatic password reset system. Click on [https://hpcauth1.csi.cuny.edu/reset/ Reset Password]. Upon resetting, users will receive their individual security token at the e-mail address registered with HPCC.&lt;br /&gt;
&lt;br /&gt;
===Close of account===&lt;br /&gt;
If a user would like to close their account, please contact the CUNY HPC Center at HPCHelp@csi.cuny.edu. &lt;br /&gt;
Supervisors who would like to modify the access of researchers and/or students working for them should contact the HPC Center to remove, add or modify access.&lt;br /&gt;
User accounts that are not accessed or renewed for more than a year and one day will be purged along with any data associated with the account. User accounts that are not renewed on time will be locked, and users must contact HPCC to get access restored. &lt;br /&gt;
&lt;br /&gt;
=== Message of the day (MOTD) ===&lt;br /&gt;
Users are encouraged to read the &amp;quot;Message of the day&amp;quot; (MOTD), which is displayed upon logging onto a system. The MOTD provides information on scheduled maintenance windows when systems will be unavailable and/or on important changes in the environment that are of importance to the user community. The MOTD is the HPC Center&#039;s only efficient mechanism for communicating with the broader user community, as bulk e-mail messages are often blocked by CUNY SPAM filters.&lt;br /&gt;
&lt;br /&gt;
===   Required citations ===&lt;br /&gt;
The CUNY HPC Center appreciates the support it has received from the National Science Foundation (NSF).  It is the policy of NSF that researchers who are funded by NSF or who make use of facilities funded by NSF acknowledge the contribution of NSF by including the following citation in their papers and presentations:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;This research was supported, in part, under National Science Foundation Grants: CNS-0958379, CNS-0855217, ACI-1126113 and OAC-2215760 (2022) and the City University of New York High Performance Computing Center at the College of Staten Island.&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The HPC Center, therefore, requests its users to follow this procedure as it helps the Center to demonstrate that NSF’s investments aided the research and educational missions of the University.&lt;br /&gt;
&lt;br /&gt;
== Reporting requirements ==&lt;br /&gt;
The Center reports on its support of the research and educational community to both funding agencies and CUNY on an annual basis. Citations are an important factor included in these reports. Therefore, it is mandatory for users to send copies of research papers developed, in part, using HPC Center resources to [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu]. Accounts of users who violate this requirement may not be renewed. Reporting results obtained with HPC resources also helps the Center to keep abreast of user research directions and needs. &lt;br /&gt;
&lt;br /&gt;
== Funding of computational resources and storage ==&lt;br /&gt;
Systems at the HPC Center are purchased with grants from the National Science Foundation (NSF), grants from NYC, a grant from DASNY and a grant from the CUNY Office of the CIO. In addition, all systems in the condo tier are purchased with direct funds from research groups. The largest financial support comes from &#039;&#039;&#039;NSF MRI grants (more than 80% of all funding).&#039;&#039;&#039; CUNY&#039;s own investment constitutes &#039;&#039;&#039;8.6%&#039;&#039;&#039; of all funds. Here is the list of all grants for CUNY-HPCC. &lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;PFSS and GPU Nodes:&#039;&#039;&#039; NSF Grant OAC-2215760 (operational) &lt;br /&gt;
:&#039;&#039;&#039;DSMS&#039;&#039;&#039;, NSF Grant ACI-1126113 (server is partially retired) &lt;br /&gt;
:&#039;&#039;&#039;BLUE MOON&#039;&#039;&#039;, Grant NYC 042-ST030-015 (operational)&lt;br /&gt;
:&#039;&#039;&#039;CRYO,&#039;&#039;&#039; Grant DASNY 208684-000 OP (operational) &lt;br /&gt;
:&#039;&#039;&#039;ANDY&#039;&#039;&#039;, NSF Grant CNS-0855217 and the New York City Council through the efforts of Borough President James Oddo ( server is fully retired)&lt;br /&gt;
:&#039;&#039;&#039;APPEL&#039;&#039;&#039;, New York State Regional Economic Development Grant through the efforts of State Senator Diane Savino (operational)&lt;br /&gt;
:&#039;&#039;&#039;PENZIAS&#039;&#039;&#039;, The Office of the CUNY Chief Information Officer ( Server is partially retired)&lt;br /&gt;
:&#039;&#039;&#039;SALK&#039;&#039;&#039;, NSF Grant CNS-0958379 and a New York State Regional Economic Development Grant (Server is fully retired)&lt;/div&gt;</summary>
		<author><name>Alex</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=957</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=957"/>
		<updated>2026-03-05T19:19:56Z</updated>

		<summary type="html">&lt;p&gt;Alex: /* Free time */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
[[Image:hpcc-panorama3.png]]&lt;br /&gt;
&lt;br /&gt;
The City University of New York (CUNY) High Performance Computing Center (HPCC) is located on the&lt;br /&gt;
campus of the College of Staten Island, 2800 Victory Boulevard, Staten Island, New York 10314.  HPCC&lt;br /&gt;
goals are to: &lt;br /&gt;
&lt;br /&gt;
:*Support the scientific computing needs of CUNY faculty, their collaborators at other universities, and their public and private sector partners, and CUNY students and research staff.&lt;br /&gt;
:*Create opportunities for the CUNY research community to develop new partnerships with the government and private sectors; and&lt;br /&gt;
:*Leverage the HPC Center capabilities to acquire additional research resources for its faculty and graduate students in existing and major new programs.&lt;br /&gt;
&lt;br /&gt;
==Organization of systems and data storage (architecture)==&lt;br /&gt;
&lt;br /&gt;
All user data and project data are kept on the Parallel File System Storage (PFSS), which is mounted only on the login node(s) of all servers. It holds both user directories and a specific partition called &#039;&#039;&#039;/scratch (see below).&#039;&#039;&#039; The main features of the &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition are&#039;&#039;&#039;: 1.&#039;&#039;&#039; It is mounted on all computational nodes and on all login nodes; &#039;&#039;&#039;2.&#039;&#039;&#039; It is fast; &#039;&#039;&#039;3.&#039;&#039;&#039; The &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition is temporary space; it is &#039;&#039;&#039;not a home directory&#039;&#039;&#039; for accounts, nor can it be used for long-term data preservation. Users must use the &amp;quot;staging&amp;quot; procedure described below to ensure preservation of their data, codes and parameter files. The figure below is a schematic of the environment. &lt;br /&gt;
&lt;br /&gt;
Upon  registering with HPCC every user will get 2 directories:&lt;br /&gt;
&lt;br /&gt;
:• &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – this is temporary workspace on the HPC systems. Currently /scratch resides on the same file system as /global/u.&lt;br /&gt;
:• &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – space for the “home directory”, i.e., storage space on the DSMS for programs, scripts, and data&lt;br /&gt;
&lt;br /&gt;
:• In some instances a user will also have use of disk space on the DSMS in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;projectid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; (iRODS). &lt;br /&gt;
[[File:HPCC_structure.png|center|frameless|900x900px]]&lt;br /&gt;
The &#039;&#039;&#039;/global/u/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory has a quota (see below for details), while &#039;&#039;&#039;/scratch/&amp;lt;userid&amp;gt;&#039;&#039;&#039; does not. However, the &#039;&#039;&#039;/scratch&#039;&#039;&#039; space is cleaned up following the rules described below, and there are no guarantees of any kind that files in &#039;&#039;&#039;/scratch&#039;&#039;&#039; will be preserved during hardware crashes or cleanup. Access to all HPCC resources is provided by a bastion host called &#039;&#039;&#039;&#039;&#039;chizen&#039;&#039;&#039;&#039;&#039;. The Data Transfer Node called &#039;&#039;&#039;Cea&#039;&#039;&#039; allows file transfer from/to remote sites directly to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; or to/from &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039;. &lt;br /&gt;
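&lt;br /&gt;
As a rough sketch of the staging idea only (the detailed procedure is described below; the file and directory names are placeholders and $USER is assumed to match the HPCC userid), a batch job can copy its inputs from the home directory to /scratch, run there, and copy the results back for preservation:&lt;br /&gt;
&lt;br /&gt;
 # stage input data from the home directory to the fast /scratch space&lt;br /&gt;
 WORKDIR=/scratch/$USER/$SLURM_JOB_ID&lt;br /&gt;
 mkdir -p $WORKDIR&lt;br /&gt;
 cp /global/u/$USER/myproject/input.dat $WORKDIR/&lt;br /&gt;
 cd $WORKDIR&lt;br /&gt;
 srun ./my_program input.dat&lt;br /&gt;
 # copy the results back to the home directory, since /scratch is temporary&lt;br /&gt;
 cp -r $WORKDIR /global/u/$USER/myproject/results_$SLURM_JOB_ID&lt;br /&gt;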
&lt;br /&gt;
==HPC systems==&lt;br /&gt;
&lt;br /&gt;
The HPC Center operates a variety of architectures in order to support complex and demanding workflows. All computational resources of different types are united into a single hybrid cluster called Arrow. The latter deploys symmetric multiprocessor (SMP) nodes with and without GPU, a distributed shared memory (NUMA) node, fat (large memory) nodes and advanced SMP nodes with multiple GPU. The number of GPU per node varies between 2 and 8, as do the GPU interface and GPU family. Thus the basic GPU nodes hold two Tesla K20m GPU (attached through the PCIe interface), while the most advanced ones support eight Ampere A100 GPU connected via the SXM interface. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Overview of Computational architectures&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SMP&#039;&#039;&#039; servers have several processors (working under a single operating system) which &amp;quot;share everything&amp;quot;: all cpu-cores access a common memory block via a shared bus or data path. SMP servers support all combinations of memory vs. cpu (up to the limits of the particular computer). SMP servers are commonly used to run sequential or thread-parallel (e.g. OpenMP) jobs, and they may or may not have GPU. &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;cluster&#039;&#039;&#039; is defined as a single system comprising a set of servers interconnected with a high performance network. Specific software coordinates programs on and/or across those servers in order to perform computationally intensive tasks. The most common cluster type consists of several identical SMP servers connected via a fast interconnect. Each SMP member of the cluster is called a &#039;&#039;&#039;node&#039;&#039;&#039;. All nodes run independent copies of the same operating system (OS). Some or all of the nodes may incorporate GPU. &lt;br /&gt;
&lt;br /&gt;
Hybrid clusters combine nodes of different architectures. For instance, the main CUNY-HPCC machine is a hybrid cluster called &#039;&#039;&#039;Arrow&#039;&#039;&#039;. Sixty-two (62) of its nodes are identical GPU-enabled SMP servers, each with 2 x K20m GPU; 3 are SMP nodes with extended memory (fat nodes); one node is a distributed shared memory node (NUMA, see below); and 2 are fat SMP servers especially designed to support 8 NVIDIA GPU per node, connected via the SXM interface. In addition, HPCC operates the cluster &#039;&#039;&#039;Herbert&#039;&#039;&#039;, dedicated only to education. &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;distributed shared memory&#039;&#039;&#039; computer is a tightly coupled server in which the memory is physically distributed but logically unified as a single block. The system resembles an SMP, but the possible number of cpu cores and amount of memory are far beyond the limitations of SMP. Because the memory is distributed, access times across the address space are non-uniform; thus, this architecture is called Non-Uniform Memory Access (NUMA) architecture. Similarly to SMP, &#039;&#039;&#039;NUMA&#039;&#039;&#039; systems are typically used for applications such as data mining and decision support systems, in which processing can be parceled out to a number of processors that collectively work on common data. HPCC operates one &#039;&#039;&#039;NUMA&#039;&#039;&#039; node in Arrow, named &#039;&#039;&#039;Appel&#039;&#039;&#039;. This node does not have GPU. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039; Infrastructure systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
o	The Master Head Node (&#039;&#039;&#039;MHN/Arrow&#039;&#039;&#039;) is a redundant login node from which all jobs on all servers start. This server is not directly accessible from outside the CSI campus. Note that the main server and its login nodes share the same name, Arrow; thus users can access the Arrow login nodes using the name Arrow or MHN. &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Chizen&#039;&#039;&#039; is a redundant gateway server which provides access to protected HPCC domain.&lt;br /&gt;
&lt;br /&gt;
o	 &#039;&#039;&#039;Cea&#039;&#039;&#039; is a file transfer node allowing transfer of files between users’ computers and /scratch space or /global/u/&amp;lt;userid&amp;gt;. &#039;&#039;&#039;Cea&#039;&#039;&#039; is accessible directly (not only via &#039;&#039;&#039;Chizen&#039;&#039;&#039;), but allows only a limited set of shell commands. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Table 1&#039;&#039;&#039; below provides a quick summary of the attributes of each of the sub-clusters of the main HPC Center cluster, Arrow.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Master Head Node&lt;br /&gt;
!Sub System&lt;br /&gt;
!Tier&lt;br /&gt;
!Type&lt;br /&gt;
!Type of Jobs&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores&lt;br /&gt;
!GPUs&lt;br /&gt;
!Mem/node&lt;br /&gt;
!Mem/core&lt;br /&gt;
!Chip Type&lt;br /&gt;
!GPU Type and Interface&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;17&amp;quot; |&#039;&#039;&#039;&amp;lt;big&amp;gt;Arrow&amp;lt;/big&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Penzias&lt;br /&gt;
| rowspan=&amp;quot;10&amp;quot; |Advanced&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Hybrid Cluster&lt;br /&gt;
|Sequential &amp;amp; Parallel jobs w/wo GPU&lt;br /&gt;
|66&lt;br /&gt;
|16 &lt;br /&gt;
|2&lt;br /&gt;
|64 GB&lt;br /&gt;
|4 GB&lt;br /&gt;
|SB, EP 2.20 GHz&lt;br /&gt;
|K20m GPU, PCIe v2&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Sequential &amp;amp; Parallel jobs&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |1&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|1500 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|36&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|32 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Appel&lt;br /&gt;
|NUMA&lt;br /&gt;
|Massive Parallel, sequential, OpenMP&lt;br /&gt;
|1&lt;br /&gt;
|384&lt;br /&gt;
| -&lt;br /&gt;
|11 TB&lt;br /&gt;
|28 GB&lt;br /&gt;
|IB, 3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Cryo&lt;br /&gt;
|SMP&lt;br /&gt;
|Sequential and Parallel jobs, with GPU&lt;br /&gt;
|1&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|1500 GB&lt;br /&gt;
|37 GB&lt;br /&gt;
|SL, 2.40 GHz&lt;br /&gt;
|V100 (32GB) GPU, SXM&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Blue Moon&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Hybrid Cluster&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Sequential and Parallel jobs w/wo GPU&lt;br /&gt;
|24&lt;br /&gt;
|32&lt;br /&gt;
| -&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |192 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |6 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SL, 2.10 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|32 &lt;br /&gt;
|2&lt;br /&gt;
|V100(16GB) GPU, PCIe&lt;br /&gt;
|-&lt;br /&gt;
|Karle&lt;br /&gt;
|SMP&lt;br /&gt;
|Visualization, MATLAB/Mathematica&lt;br /&gt;
|1&lt;br /&gt;
|36*&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Chizen&lt;br /&gt;
|Gateway&lt;br /&gt;
|No jobs allowed&lt;br /&gt;
| colspan=&amp;quot;7&amp;quot; | -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CFD&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot; |Parallel, Seq, OpenMP &lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|768 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 4.8 GHz&lt;br /&gt;
|A40, PCIe, v4 &lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |PHYS&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|640 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4 GHz&lt;br /&gt;
|L40, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CHEM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.0 GHz&lt;br /&gt;
|A100/40, SXM&lt;br /&gt;
|-&lt;br /&gt;
|ASRC&lt;br /&gt;
|Condo&lt;br /&gt;
|SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|}&lt;br /&gt;
Note: SB = Intel(R) Sandy Bridge, HL = Intel(R) Haswell, IB = Intel(R) Ivy Bridge, SL = Intel(R) Xeon(R) Gold, ER = AMD(R) EPYC Rome, EM = AMD(R) EPYC Milan, EG = AMD(R) EPYC Genoa &lt;br /&gt;
&lt;br /&gt;
== Recovery of  operational costs ==&lt;br /&gt;
CUNY-HPCC is a not-for-profit core research facility at CUNY. Our mission is to support all types of research that require advanced computational resources. CUNY-HPCC operations are not for profit and are NOT directly or indirectly sponsored by CUNY or the College of Staten Island (CSI). Consequently, CUNY-HPCC applies a cost recovery model recapturing only &#039;&#039;&#039;&amp;lt;u&amp;gt;operational costs, with no profit for HPCC&amp;lt;/u&amp;gt;&#039;&#039;&#039;. The recovered costs are calculated using actual documented operational expenses and are break-even for all CUNY users. The methodology is approved by CUNY-RF and is the same methodology used in other CUNY research facilities. The costs are reviewed and updated twice a year. The cost recovery charging scheme is based on the &#039;&#039;&#039;&amp;lt;u&amp;gt;unit-hour&amp;lt;/u&amp;gt;&#039;&#039;&#039;. A unit can be either a CPU unit or a GPU unit; the definitions are given in the table below:&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Definitions of unit-hour&lt;br /&gt;
!Type of resource&lt;br /&gt;
!Unit-hour &lt;br /&gt;
!For V100, A30, A40 or L40&lt;br /&gt;
!For A100&lt;br /&gt;
|-&lt;br /&gt;
|CPU unit &lt;br /&gt;
|1 cpu core/hour&lt;br /&gt;
| --&lt;br /&gt;
| --&lt;br /&gt;
|-&lt;br /&gt;
|GPU unit &lt;br /&gt;
|(4 cpu cores + 1 GPU thread)/hour&lt;br /&gt;
|4 cpu cores + 1 GPU&lt;br /&gt;
|4 cpu cores and 1/7 A100 &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== HPCC access plans  ===&lt;br /&gt;
a.     &#039;&#039;&#039;Minimum access (MAP):&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Minimum access is designed to provide wide support for research activities in any college, to promote collaboration between colleges, to help establish a new research project, and/or to serve as a testbed for new studies. MAP accounts operate under a strict fair share policy, so the actual waiting time for a job in a queue depends on the resources used by that account in previous cycles. In addition, all jobs have strict time limitations; therefore long jobs must use checkpoints.&lt;br /&gt;
&lt;br /&gt;
The MAP has 3 tiers:&lt;br /&gt;
&lt;br /&gt;
·     A: The Basic tier fee is $5000 per year. It is designed to provide support for users from colleges with a low level of research activity. The fee covers the infrastructure expenses associated with 1-2 users from these colleges. &lt;br /&gt;
&lt;br /&gt;
·     B: The Medium tier fee is $15,000 per year. The fee covers the infrastructure expenses of up to 12 users from these colleges. In addition, every account under the Medium tier gets 11520 free CPU hours and 1440 free GPU hours upon opening. &lt;br /&gt;
&lt;br /&gt;
·      C: The Advanced tier fee is $25,000 per year. The fee covers the infrastructure expenses for all users from these colleges. In addition, every new account from this tier gets 11520 free CPU hours and 1440 free GPU hours upon opening. &lt;br /&gt;
&lt;br /&gt;
MAP users are charged per CPU/GPU hour at the low rate of &#039;&#039;&#039;&amp;lt;u&amp;gt;$0.015 per cpu hour and $0.09 per GPU hour&amp;lt;/u&amp;gt;.&#039;&#039;&#039; The table below gives examples; a short cost-arithmetic sketch follows it. &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for MAP users&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.015/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.24/hour&lt;br /&gt;
|-&lt;br /&gt;
| 4 cores + 1 GPU &lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.15/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.33/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2  &lt;br /&gt;
|$0.42/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2  &lt;br /&gt;
|$0.66/hour&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$1.32/hour&lt;br /&gt;
|}&lt;br /&gt;
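&lt;br /&gt;
As a small arithmetic sketch of how the figures above are obtained (assuming the MAP rates of $0.015 per cpu hour and $0.09 per GPU hour), the hourly cost of a job is cores x $0.015 plus GPUs x $0.09. For example, 16 cores with 2 GPU cost $0.42 per hour, so a 24-hour run costs:&lt;br /&gt;
&lt;br /&gt;
 # (16 cores x 0.015 + 2 GPU x 0.09) dollars per hour, for 24 hours&lt;br /&gt;
 echo &amp;quot;(16*0.015 + 2*0.09)*24&amp;quot; | bc -l    # prints 10.080, i.e. about $10.08&lt;br /&gt;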
&lt;br /&gt;
&lt;br /&gt;
b.     &#039;&#039;&#039;Computing on demand (CODP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The Computing on Demand Plan (CODP) is open to all users from all CUNY colleges that do not participate in the MAP plan but want to use HPCC resources. CODP accounts operate under a strict fair share policy, so the actual waiting time for a job in a queue depends on resources previously used. In addition, all jobs have time limitations, so long jobs must use checkpoints. Users in CODP are charged for the time (CPU and GPU) per hour. The current rates are &#039;&#039;&#039;$0.018 per cpu hour and $0.11 per GPU hour.&#039;&#039;&#039; Unlike MAP, new CODP accounts do not come with free time. Invoices are generated and sent to users (PI only) at the end of each month. The examples in the following table explain the fee structure:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for CODP plan&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.018/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.288/hour&lt;br /&gt;
|-&lt;br /&gt;
|4 cores + 1 GPU&lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.293/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.334/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 1 GPU&lt;br /&gt;
|32&lt;br /&gt;
|1  &lt;br /&gt;
|$0.666/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU &lt;br /&gt;
|32&lt;br /&gt;
|2&lt;br /&gt;
|$0.756/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
c.  &#039;&#039;&#039;Leasing node(s) (LNP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The Leasing Node Plan allows users to lease node(s) for the duration of the project. The minimum lease time is 30 days (one month), but leases of any length are possible. A discount of 10% is given to users whose lease is longer than 90 days. Discounts cannot be combined. Unlike MAP and CODP, LNP users do not compete for resources and have full access to the rented resources 24/7. &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Lease node(s) fees for MAP users&lt;br /&gt;
|Job (MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/30 days&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$172.80&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$264.96&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2&lt;br /&gt;
|$302.40&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$475.20&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$760.0&lt;br /&gt;
|-&lt;br /&gt;
|64 cores + 8 GPU&lt;br /&gt;
|64&lt;br /&gt;
|8&lt;br /&gt;
|$950.40&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for lease a node(s) for &amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; non &amp;lt;/span&amp;gt; -MAP users&lt;br /&gt;
|Job (non-MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/month&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$249.82&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$497.64&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1&lt;br /&gt;
|$443.23&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$886.64&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40 &lt;br /&gt;
|8&lt;br /&gt;
|$1399.68&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
d.     &#039;&#039;&#039;Condo Ownership (COP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Condo describes a model where user(s) own a node/server managed by HPCC. Only full-time faculty can own a condo node. Condo nodes are fully integrated into the HPCC infrastructure. The owners pay only HPCC’s infrastructure support operational fee, which includes only a proportional part of the licenses and materials needed for day-to-day operations. The fees are reviewed twice a year and are currently $0.003 per CPU hour and $0.02 per GPU hour. Condo owners can “borrow” (upon agreement) free of charge any node(s) from the condo stack and can also lease (for a higher fee – see below) their own nodes to non-condo users. The minimum lease time is 30 days. The fees collected from non-condo users offset the payments of the owner. &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Condo owners costs per year&lt;br /&gt;
|Type of condo node&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/year&lt;br /&gt;
|-&lt;br /&gt;
|Large hybrid SXM&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|$4518.92 &lt;br /&gt;
|-&lt;br /&gt;
|Small hybrid&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|$1540.54&lt;br /&gt;
|-&lt;br /&gt;
|Medium compute&lt;br /&gt;
|96&lt;br /&gt;
|0&lt;br /&gt;
|$2464.86&lt;br /&gt;
|-&lt;br /&gt;
|Large compute&lt;br /&gt;
|128&lt;br /&gt;
|0&lt;br /&gt;
|$3286.49&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Condo owners can lease their node(s) to other, non-condo users. The renting period is unlimited, with a minimum length of 30 days. The table below shows the payments that non-condo users make to the condo owners. These fees are accumulated in the owner&#039;s account(s) and offset the owner&#039;s dues. A discount of 10% is applied for leases longer than 90 days. &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Type of nodes and lease fees for condo nodes&lt;br /&gt;
!Type of node&lt;br /&gt;
!Renters cost/month&lt;br /&gt;
!Long term (90+ days) rent cost/month&lt;br /&gt;
!CPU/node&lt;br /&gt;
!CPU type&lt;br /&gt;
!GPU/node &lt;br /&gt;
!GPU type&lt;br /&gt;
!GPU interface&lt;br /&gt;
|-&lt;br /&gt;
|Large Hybrid&lt;br /&gt;
|$602.52&lt;br /&gt;
|$564.86&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.2 GHz&lt;br /&gt;
|8&lt;br /&gt;
|A100/80&lt;br /&gt;
|SXM&lt;br /&gt;
|-&lt;br /&gt;
|Small Hybrid&lt;br /&gt;
|$205.41&lt;br /&gt;
|$192.57&lt;br /&gt;
|48&lt;br /&gt;
|EPYC, 2.8 GHz&lt;br /&gt;
|2&lt;br /&gt;
|A40, A30, L40&lt;br /&gt;
|PCIe v4&lt;br /&gt;
|-&lt;br /&gt;
|Medium Non GPU&lt;br /&gt;
|$328.65&lt;br /&gt;
|$308.11&lt;br /&gt;
|96&lt;br /&gt;
|EPYC, 4.11 GHz&lt;br /&gt;
|0&lt;br /&gt;
|None&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|Large Non GPU&lt;br /&gt;
|$438.20&lt;br /&gt;
|$410.81&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.0 GHz&lt;br /&gt;
|0&lt;br /&gt;
|None &lt;br /&gt;
|NA&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Free time ===&lt;br /&gt;
Any new project &#039;&#039;&#039;from CUNY colleges/centers that participate in MAP-B or MAP-C&#039;&#039;&#039; is entitled to &#039;&#039;&#039;11520 free CPU hours and 1440 free GPU hours.&#039;&#039;&#039; Users under &#039;&#039;&#039;MAP-A&#039;&#039;&#039; are not eligible for free time. The free compute hours are intended to help establish a project and are therefore shared by all members of the project: they can be used by the PI or by any number of project members. It is important to note that &amp;lt;u&amp;gt;free time is granted per project, not per user account, so any project can receive free time only once. Collaborators external to CUNY are not normally eligible for free time.&amp;lt;/u&amp;gt; Hours used beyond the free allocation are charged at MAP plan rates. &#039;&#039;&#039;&amp;lt;u&amp;gt;Please contact the CUNY-HPCC director for further details.&amp;lt;/u&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Support for research grants ==&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;u&amp;gt;All proposals dated Jan 1st 2026 (&amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; 01/01/26 &amp;lt;/span&amp;gt;) or later&amp;lt;/u&amp;gt;&#039;&#039;&#039; that require computational resources &#039;&#039;&#039;&amp;lt;u&amp;gt;must include a budget for cost recovery fees at CUNY-HPCC.&amp;lt;/u&amp;gt;&#039;&#039;&#039; For a project the PI can choose among the following options: &lt;br /&gt;
&lt;br /&gt;
* Lease node(s). This is a useful option for well defined projects and for projects with a large computational component that require 100% availability of the computational resource. &lt;br /&gt;
* Use &amp;quot;on-demand&amp;quot; resources. This is a flexible option well suited to experimental projects or to exploring new areas of study. The drawback is that resources are shared among all users under the fair share policy, so immediate access to a resource cannot be guaranteed. &lt;br /&gt;
* Participate in the CONDO tier. This is the most beneficial option in terms of availability of resources and level of support. It fits best the focused research of a group or groups (e.g. materials science). &lt;br /&gt;
&lt;br /&gt;
In all cases the PI can use the appropriate rates listed above to establish a correct budget for the proposal. The PI should &#039;&#039;&#039;&amp;lt;u&amp;gt;contact the Director of CUNY-HPCC, Dr. Alexander Tzanov&amp;lt;/u&amp;gt;&#039;&#039;&#039; (alexander.tzanov@csi.cuny.edu) to discuss the project&#039;s computational requirements, including optimal and most economical computational workflows, suitable hardware, shared or dedicated resources, CUNY-HPCC support options, and any other matter concerning a correct and optimal computational budget for the proposal. &lt;br /&gt;
&lt;br /&gt;
== Partitions and jobs ==&lt;br /&gt;
The only way to submit job(s) to HPCC servers is through the SLURM batch system. Any job, regardless of its type (interactive, batch, serial, parallel, etc.), must be submitted via SLURM, which allocates the requested resources on the proper server and starts the job(s) according to a predefined, strict fair share policy. Computational resources (cpu-cores, memory, GPU) are organized in &#039;&#039;&#039;partitions&#039;&#039;&#039;. Users are granted permission to use one or another partition together with a corresponding QOS key. The table below shows the partitions and their limitations (in progress); a brief example of submitting and monitoring jobs is given after the partition list below.&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Partition&lt;br /&gt;
!Max cores/job&lt;br /&gt;
!Max jobs/user&lt;br /&gt;
!Total cores/group&lt;br /&gt;
!Time limits&lt;br /&gt;
!Tier&lt;br /&gt;
!&lt;br /&gt;
!GPU types&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|partnsf&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|K20m, V100/16, A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partchem&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A100/80, A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partcfd&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partsym&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partasrc&lt;br /&gt;
|48&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabD&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|V100/16,A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabN&lt;br /&gt;
|384&lt;br /&gt;
|50&lt;br /&gt;
|384&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|None&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partphys&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|L40&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;partnsf&#039;&#039;&#039; is the main partition, with resources assigned across all sub-servers. Users may submit sequential, thread-parallel or distributed-parallel jobs with or without GPUs.&lt;br /&gt;
* &#039;&#039;&#039;partchem&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partphys&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partsym&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partasrc&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* The &#039;&#039;&#039;partmatlabD&#039;&#039;&#039; partition allows running MATLAB&#039;s Distributed Parallel Server across the main cluster.&lt;br /&gt;
* The &#039;&#039;&#039;partmatlabN&#039;&#039;&#039; partition provides access to the large MATLAB node with 384 cores and 11 TB of shared memory. It is useful for running parallel MATLAB jobs with the MATLAB Parallel Toolbox.&lt;br /&gt;
* &#039;&#039;&#039;partdev&#039;&#039;&#039; is dedicated to development. All HPCC users have access to this partition, which has the assigned resources of one computational node with 16 cores, 64 GB of memory and 2 GPUs (K20m). This partition has a time limit of 4 hours.&lt;br /&gt;
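&lt;br /&gt;
As an illustration only, a minimal SLURM batch script for the &#039;&#039;&#039;partnsf&#039;&#039;&#039; partition might look like the sketch below. The QOS name, module name, directory and program names are placeholders, not official HPCC settings, and must be replaced with the values assigned to your project.&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 # partition from the table above and the QOS key assigned to your project&lt;br /&gt;
 # (the QOS name here is a placeholder)&lt;br /&gt;
 #SBATCH --job-name=example_job&lt;br /&gt;
 #SBATCH --partition=partnsf&lt;br /&gt;
 #SBATCH --qos=myqos&lt;br /&gt;
 # request 4 CPU cores and one GPU; omit the gres line for CPU-only jobs&lt;br /&gt;
 #SBATCH --ntasks=4&lt;br /&gt;
 #SBATCH --gres=gpu:1&lt;br /&gt;
 # stay well under the 240-hour limit of partnsf&lt;br /&gt;
 #SBATCH --time=24:00:00&lt;br /&gt;
 #SBATCH --output=example_%j.out&lt;br /&gt;
 &lt;br /&gt;
 # load the application environment (module names are site specific)&lt;br /&gt;
 module load openmpi&lt;br /&gt;
 &lt;br /&gt;
 # run the application from the scratch space&lt;br /&gt;
 cd /scratch/$USER/myrun&lt;br /&gt;
 srun ./my_application&lt;br /&gt;
The script is submitted with &#039;&#039;&#039;sbatch&#039;&#039;&#039; and the queue can be monitored with &#039;&#039;&#039;squeue&#039;&#039;&#039;.&lt;br /&gt;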
&lt;br /&gt;
== Hours of Operation ==&lt;br /&gt;
In order to maximize the use of resources, HPCC applies a “rolling” maintenance scheme across all systems. When downtime is needed, HPCC will notify all users a week or more in advance (unless an emergency occurs). The fourth Tuesday morning of each month, from 8:00 AM to 12:00 PM, is normally reserved (but not always used) for scheduled maintenance, so please plan accordingly. Unplanned maintenance to remedy system-related problems may be scheduled as needed outside the above-mentioned days; reasonable attempts will be made to inform users running on the affected systems when these needs arise. Note that users are strongly encouraged to use checkpoints in their jobs.&lt;br /&gt;
&lt;br /&gt;
== User Support ==&lt;br /&gt;
Users are strongly encouraged to read this Wiki carefully before submitting ticket(s) for help. In particular, the sections on compiling and running parallel programs, and the section on the SLURM batch queueing system will give you the essential knowledge needed to use the CUNY HPCC systems.  We have strived to maintain the most uniform user applications environment possible across the Center&#039;s systems to ease the transfer of applications&lt;br /&gt;
and run scripts among them.  &lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY&lt;br /&gt;
community in parallel programming techniques, HPC computing architecture, and the essentials of using our&lt;br /&gt;
systems. Please follow our mailings on the subject and feel free to inquire about such courses. We regularly schedule training visits and classes at the various CUNY campuses; please let us know if such a training visit is of interest. In the past, topics have included an overview of parallel programming, GPU programming and architecture, using the evolutionary biology software at the HPC Center, the SLURM queueing system at the CUNY HPC Center, mixed GPU-MPI and OpenMP programming, etc. Staff members have also presented guest lectures in formal classes throughout the CUNY campuses.    &lt;br /&gt;
&lt;br /&gt;
If you have problems accessing your account and cannot login to the ticketing service, please send an email to:&lt;br /&gt;
&lt;br /&gt;
  [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu] &lt;br /&gt;
&lt;br /&gt;
== Warnings and modes of operation ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. hpchelp@csi.cuny.edu is for questions and account-help communication &#039;&#039;&#039;only&#039;&#039;&#039; and does not accept tickets unless the ticketing system is not operational. For tickets, please use the ticketing system mentioned above. This ensures that the staff member with the most appropriate skill set and job-related responsibility will respond to your questions. During the business week you should expect a response within 48 hours, often the same day. During the weekend you may not get a response. &lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;&#039;E-mails to hpchelp@csi.cuny.edu must have a valid CUNY e-mail as the reply address.&#039;&#039;&#039; Messages originating from public mailers (Gmail, Hotmail, etc.) are filtered out.&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;&#039;Do not send questions to individual CUNY HPC Center staff members directly.&#039;&#039;&#039;  These will be returned to the sender with a polite request to submit a ticket or email the Helpline.  This applies to replies to initial questions as well.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff members are focused on providing high quality support to its user community, but compared&lt;br /&gt;
to other HPC Centers of similar size &#039;&#039;&#039;our staff is extremely lean&#039;&#039;&#039;.  Please make full use of the tools that we have provided (especially&lt;br /&gt;
the Wiki), and feel free to offer suggestions for improved service.  We hope and expect your experience in using&lt;br /&gt;
our systems will be predictably good and productive.&lt;br /&gt;
&lt;br /&gt;
== User Manual ==&lt;br /&gt;
&lt;br /&gt;
The old version of the user manual provides PBS, not SLURM, batch scripts as examples. CUNY-HPCC currently uses the SLURM scheduler, so users must rely only on the updated brief SLURM manual distributed with new accounts, or ask CUNY-HPCC for a copy of it.&lt;/div&gt;</summary>
		<author><name>Alex</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=956</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Main_Page&amp;diff=956"/>
		<updated>2026-03-05T19:19:33Z</updated>

		<summary type="html">&lt;p&gt;Alex: /* Free time */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
[[Image:hpcc-panorama3.png]]&lt;br /&gt;
&lt;br /&gt;
The City University of New York (CUNY) High Performance Computing Center (HPCC) is located on the&lt;br /&gt;
campus of the College of Staten Island, 2800 Victory Boulevard, Staten Island, New York 10314.  HPCC&lt;br /&gt;
goals are to: &lt;br /&gt;
&lt;br /&gt;
:*Support the scientific computing needs of CUNY faculty, their collaborators at other universities, and their public and private sector partners, and CUNY students and research staff.&lt;br /&gt;
:*Create opportunities for the CUNY research community to develop new partnerships with the government and private sectors; and&lt;br /&gt;
:*Leverage the HPC Center capabilities to acquire additional research resources for its faculty and graduate students in existing and major new programs.&lt;br /&gt;
&lt;br /&gt;
==Organization of systems and data storage (architecture)==&lt;br /&gt;
&lt;br /&gt;
All user data and project data are kept on the Parallel File System Storage (PFSS), which is mounted only on the login node(s) of all servers. It holds both the user directories and a specific partition called &#039;&#039;&#039;/scratch&#039;&#039;&#039; (see below). The main features of the &#039;&#039;&#039;/scratch&#039;&#039;&#039; partition are: &#039;&#039;&#039;1.&#039;&#039;&#039; it is mounted on all computational nodes and on all login nodes; &#039;&#039;&#039;2.&#039;&#039;&#039; it is fast; &#039;&#039;&#039;3.&#039;&#039;&#039; it is temporary space and is &#039;&#039;&#039;not a home directory&#039;&#039;&#039; for accounts, nor can it be used for long-term data preservation. Users must use the &amp;quot;staging&amp;quot; procedure described below to ensure preservation of their data, codes and parameter files. The figure below is a schematic of the environment.   &lt;br /&gt;
&lt;br /&gt;
Upon  registering with HPCC every user will get 2 directories:&lt;br /&gt;
&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/scratch/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – this is temporary workspace on the HPC systems. Currently scratch resides on the same file system as global/u.&lt;br /&gt;
:•	&#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; – space for “home directory”, i.e., storage space on the DSMS for program, scripts, and data&lt;br /&gt;
&lt;br /&gt;
:•	In some instances a user will also have use of disk space on the DSMS in &#039;&#039;&#039;&amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/cunyZone/home/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;projectid&amp;gt;&amp;lt;/font color&amp;gt;&amp;lt;/font&amp;gt;&#039;&#039;&#039; (IRods). &lt;br /&gt;
[[File:HPCC_structure.png|center|frameless|900x900px]]&lt;br /&gt;
The &#039;&#039;&#039;/global/u/&amp;lt;userid&amp;gt;&#039;&#039;&#039; directory has a quota (see below for details), while &#039;&#039;&#039;/scratch/&amp;lt;userid&amp;gt;&#039;&#039;&#039; does not. However, the &#039;&#039;&#039;/scratch&#039;&#039;&#039; space is cleaned up following the rules described below, and there are no guarantees of any kind that files in &#039;&#039;&#039;/scratch&#039;&#039;&#039; will be preserved through hardware crashes or clean-ups. Access to all HPCC resources is provided through the bastion host called &#039;&#039;&#039;chizen&#039;&#039;&#039;. The Data Transfer Node called &#039;&#039;&#039;Cea&#039;&#039;&#039; allows file transfer from/to remote sites directly to/from &#039;&#039;&#039;/global/u/&amp;lt;userid&amp;gt;&#039;&#039;&#039; or &#039;&#039;&#039;/scratch/&amp;lt;userid&amp;gt;&#039;&#039;&#039;.&lt;br /&gt;
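&lt;br /&gt;
As a sketch only, assuming standard two-hop SSH access through the bastion host, a typical login session looks like the lines below. The user name and host names are placeholders; the exact addresses are provided with your account credentials.&lt;br /&gt;
 # first hop: the bastion host chizen (replace myuserid and the host name&lt;br /&gt;
 # with the values provided by CUNY-HPCC)&lt;br /&gt;
 ssh myuserid@chizen&lt;br /&gt;
 &lt;br /&gt;
 # second hop: from chizen to the Arrow login node (MHN)&lt;br /&gt;
 ssh arrow&lt;br /&gt;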
&lt;br /&gt;
==HPC systems==&lt;br /&gt;
&lt;br /&gt;
The HPC Center operates a variety of architectures in order to support complex and demanding workflows. All computational resources of different types are united into a single hybrid cluster called Arrow. The latter deploys symmetric multiprocessor (SMP) nodes with and without GPUs, a distributed shared memory (NUMA) node, fat (large memory) nodes, and advanced SMP nodes with multiple GPUs. The number of GPUs per node varies between 2 and 8, as do the GPU interface and GPU family. Thus the basic GPU nodes hold two Tesla K20m cards (attached through the PCIe interface), while the most advanced ones support eight Ampere A100 GPUs connected via the SXM interface.   &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Overview of Computational architectures&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SMP&#039;&#039;&#039; servers have several processors (working under a single operating system) which &amp;quot;share everything&amp;quot;: all CPU cores allocate from a common memory block over a shared bus or data path. SMP servers support all combinations of memory versus CPU (up to the limits of the particular computer). SMP servers are commonly used to run sequential or thread-parallel (e.g. OpenMP) jobs, and they may or may not have GPUs.  &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;cluster&#039;&#039;&#039; is defined as a single system comprising a set of servers interconnected with a high-performance network. Specific software coordinates programs on and/or across these servers in order to perform computationally intensive tasks. The most common cluster type consists of several identical SMP servers connected via a fast interconnect. Each SMP member of the cluster is called a &#039;&#039;&#039;node&#039;&#039;&#039;. All nodes run independent copies of the same operating system (OS). Some or all of the nodes may incorporate GPUs.  &lt;br /&gt;
&lt;br /&gt;
Hybrid clusters combine nodes of different architectures. For instance, the main CUNY-HPCC machine is a hybrid cluster called &#039;&#039;&#039;Arrow&#039;&#039;&#039;. Sixty-two (62) of its nodes are identical GPU-enabled SMP servers, each with 2 K20m GPUs; 3 are SMP servers with extended memory (fat nodes); one node is a distributed shared memory node (NUMA, see below); and 2 are fat SMP servers specially designed to support 8 NVIDIA GPUs per node, connected via the SXM interface. In addition, HPCC operates the cluster &#039;&#039;&#039;Herbert&#039;&#039;&#039;, dedicated only to education. &lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;distributed shared memory&#039;&#039;&#039; computer is a tightly coupled server in which the memory is physically distributed but logically unified as a single block. The system resembles an SMP, but the possible number of CPU cores and amount of memory are far beyond the limitations of an SMP. Because the memory is distributed, access times across the address space are non-uniform; this architecture is therefore called Non-Uniform Memory Access (NUMA) architecture. Similarly to SMP, &#039;&#039;&#039;NUMA&#039;&#039;&#039; systems are typically used for applications such as data mining and decision support systems, in which processing can be parceled out to a number of processors that collectively work on common data. HPCC operates the &#039;&#039;&#039;NUMA&#039;&#039;&#039; node in Arrow named &#039;&#039;&#039;Appel&#039;&#039;&#039;. This node does not have GPUs. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039; Infrastructure systems&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
o	The Master Head Node (&#039;&#039;&#039;MHN/Arrow&#039;&#039;&#039;) is a redundant login node from which all jobs on all servers start. This server is not directly accessible from outside the CSI campus. Note that the main server and its login nodes share the same name, Arrow; users can therefore refer to the Arrow login nodes as either Arrow or MHN.  &lt;br /&gt;
&lt;br /&gt;
o	&#039;&#039;&#039;Chizen&#039;&#039;&#039; is a redundant gateway server which provides access to the protected HPCC domain.&lt;br /&gt;
&lt;br /&gt;
o	 &#039;&#039;&#039;Cea&#039;&#039;&#039; is a file transfer node that allows transfer of files between users’ computers and the /scratch space or /global/u/&amp;lt;userid&amp;gt;. &#039;&#039;&#039;Cea&#039;&#039;&#039; is accessible directly (not only via &#039;&#039;&#039;Chizen&#039;&#039;&#039;), but allows only a limited set of shell commands.  &lt;br /&gt;
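&lt;br /&gt;
As a sketch only, staging data through &#039;&#039;&#039;Cea&#039;&#039;&#039; from a local computer could look like the lines below. The user name, the Cea host name and the file and directory names are placeholders, not official HPCC values.&lt;br /&gt;
 # copy an input archive from the local machine to the scratch space via Cea&lt;br /&gt;
 scp input_data.tar.gz myuserid@cea:/scratch/myuserid/&lt;br /&gt;
 &lt;br /&gt;
 # or synchronize a whole directory into the home directory on /global/u&lt;br /&gt;
 rsync -av project_dir/ myuserid@cea:/global/u/myuserid/project_dir/&lt;br /&gt;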
&lt;br /&gt;
&#039;&#039;&#039;Table 1&#039;&#039;&#039; below provides a quick summary of the attributes of each of the sub-clusters of the main HPC Center cluster, called Arrow.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Master Head Node&lt;br /&gt;
!Sub System&lt;br /&gt;
!Tier&lt;br /&gt;
!Type&lt;br /&gt;
!Type of Jobs&lt;br /&gt;
!Nodes&lt;br /&gt;
!CPU Cores&lt;br /&gt;
!GPUs&lt;br /&gt;
!Mem/node&lt;br /&gt;
!Mem/core&lt;br /&gt;
!Chip Type&lt;br /&gt;
!GPU Type and Interface&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;17&amp;quot; |&#039;&#039;&#039;&amp;lt;big&amp;gt;Arrow&amp;lt;/big&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Penzias&lt;br /&gt;
| rowspan=&amp;quot;10&amp;quot; |Advanced&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot; |Hybrid Cluster&lt;br /&gt;
|Sequential &amp;amp; Parallel jobs w/wo GPU&lt;br /&gt;
|66&lt;br /&gt;
|16 &lt;br /&gt;
|2&lt;br /&gt;
|64 GB&lt;br /&gt;
|4 GB&lt;br /&gt;
|SB, EP 2.20 GHz&lt;br /&gt;
|K20m GPU, PCIe v2&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |Sequential &amp;amp; Parallel jobs&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |1&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|1500 GB&lt;br /&gt;
|62 GB&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|36&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|24&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|32 GB&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Appel&lt;br /&gt;
|NUMA&lt;br /&gt;
|Massive Parallel, sequential, OpenMP&lt;br /&gt;
|1&lt;br /&gt;
|384&lt;br /&gt;
| -&lt;br /&gt;
|11 TB&lt;br /&gt;
|28 GB&lt;br /&gt;
|IB, 3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Cryo&lt;br /&gt;
|SMP&lt;br /&gt;
|Sequential and Parallel jobs, with GPU&lt;br /&gt;
|1&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|1500 GB&lt;br /&gt;
|37 GB&lt;br /&gt;
|SL, 2.40 GHz&lt;br /&gt;
|V100 (32GB) GPU, SXM&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Blue Moon&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Hybrid Cluster&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Sequential and Parallel jobs w/wo GPU&lt;br /&gt;
|24&lt;br /&gt;
|32&lt;br /&gt;
| -&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |192 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |6 GB&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SL, 2.10 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|32 &lt;br /&gt;
|2&lt;br /&gt;
|V100(16GB) GPU, PCIe&lt;br /&gt;
|-&lt;br /&gt;
|Karle&lt;br /&gt;
|SMP&lt;br /&gt;
|Visualization, MATLAB/Mathematica&lt;br /&gt;
|1&lt;br /&gt;
|36*&lt;br /&gt;
| -&lt;br /&gt;
|768 GB&lt;br /&gt;
|21 GB&lt;br /&gt;
|HL, 2.30 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
|Chizen&lt;br /&gt;
|Gateway&lt;br /&gt;
|No jobs allowed&lt;br /&gt;
| colspan=&amp;quot;7&amp;quot; | -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CFD&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot; |Parallel, Seq, OpenMP &lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|768 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 4.8 GHz&lt;br /&gt;
|A40, PCIe, v4 &lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |PHYS&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|640 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4 GHz&lt;br /&gt;
|L40, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
| -&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 4.3 GHz&lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |CHEM&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |Condo&lt;br /&gt;
| rowspan=&amp;quot;2&amp;quot; |SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|EM, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|512 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.0 GHz&lt;br /&gt;
|A100/40, SXM&lt;br /&gt;
|-&lt;br /&gt;
|ASRC&lt;br /&gt;
|Condo&lt;br /&gt;
|SMP&lt;br /&gt;
|1&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|256 GB&lt;br /&gt;
|&lt;br /&gt;
|ER, 2.8 GHz&lt;br /&gt;
|A30, PCIe, v4&lt;br /&gt;
|}&lt;br /&gt;
Note: SB = Intel(R) Sandy Bridge, HL = Intel(R) Haswell, IB = Intel(R) Ivy Bridge, SL = Intel(R) Xeon(R) Gold, ER = AMD(R) EPYC ROME, EM = AMD(R) EPYC MILAN, EG = AMD(R) EPYC GENOA   &lt;br /&gt;
&lt;br /&gt;
== Recovery of  operational costs ==&lt;br /&gt;
CUNY-HPCC is a not-for-profit core research facility at CUNY. Our mission is to support all types of research that require advanced computational resources. CUNY-HPCC operations are not for profit and are NOT directly or indirectly sponsored by CUNY or the College of Staten Island (CSI). Consequently, CUNY-HPCC applies a cost recovery model that recaptures only &#039;&#039;&#039;&amp;lt;u&amp;gt;operational costs, with no profit for HPCC&amp;lt;/u&amp;gt;&#039;&#039;&#039;. The recovered costs are calculated from actual documented operational expenses and are break-even for all CUNY users. The methodology used is approved by CUNY-RF and is the same methodology used in other CUNY research facilities. The costs are reviewed and updated twice a year. The cost recovery charging schema is based on the &#039;&#039;&#039;&amp;lt;u&amp;gt;unit-hour&amp;lt;/u&amp;gt;&#039;&#039;&#039;. A unit can be either a CPU unit or a GPU unit. The definitions are given in the table below:&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Definitions of unit-hour&lt;br /&gt;
!Type of resource&lt;br /&gt;
!Unit-hour &lt;br /&gt;
!For V100, A30, A40 or L40&lt;br /&gt;
!For A100&lt;br /&gt;
|-&lt;br /&gt;
|CPU unit &lt;br /&gt;
|1 cpu core/hour&lt;br /&gt;
| --&lt;br /&gt;
| --&lt;br /&gt;
|-&lt;br /&gt;
|GPU unit &lt;br /&gt;
|(4 cpu cores + 1 GPU thread )/hour&lt;br /&gt;
|4 cpu cores + 1 GPU&lt;br /&gt;
|4 cpu cores and 1/7 A100 &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== HPCC access plans  ===&lt;br /&gt;
a.     &#039;&#039;&#039;Minimum access (MAP):&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Minimum access is designed to provide wide support for research activities in any college, to promote collaboration between colleges, to help establish a new research project, and/or to be a testbed for new studies. MAP accounts operate under a strict fair share policy, so the actual waiting time of a job in a queue depends on the resources used by that account in previous cycles. In addition, all jobs have strict time limitations; therefore long jobs must use checkpoints.&lt;br /&gt;
&lt;br /&gt;
The MAP has 3 tiers:&lt;br /&gt;
&lt;br /&gt;
·     A: The basic tier fee is $5,000 per year. It is designed to provide support for users from colleges with a low level of research activity. The fee covers the infrastructure expenses associated with 1-2 users from these colleges. &lt;br /&gt;
&lt;br /&gt;
·     B: The medium tier fee is $15,000 per year. The fee covers the infrastructure expenses of up to 12 users from these colleges. In addition, every account under the medium tier gets 11520 free CPU hours and 1440 free GPU hours upon opening. &lt;br /&gt;
&lt;br /&gt;
·     C: The advanced tier fee is $25,000 per year. The fee covers the infrastructure expenses for all users from these colleges. In addition, every new account under this tier gets 11520 free CPU hours and 1440 free GPU hours upon opening.  &lt;br /&gt;
&lt;br /&gt;
MAP users are charged per CPU/GPU hour at the low rate of &#039;&#039;&#039;&amp;lt;u&amp;gt;$0.015 per CPU hour and $0.09 per GPU hour&amp;lt;/u&amp;gt;.&#039;&#039;&#039;   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for MAP users&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.015/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.24/hour&lt;br /&gt;
|-&lt;br /&gt;
| 4 cores + 1 GPU &lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.15/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.33/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2  &lt;br /&gt;
|$0.42/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2  &lt;br /&gt;
|$0.66/hour&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$1.32/hour&lt;br /&gt;
|}&lt;br /&gt;
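&lt;br /&gt;
The rows in the table above follow directly from the MAP rates: the hourly cost of a job is the number of CPU cores times $0.015 plus the number of GPUs times $0.09. For example, 32 cores with 2 GPUs cost 32 x $0.015 + 2 x $0.09 = $0.48 + $0.18 = $0.66 per hour. A minimal sketch of this arithmetic (an illustrative calculation, not an official HPCC tool):&lt;br /&gt;
 # hourly MAP cost: cores x 0.015 + gpus x 0.09 (rates listed above)&lt;br /&gt;
 cores=32&lt;br /&gt;
 gpus=2&lt;br /&gt;
 echo &amp;quot;$cores * 0.015 + $gpus * 0.09&amp;quot; | bc -l    # prints .660, i.e. $0.66 per hour&lt;br /&gt;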
&lt;br /&gt;
&lt;br /&gt;
b.     &#039;&#039;&#039;Computing on demand (CODP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Computing on demand plan (CODP) is open to all users from all CUNY colleges who do not participate in the MAP plan but want to use the HPCC resources. CODP accounts operate under a strict fair share policy, so the actual waiting time of a job in a queue depends on the resources previously used. In addition, all jobs have time limitations, so long jobs must use checkpoints. Users in CODP are charged for the time (CPU and GPU) per hour. The current rates are &#039;&#039;&#039;$0.018 per CPU hour and $0.11 per GPU hour.&#039;&#039;&#039; Unlike MAP, new CODP accounts do not come with free time. Invoices are generated and sent to users (the PI only) at the end of each month. The examples in the following table illustrate the fee structure:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Cost recovery fees for CODP plan&lt;br /&gt;
|Job&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/hour&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|$0.018/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$0.288/hour&lt;br /&gt;
|-&lt;br /&gt;
|4 cores + 1 GPU&lt;br /&gt;
|4&lt;br /&gt;
|1&lt;br /&gt;
|$0.293/hour&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|1  &lt;br /&gt;
|$0.334/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 1 GPU&lt;br /&gt;
|32&lt;br /&gt;
|1  &lt;br /&gt;
|$0.666/hour&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU &lt;br /&gt;
|32&lt;br /&gt;
|2&lt;br /&gt;
|$0.756/hour&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
c.  &#039;&#039;&#039;Leasing node(s) (LNP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The leasing node plan allows users to lease node(s) for the duration of a project. The minimum lease time is 30 days (one month), but leases of any length are possible. A discount of 10% is given to users whose lease is longer than 90 days; discounts cannot be combined. Unlike MAP and CODP users, LNP users do not compete for resources and have full 24/7 access to the leased resources.   &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Lease node(s) fees for MAP users&lt;br /&gt;
|Job (MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/30 days&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$172.80&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$264.96&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 2 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2  &lt;br /&gt;
|$302.40&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$475.20&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40&lt;br /&gt;
|8&lt;br /&gt;
|$760.0&lt;br /&gt;
|-&lt;br /&gt;
|64 cores + 8 GPU&lt;br /&gt;
|64&lt;br /&gt;
|8&lt;br /&gt;
|$950.40&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Fees for leasing node(s) for &amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; non &amp;lt;/span&amp;gt;-MAP users&lt;br /&gt;
|Job (non-MAP users)&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/month&lt;br /&gt;
|-&lt;br /&gt;
|1 core no GPU&lt;br /&gt;
|1&lt;br /&gt;
|0&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|16 cores no GPU&lt;br /&gt;
|16&lt;br /&gt;
|0&lt;br /&gt;
|$249.82&lt;br /&gt;
|-&lt;br /&gt;
|32 cores no GPU&lt;br /&gt;
|32&lt;br /&gt;
|0&lt;br /&gt;
|$497.64&lt;br /&gt;
|-&lt;br /&gt;
|16 cores + 1 GPU&lt;br /&gt;
|16&lt;br /&gt;
|2 &lt;br /&gt;
|$443.23&lt;br /&gt;
|-&lt;br /&gt;
|32 cores + 2 GPU&lt;br /&gt;
|32&lt;br /&gt;
|2 &lt;br /&gt;
|$886.64&lt;br /&gt;
|-&lt;br /&gt;
|40 cores + 8 GPU&lt;br /&gt;
|40 &lt;br /&gt;
|8&lt;br /&gt;
|$1399.68&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
d.     &#039;&#039;&#039;Condo Ownership (COP)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Condo describes a model in which user(s) own a node/server managed by HPCC. Only full-time faculty can own a condo node. Condo nodes are fully integrated into the HPCC infrastructure. The owners pay only the HPCC infrastructure support operational fee, which includes only a proportional part of the licenses and materials needed for day-to-day operations. The fees are reviewed twice a year and are currently $0.003 per CPU hour and $0.02 per GPU hour. Condo owners can “borrow” (upon agreement) free of charge any node(s) from the condo stack, and can also lease (for a higher fee, see below) their own nodes to non-condo users. The minimum lease time is 30 days. The fees collected from non-condo users offset the payments of the owner.   &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Condo owners costs per year&lt;br /&gt;
|Type of condo node&lt;br /&gt;
|Cpu cores&lt;br /&gt;
|GPU&lt;br /&gt;
|Cost/year&lt;br /&gt;
|-&lt;br /&gt;
|Large hybrid SXM&lt;br /&gt;
|128&lt;br /&gt;
|8&lt;br /&gt;
|$4518.92 &lt;br /&gt;
|-&lt;br /&gt;
|Small hybrid&lt;br /&gt;
|48&lt;br /&gt;
|2&lt;br /&gt;
|$1540.54&lt;br /&gt;
|-&lt;br /&gt;
|Medium compute&lt;br /&gt;
|96&lt;br /&gt;
|0&lt;br /&gt;
|$2464.86&lt;br /&gt;
|-&lt;br /&gt;
|Large compute&lt;br /&gt;
|128&lt;br /&gt;
|0&lt;br /&gt;
|$3286.49&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Condo owners can lease their node(s) to other non-condo users. The renting period is unlimited, with a minimum length of 30 days. The table below shows the payments that non-condo users make to the condo owners. These fees are accumulated in the owners’ account(s) and offset the owners’ obligations. A discount of 10% is applied to leases longer than 90 days.    &lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+Type of nodes and lease fees for condo nodes&lt;br /&gt;
!Type of node&lt;br /&gt;
!Renters cost/month&lt;br /&gt;
!Long term (90+ days) rent cost/month&lt;br /&gt;
!CPU/node&lt;br /&gt;
!CPU type&lt;br /&gt;
!GPU/node &lt;br /&gt;
!GPU type&lt;br /&gt;
!GPU interface&lt;br /&gt;
|-&lt;br /&gt;
|Large Hybrid&lt;br /&gt;
|$602.52&lt;br /&gt;
|$564.86&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.2 GHz&lt;br /&gt;
|8&lt;br /&gt;
|A100/80&lt;br /&gt;
|SXM&lt;br /&gt;
|-&lt;br /&gt;
|Small Hybrid&lt;br /&gt;
|$205.41&lt;br /&gt;
|$192.57&lt;br /&gt;
|48&lt;br /&gt;
|EPYC, 2.8 GHz&lt;br /&gt;
|2&lt;br /&gt;
|A40, A30, L40&lt;br /&gt;
|PCIe v4&lt;br /&gt;
|-&lt;br /&gt;
|Medium Non GPU&lt;br /&gt;
|$328.65&lt;br /&gt;
|$308.11&lt;br /&gt;
|96&lt;br /&gt;
|EPYC, 4.11GHz&lt;br /&gt;
|0&lt;br /&gt;
|None&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|Large Non GPU&lt;br /&gt;
|$438.20&lt;br /&gt;
|$410.81&lt;br /&gt;
|128&lt;br /&gt;
|EPYC, 2.0 GHz&lt;br /&gt;
|0&lt;br /&gt;
|None &lt;br /&gt;
|NA&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Free time ===&lt;br /&gt;
Any new project &#039;&#039;&#039;from colleges/centers that participate in MAP-B or MAP-C&#039;&#039;&#039; is entitled to &#039;&#039;&#039;11520 free CPU hours and 1440 free GPU hours.&#039;&#039;&#039;  Users under &#039;&#039;&#039;MAP-A&#039;&#039;&#039; are not entitled to free time. The free compute hours are intended to help establish a project and are therefore shared by all members of the project; they can be used by the PI or by any number of project members. Note that &amp;lt;u&amp;gt;free time is granted per project, not per user account, so each project can receive free time only once. External collaborators to CUNY are normally not eligible for free time.&amp;lt;/u&amp;gt; Hours used beyond the free allocation are charged at the MAP plan rates. &#039;&#039;&#039;&amp;lt;u&amp;gt;Please contact the CUNY-HPCC director for further details.&amp;lt;/u&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Support for research grants ==&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;u&amp;gt;All proposals dated January 1st, 2026 (&amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt; 01/01/26 &amp;lt;/span&amp;gt;) or later&amp;lt;/u&amp;gt;&#039;&#039;&#039; that require computational resources &#039;&#039;&#039;&amp;lt;u&amp;gt;must include a budget for cost recovery fees at CUNY-HPCC.&amp;lt;/u&amp;gt;&#039;&#039;&#039;  For a project the PI can choose one of the following options: &lt;br /&gt;
&lt;br /&gt;
* Lease node(s). This is a useful option for well-defined projects and for projects with a large computational component that requires 100% availability of the resource. &lt;br /&gt;
* Use &amp;quot;on-demand&amp;quot; resources. This is a flexible option well suited to experimental projects or to exploring new areas of study. The drawback is that resources are shared among all users under the fair share policy, so immediate access to a resource cannot be guaranteed. &lt;br /&gt;
* Participate in the CONDO tier. This is the most beneficial option in terms of resource availability and level of support. It best fits the focused research of a group or groups (e.g. materials science). &lt;br /&gt;
&lt;br /&gt;
In all cases the PI can use the rates listed above to establish a correct budget for the proposal. The PI should &#039;&#039;&#039;&amp;lt;u&amp;gt;contact the Director of CUNY-HPCC, Dr. Alexander Tzanov&amp;lt;/u&amp;gt;&#039;&#039;&#039; (alexander.tzanov@csi.cuny.edu) to discuss the project&#039;s computational requirements, including the most economical computational workflows, suitable hardware, shared versus dedicated resources, CUNY-HPCC support options, and any other matter concerning a correct and optimal computational budget for the proposal. &lt;br /&gt;
&lt;br /&gt;
== Partitions and jobs ==&lt;br /&gt;
The only way to submit job(s) to HPCC servers is through the SLURM batch system. Every job, regardless of its type (interactive, batch, serial, parallel, etc.), must be submitted via SLURM, which allocates the requested resources on the proper server and starts the job(s) according to a predefined, strict fair share policy. Computational resources (CPU cores, memory, GPU) are organized into &#039;&#039;&#039;partitions&#039;&#039;&#039;. Users are granted permission to use one or another partition together with the corresponding QOS key. The table below shows the limitations of the partitions (in progress).&lt;br /&gt;
{| class=&amp;quot;wikitable mw-collapsible&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!Partition&lt;br /&gt;
!Max cores/job&lt;br /&gt;
!Max jobs/user&lt;br /&gt;
!Total cores/group&lt;br /&gt;
!Time limits&lt;br /&gt;
!Tier&lt;br /&gt;
!&lt;br /&gt;
!GPU types&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|partnsf&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|K20m, V100/16, A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partchem&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A100/80, A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partcfd&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partsym&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partasrc&lt;br /&gt;
|48&lt;br /&gt;
|16&lt;br /&gt;
|16&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|A30&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabD&lt;br /&gt;
|128&lt;br /&gt;
|50&lt;br /&gt;
|256&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|V100/16,A100/40&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partmatlabN&lt;br /&gt;
|384&lt;br /&gt;
|50&lt;br /&gt;
|384&lt;br /&gt;
|240 Hours&lt;br /&gt;
|Advanced&lt;br /&gt;
|&lt;br /&gt;
|None&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|partphys&lt;br /&gt;
|96&lt;br /&gt;
|50&lt;br /&gt;
|96&lt;br /&gt;
|No limit&lt;br /&gt;
|Condo&lt;br /&gt;
|&lt;br /&gt;
|L40&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;partnsf&#039;&#039;&#039; is the main partition, with resources assigned across all sub-servers. Users may submit sequential, thread-parallel or distributed-parallel jobs with or without GPUs.&lt;br /&gt;
* &#039;&#039;&#039;partchem&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partphys&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partsym&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* &#039;&#039;&#039;partasrc&#039;&#039;&#039; is a CONDO partition.&lt;br /&gt;
* The &#039;&#039;&#039;partmatlabD&#039;&#039;&#039; partition allows running MATLAB&#039;s Distributed Parallel Server across the main cluster.&lt;br /&gt;
* The &#039;&#039;&#039;partmatlabN&#039;&#039;&#039; partition provides access to the large MATLAB node with 384 cores and 11 TB of shared memory. It is useful for running parallel MATLAB jobs with the MATLAB Parallel Toolbox.&lt;br /&gt;
* &#039;&#039;&#039;partdev&#039;&#039;&#039; is dedicated to development. All HPCC users have access to this partition, which has the assigned resources of one computational node with 16 cores, 64 GB of memory and 2 GPUs (K20m). This partition has a time limit of 4 hours.&lt;br /&gt;
&lt;br /&gt;
== Hours of Operation ==&lt;br /&gt;
In order to maximize the use of resources, HPCC applies a “rolling” maintenance scheme across all systems. When downtime is needed, HPCC will notify all users a week or more in advance (unless an emergency occurs). The fourth Tuesday morning of each month, from 8:00 AM to 12:00 PM, is normally reserved (but not always used) for scheduled maintenance, so please plan accordingly. Unplanned maintenance to remedy system-related problems may be scheduled as needed outside the above-mentioned days; reasonable attempts will be made to inform users running on the affected systems when these needs arise. Note that users are strongly encouraged to use checkpoints in their jobs.&lt;br /&gt;
&lt;br /&gt;
== User Support ==&lt;br /&gt;
Users are strongly encouraged to read this Wiki carefully before submitting ticket(s) for help. In particular, the sections on compiling and running parallel programs, and the section on the SLURM batch queueing system will give you the essential knowledge needed to use the CUNY HPCC systems.  We have strived to maintain the most uniform user applications environment possible across the Center&#039;s systems to ease the transfer of applications&lt;br /&gt;
and run scripts among them.  &lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY&lt;br /&gt;
community in parallel programming techniques, HPC computing architecture, and the essentials of using our&lt;br /&gt;
systems. Please follow our mailings on the subject and feel free to inquire about such courses. We regularly schedule training visits and classes at the various CUNY campuses; please let us know if such a training visit is of interest. In the past, topics have included an overview of parallel programming, GPU programming and architecture, using the evolutionary biology software at the HPC Center, the SLURM queueing system at the CUNY HPC Center, mixed GPU-MPI and OpenMP programming, etc. Staff members have also presented guest lectures in formal classes throughout the CUNY campuses.    &lt;br /&gt;
&lt;br /&gt;
If you have problems accessing your account and cannot login to the ticketing service, please send an email to:&lt;br /&gt;
&lt;br /&gt;
  [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu] &lt;br /&gt;
&lt;br /&gt;
== Warnings and modes of operation ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. hpchelp@csi.cuny.edu is for questions and account-help communication &#039;&#039;&#039;only&#039;&#039;&#039; and does not accept tickets unless the ticketing system is not operational. For tickets, please use the ticketing system mentioned above. This ensures that the staff member with the most appropriate skill set and job-related responsibility will respond to your questions. During the business week you should expect a response within 48 hours, often the same day. During the weekend you may not get a response. &lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;&#039;E-mails to hpchelp@csi.cuny.edu must have a valid CUNY e-mail as the reply address.&#039;&#039;&#039; Messages originating from public mailers (Gmail, Hotmail, etc.) are filtered out.&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;&#039;Do not send questions to individual CUNY HPC Center staff members directly.&#039;&#039;&#039;  These will be returned to the sender with a polite request to submit a ticket or email the Helpline.  This applies to replies to initial questions as well.&lt;br /&gt;
&lt;br /&gt;
The CUNY HPC Center staff members are focused on providing high quality support to its user community, but compared&lt;br /&gt;
to other HPC Centers of similar size &#039;&#039;&#039;our staff is extremely lean&#039;&#039;&#039;.  Please make full use of the tools that we have provided (especially&lt;br /&gt;
the Wiki), and feel free to offer suggestions for improved service.  We hope and expect your experience in using&lt;br /&gt;
our systems will be predictably good and productive.&lt;br /&gt;
&lt;br /&gt;
== User Manual ==&lt;br /&gt;
&lt;br /&gt;
The old version of the user manual provides PBS, not SLURM, batch scripts as examples. CUNY-HPCC currently uses the SLURM scheduler, so users must rely only on the updated brief SLURM manual distributed with new accounts, or ask CUNY-HPCC for a copy of it.&lt;/div&gt;</summary>
		<author><name>Alex</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Administrative_Information&amp;diff=955</id>
		<title>Administrative Information</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Administrative_Information&amp;diff=955"/>
		<updated>2026-03-05T19:07:33Z</updated>

		<summary type="html">&lt;p&gt;Alex: /* User accounts policies */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
==How to get an account==&lt;br /&gt;
&lt;br /&gt;
=== Definitions and procedures ===&lt;br /&gt;
CUNY-HPCC operates on a cost recovery scheme, which requires all accounts to be associated with research project(s) or to be class accounts. Research accounts are sponsored by a Principal Investigator (PI). A &#039;&#039;&#039;Principal Investigator (PI) at CUNY is defined as the lead researcher responsible for the design, execution, and management of a research project, ensuring compliance with regulations and overseeing the project&#039;s financial aspects. A PI is a &amp;lt;u&amp;gt;faculty member or a qualified researcher&amp;lt;/u&amp;gt; who has the authority to apply for research funding and manage the project.&#039;&#039;&#039;  The procedure to open an account is as follows: &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 1.&#039;&#039;&#039;  Creation of the sponsor account (PI account) - &#039;&#039;&#039;form A or B below.&#039;&#039;&#039; At this step the PI must create an account for himself/herself and provide information about the project title, funding and duration. Requesting resources at this step is not mandatory.  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 2.&#039;&#039;&#039;  Upon creating the account, the PI will get a unique code which has to be shared with the members of the group (students and postdocs) who require an account on HPCC.  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 3.&#039;&#039;&#039;  Members of the research group (lab) and academic collaborators can apply for an account at CUNY-HPCC by using form C, D, E or F. It is mandatory to use the code mentioned in Step 2 (from the CUNY PI) in these forms.   &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 4.&#039;&#039;&#039; The PI should assign students to his/her project.   &lt;br /&gt;
&lt;br /&gt;
===Accounts overview===&lt;br /&gt;
All users of HPCC resources must register with HPCC for one of the account types described in the table below. A user account is issued to an &#039;&#039;&#039;individual user&#039;&#039;&#039;; accounts are &#039;&#039;&#039;not to be shared&#039;&#039;&#039;. HPCC &#039;&#039;&#039;&amp;lt;u&amp;gt;will communicate only via CUNY e-mails with users from groups A to E.&amp;lt;/u&amp;gt;&#039;&#039;&#039; HPCC will communicate with users holding &#039;&#039;&#039;F&#039;&#039;&#039; and &#039;&#039;&#039;G&#039;&#039;&#039; account types via the user&#039;s verified work account, CC to the CUNY collaborator (for F only). In addition, if resources are available and at the discretion of the CUNY-HPCC director, researchers external to CUNY can obtain an external research account (type G) at CUNY-HPCC by renting HPC resources and paying the full cost recovery fee in advance. Please contact the HPCC director for details. &lt;br /&gt;
{| class=&amp;quot;wikitable sortable mw-collapsible&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!User accounts for:&lt;br /&gt;
!Type&lt;br /&gt;
!Renewal schedule&lt;br /&gt;
!Renewal cycles&lt;br /&gt;
!Conditions for non-renewed accounts&lt;br /&gt;
!Mandatory conditions&lt;br /&gt;
|-&lt;br /&gt;
|Faculty, Research Staff&lt;br /&gt;
|A&lt;br /&gt;
|Renews every year at the beginning of Fall semester&lt;br /&gt;
|Unlimited&lt;br /&gt;
|Non renewed accounts are disabled in 15 days from renewal date, data in home directory is removed after 90 days. Backup data  have rollover time of 30 days. &lt;br /&gt;
|CUNY EID, Valid CUNY e-mail&lt;br /&gt;
|-&lt;br /&gt;
|Adjunct Faculty &lt;br /&gt;
|B&lt;br /&gt;
|Renews every semester (Fall/Spring) &lt;br /&gt;
|Unlimited&lt;br /&gt;
|Non renewed accounts are disabled in 15 days from renewal date, data in home directory is removed after 90 days. Backup data  have rollover time of 30 days.  &lt;br /&gt;
|CUNY EID, Valid CUNY e-mail&lt;br /&gt;
|-&lt;br /&gt;
|Doctoral Graduate Students&lt;br /&gt;
|C&lt;br /&gt;
|Renews every year at the beginning of the Fall semester&lt;br /&gt;
|14&lt;br /&gt;
|Non renewed accounts are disabled in 15 days from renewal date, data in home directory is removed after 90 days. Backup data  have rollover time of 30 days.  &lt;br /&gt;
|CUNY EID, Valid CUNY e-mail&lt;br /&gt;
|-&lt;br /&gt;
|Master Students&lt;br /&gt;
|D&lt;br /&gt;
|Renews every semester (Fall/Spring)&lt;br /&gt;
|8&lt;br /&gt;
|Non renewed accounts are disabled in 15 days from renewal date, data in home directory is removed after 90 days. Backup data  have rollover time of 30 days.&lt;br /&gt;
|CUNY EID, Valid CUNY e-mail&lt;br /&gt;
|-&lt;br /&gt;
|Undergraduate Students&lt;br /&gt;
|E&lt;br /&gt;
|Renews every semester (Fall/Spring)&lt;br /&gt;
|8&lt;br /&gt;
|Non renewed accounts are disabled in 7 days from renewal date, data and home directory are removed after 15 days. No backup of any data. &lt;br /&gt;
|CUNY EID, Valid CUNY e-mail&lt;br /&gt;
|-&lt;br /&gt;
|Academic Collaborators&lt;br /&gt;
|F&lt;br /&gt;
|Renews once a year (Fall) for the duration of a project&lt;br /&gt;
|Unlimited&lt;br /&gt;
|Non renewed accounts are disabled in 15 days from renewal date, data in home directory is removed after 90 days. Backup data  have rollover time of 30 days.  &lt;br /&gt;
|other institution EID, work e-mail and valid CUNY collaborator e-mail&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Public and Private Sector Partners&lt;br /&gt;
|G&lt;br /&gt;
|No Renewal. Good only for the duration of contract. &lt;br /&gt;
|NA&lt;br /&gt;
|Account expires at the date of expiring of the contract. &lt;br /&gt;
|state/federal ID, verified work e-mail. &#039;&#039;&#039;Advance payment of the full cost for the rented resource.&#039;&#039;&#039;&lt;br /&gt;
|}  &lt;br /&gt;
&lt;br /&gt;
Users who missed renewal by less than 90 days should contact HPCC via e-mail at &#039;&#039;&#039;hpchelp@csi.cuny.edu&#039;&#039;&#039; for account recovery. All users must inform HPCC of changes in their academic status. It is mandatory to specify information (or NA) for all points in the list below. Please do not forget to provide information about past and pending &#039;&#039;&#039;&amp;lt;u&amp;gt;publications&amp;lt;/u&amp;gt;&#039;&#039;&#039; and funded projects, and &amp;lt;u&amp;gt;information about your locally available resources (local servers and workstations/desktops only).&amp;lt;/u&amp;gt;  Think carefully about the resources needed and try to estimate them as accurately as possible. Note that &#039;&#039;&#039;by applying for and obtaining an account, the user agrees to the HPCC End User Policy (EUP) and the Mandatory Security Requirements for Access (MSRA).&#039;&#039;&#039;&lt;br /&gt;
{| class=&amp;quot;wikitable sortable mw-collapsible&amp;quot;&lt;br /&gt;
|+Required Information for opening of  HPCC account. Please provide information in all fields and/or mark NA when needed. &lt;br /&gt;
!&lt;br /&gt;
! rowspan=&amp;quot;26&amp;quot; |&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;&amp;lt;big&amp;gt;For All CUNY Faculty, Staff And Graduate Students&amp;lt;/big&amp;gt;&#039;&#039;&#039;  &#039;&#039;&#039;&amp;lt;big&amp;gt;(A ,B,C,D)&amp;lt;/big&amp;gt;&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Full name as stated at the CUNY ID card (e.g. John Samuel Doe):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;CUNY EID and valid CUNY e-mail (e.g. John A. Smith  22341356 jsmith@csi.cuny.edu):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|CUNY &#039;&#039;Academic status ( faculty, adjunct faculty, PhD student, MS student, research staff):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;Primary&#039;&#039;&#039; Affiliation within CUNY - campus name and Department ( e.g Hunter College, Biology):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;Secondary&#039;&#039;&#039; CUNY affiliation if any. Provide campus name and Department (e.g. Graduate Center, Biology):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Name, department and college affiliation of PI/Advisor (e.g. John Smith, Biology, Hunter College):&lt;br /&gt;
|-&lt;br /&gt;
|If out of College of Staten Island provide description of  local resources available. &lt;br /&gt;
:Please state NONE if you do not have access to local computational resources. Otherwise provide:&lt;br /&gt;
::College (e.g. Hunter)&lt;br /&gt;
::Type of resource (e.g. Department cluster):&lt;br /&gt;
:::&#039;&#039;- number of nodes:&#039;&#039;&lt;br /&gt;
:::&#039;&#039;- number of cores per node:&#039;&#039;&lt;br /&gt;
:::&#039;&#039;-- memory per node:&#039;&#039;&lt;br /&gt;
:::&#039;&#039;- total number of GPU for server:&#039;&#039;&lt;br /&gt;
:::&#039;&#039;- number of GPU per node:&#039;&#039;&lt;br /&gt;
:::&#039;&#039;- type of GPU (list of all types, e.g. 2 x K20m, 4 x K80):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Resources needed for the project:&lt;br /&gt;
::- CPU Cores (e.g 1000):&lt;br /&gt;
::- GPU options (e.g 2 x V100/16 GB):&lt;br /&gt;
::- V100/16 GB -&lt;br /&gt;
::- V100/32 GB -  &lt;br /&gt;
::- L40/48 GB - &lt;br /&gt;
::- A30/24 GB -&lt;br /&gt;
::- A40/24 GB - &lt;br /&gt;
::- A100/40 GB - &lt;br /&gt;
::- A100/80 GB - &lt;br /&gt;
::- Storage Space (above 50 GB)&lt;br /&gt;
::- Backup of data (Y/N):&lt;br /&gt;
::- Archive of data (Y/N):&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;Title of the project:&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Short Description of the  project (up to 100 words): &lt;br /&gt;
|-&lt;br /&gt;
|Funding sources of the project (e.g. NSF grant #, CUNY):&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Consent that HPCC will be cited properly (see our wiki for details) in all your published work &amp;lt;u&amp;gt;including conference presentations, posters and talks.&amp;lt;/u&amp;gt;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Number of refereed publications relevant to the project:&lt;br /&gt;
|-&lt;br /&gt;
|Pending publication relevant to the project: &lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;big&amp;gt;&#039;&#039;&#039;&#039;&#039;For All External (not CUNY) Project Collaborators and Researchers (F,G)&#039;&#039;&#039;&#039;&#039;&amp;lt;/big&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Full name as stated at the state/federal ID or EID from other  Academic Institution (e.g. John Samuel Doe):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Affiliation outside CUNY if any(e.g. Rutgers University) and valid professional e-mail: ( e.g. John Doe, Rutgers University, jd@rutgers.edu):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Department at NON CUNY  Academic Institution (e.g. MIS Rutgers):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Non CUNY-email (collaborator/external contact):&lt;br /&gt;
|-&lt;br /&gt;
|Status of the collaborator (Academic: e.g. Professor; Partner: e.g. NVIDIA lab):&lt;br /&gt;
|-&lt;br /&gt;
|Status of the external researcher(s) (e.g. principal architect NVIDIA):&lt;br /&gt;
|-&lt;br /&gt;
|Resources needed (example: cores 100; time 10,000 hours; memory per core 8 GB; GPUs 2; GPU hours 100; storage 100 GB):&lt;br /&gt;
|-&lt;br /&gt;
|Description of available local resources. Please state NONE if you do not have access to local computational resources. Otherwise provide:&lt;br /&gt;
&#039;&#039;type of computational resource (cluster, advanced workstation):&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;- number of nodes:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;- number of cores per node:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;-- memory per node:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;- total number of GPU for server:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;- number of GPU per node:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;- type of GPU (list of all types, e.g. 2 x K20m, 4 x K80):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Consent that HPCC will be cited properly (see our wiki for details) in all your published work &amp;lt;u&amp;gt;including conferences and talks.&amp;lt;/u&amp;gt;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&amp;lt;big&amp;gt;&#039;&#039;&#039;For All CUNY Graduate and Undergraduate Classes (E)&#039;&#039;&#039;&amp;lt;/big&amp;gt;&#039;&#039;&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Full name as stated at the CUNY ID card (e.g. John Samuel Doe):&#039;&#039;&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;CUNY EID and valid CUNY e-mail (e.g. 22341356):&#039;&#039;&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;Valid CUNY e-mail.&#039;&#039;&#039; Public emails are not accepted (e.g. azho@cix.csi.cuny.edu):  &lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|Class ID (e.g. CS 220):&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|Class Section (e.g. 02):&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|College (e.g. Baruch College):&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|Name of the Professor:&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|Term (e.g. Fall 2025): &lt;br /&gt;
!&lt;br /&gt;
|}&lt;br /&gt;
Upon creation, every research user account is provided with a 50 GB home directory (with a maximum of 10000 files on /global/u) mounted as &amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;. If required, a user may request an increase in the size of their home directory; the HPC Center will endeavor to satisfy reasonable requests. If you expect to have more than 10000 files, please combine small files into a single larger archive, as in the example below. Please keep only wrangled information in your space in order to optimize the use of the existing storage. &lt;br /&gt;
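&lt;br /&gt;
For example, a large set of small result files can be packed into a single compressed archive before it is kept in the home directory (the file and directory names below are placeholders):&lt;br /&gt;
 # pack a directory containing many small files into one compressed archive&lt;br /&gt;
 tar czf results_run01.tar.gz results_run01/&lt;br /&gt;
 &lt;br /&gt;
 # later, unpack the archive into the scratch space when the files are needed again&lt;br /&gt;
 tar xzf results_run01.tar.gz -C /scratch/$USER/&lt;br /&gt;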
&lt;br /&gt;
Student class accounts (group D) are provided with a 10 GB home directory. Please note that class accounts and data will be deleted 30 days after the semester ends (unless otherwise agreed upon). Students are responsible for backing up their own data prior to the end of the semester.&lt;br /&gt;
 &lt;br /&gt;
When a user account is established, only the user has read/write access to his or her files. The user can change the UNIX permissions to allow other members of the same group to read/write those files.&lt;br /&gt;
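&lt;br /&gt;
For example, the following commands grant read/write access to the other members of the user&#039;s UNIX group; the file and directory names are hypothetical and only illustrate the idea:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# allow the group to read and write a single file&lt;br /&gt;
chmod g+rw results.dat&lt;br /&gt;
&lt;br /&gt;
# allow the group to read, write and traverse an entire directory tree&lt;br /&gt;
chmod -R g+rwX shared_data/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;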
&lt;br /&gt;
Please be sure to notify the HPC Center if user accounts need to be removed from or added to a specific research group. Please read the account policies below. Note that accounts do not last indefinitely; accounts that are not accessed or not active are removed (see below).&lt;br /&gt;
&lt;br /&gt;
=== User accounts policies ===&lt;br /&gt;
CUNY HPCC applies strict security standards in user account management. HPCC uses “account periods”. The account period is &#039;&#039;&#039;one year&#039;&#039;&#039; for accounts of types &#039;&#039;&#039;A, C, E and F&#039;&#039;&#039; and &#039;&#039;&#039;one semester&#039;&#039;&#039; for types &#039;&#039;&#039;B&#039;&#039;&#039; and &#039;&#039;&#039;D&#039;&#039;&#039;. All accounts are periodically reviewed, and inactive accounts are removed. All student accounts expire automatically and are removed after each semester unless the student’s advisor requests an extension of the student’s account. All user accounts in groups &#039;&#039;&#039;A, C, E and F&#039;&#039;&#039; must be renewed once a year by September 30th. All user accounts in groups &#039;&#039;&#039;B&#039;&#039;&#039; and &#039;&#039;&#039;D&#039;&#039;&#039; must be renewed within 2 weeks after each semester. Accounts not accessed for one account period and/or not renewed are automatically disabled/locked and will be deleted 60 days after locking. Deletion of an account means unrecoverable removal of all data associated with that account.&lt;br /&gt;
&lt;br /&gt;
===Reset Password ===&lt;br /&gt;
&lt;br /&gt;
Users must use the automatic password reset system: click on [https://hpcauth1.csi.cuny.edu/reset/ Reset Password]. Upon resetting, users will receive their individual security token at the e-mail address registered with HPCC.&lt;br /&gt;
&lt;br /&gt;
===Close of account===&lt;br /&gt;
If a user would like to close their account, please contact the CUNY HPC Center at HPCHelp@csi.cuny.edu.&lt;br /&gt;
Supervisors who would like to modify the access of researchers and/or students working for them should contact the HPC Center to remove, add or modify access.&lt;br /&gt;
User accounts that are not accessed or renewed for more than a year and one day will be purged along with any data associated with the account. User accounts that are not renewed on time will be locked, and users must contact HPCC to have access restored.&lt;br /&gt;
&lt;br /&gt;
=== Message of the day (MOTD) ===&lt;br /&gt;
Users are encouraged to read the “Message of the day” (MOTD), which is displayed upon logging onto a system. The MOTD provides information on scheduled maintenance windows, when systems will be unavailable, and/or important changes in the environment that are of interest to the user community. The MOTD is the HPC Center’s only reliable mechanism for communicating with the broader user community, as bulk e-mail messages are often blocked by CUNY spam filters.&lt;br /&gt;
&lt;br /&gt;
===   Required citations ===&lt;br /&gt;
The CUNY HPC Center appreciates the support it has received from the National Science Foundation (NSF).  It is the policy of NSF that researchers who are funded by NSF or who make use of facilities funded by NSF acknowledge the contribution of NSF by including the following citation in their papers and presentations:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;This research was supported, in part, under National Science Foundation Grants: CNS-0958379, CNS-0855217, ACI-1126113 and OAC-2215760 (2022) and the City University of New York High Performance Computing Center at the College of Staten Island.&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The HPC Center, therefore, requests its users to follow this procedure as it helps the Center to demonstrate that NSF’s investments aided the research and educational missions of the University.&lt;br /&gt;
&lt;br /&gt;
== Reporting requirements ==&lt;br /&gt;
The Center reports on its support of the research and educational community to both funding agencies and CUNY on an annual basis. Citations are an important factor included in these reports. Therefore, it is mandatory for users to send copies of research papers developed, in part, using HPC Center resources to [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu]. Accounts of users who violate this requirement may not be renewed. Reporting results obtained with HPC resources also helps the Center to keep abreast of users’ research directions and needs.&lt;br /&gt;
&lt;br /&gt;
== Funding of computational resources and storage ==&lt;br /&gt;
Systems at the HPC Center are purchased with grants from the National Science Foundation (NSF), grants from NYC, a grant from DASNY and a grant from CUNY&#039;s Office of the CIO. In addition, all systems in the condo tier are purchased with direct funds from research groups. The largest financial support comes from &#039;&#039;&#039;NSF MRI grants (more than 80% of all funding).&#039;&#039;&#039; CUNY&#039;s own investment constitutes &#039;&#039;&#039;8.6%&#039;&#039;&#039; of all funds. Here is the list of all grants for CUNY-HPCC.&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;PFSS and GPU Nodes:&#039;&#039;&#039; NSF Grant OAC-2215760 (operational) &lt;br /&gt;
:&#039;&#039;&#039;DSMS&#039;&#039;&#039;, NSF Grant ACI-1126113 (server is partially retired)&lt;br /&gt;
:&#039;&#039;&#039;BLUE MOON&#039;&#039;&#039;, Grant NYC 042-ST030-015 (operational)&lt;br /&gt;
:&#039;&#039;&#039;CRYO&#039;&#039;&#039;, Grant DASNY 208684-000 OP (operational)&lt;br /&gt;
:&#039;&#039;&#039;ANDY&#039;&#039;&#039;, NSF Grant CNS-0855217 and the New York City Council through the efforts of Borough President James Oddo (server is fully retired)&lt;br /&gt;
:&#039;&#039;&#039;APPEL&#039;&#039;&#039;, New York State Regional Economic Development Grant through the efforts of State Senator Diane Savino (operational)&lt;br /&gt;
:&#039;&#039;&#039;PENZIAS&#039;&#039;&#039;, the Office of the CUNY Chief Information Officer (server is partially retired)&lt;br /&gt;
:&#039;&#039;&#039;SALK&#039;&#039;&#039;, NSF Grant CNS-0958379 and a New York State Regional Economic Development Grant (server is fully retired)&lt;/div&gt;</summary>
		<author><name>Alex</name></author>
	</entry>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Administrative_Information&amp;diff=954</id>
		<title>Administrative Information</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=Administrative_Information&amp;diff=954"/>
		<updated>2026-03-05T19:04:19Z</updated>

		<summary type="html">&lt;p&gt;Alex: /* Definitions and procedures */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
==How to get an account==&lt;br /&gt;
&lt;br /&gt;
=== Definitions and procedures ===&lt;br /&gt;
CUNY-HPCC operates on a cost-recovery scheme, which requires all accounts to be associated with research project(s) or to be class accounts. Research accounts are sponsored by a Principal Investigator (PI). A &#039;&#039;&#039;Principal Investigator (PI) at CUNY is the lead researcher responsible for the design, execution, and management of a research project, ensuring compliance with regulations and overseeing the project&#039;s financial aspects. The PI is a faculty member or a qualified researcher who has the authority to apply for research funding and manage the project.&#039;&#039;&#039; The procedure to open an account is as follows:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 1.&#039;&#039;&#039; Creation of the sponsor account (PI account) - &#039;&#039;&#039;form A or B below.&#039;&#039;&#039; At this step the PI creates an account for himself/herself and provides the project title, funding and duration. Requesting resources at this step is not mandatory.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 2.&#039;&#039;&#039; Upon creating the account, the PI will receive a unique code, which has to be shared with the members of the group (students and postdocs) who require accounts on HPCC.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 3.&#039;&#039;&#039; Members of the research group (lab) and academic collaborators can apply for an account at CUNY-HPCC by using form C, D, E or F. It is mandatory to include the code mentioned in Step 2 (from the CUNY PI) in these forms.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 4.&#039;&#039;&#039; The PI should assign the students to his/her project.&lt;br /&gt;
&lt;br /&gt;
===Accounts overview===&lt;br /&gt;
All users of HPCC resources must register with HPCC for one of the account types described in the table below. A user account is issued to an &#039;&#039;&#039;&#039;&#039;individual user&#039;&#039;&#039;&#039;&#039;. Accounts are &#039;&#039;&#039;not to be shared&#039;&#039;&#039;. HPCC &#039;&#039;&#039;&amp;lt;u&amp;gt;will communicate only via CUNY e-mails with users from groups A to E.&amp;lt;/u&amp;gt;&#039;&#039;&#039; HPCC will communicate with users holding F and G account types via the users&#039; verified work e-mail, with a CC to the CUNY collaborator (for F only). In addition, if resources are available and at the discretion of the CUNY-HPCC director, researchers external to CUNY can obtain an external research account (type G) at CUNY-HPCC by renting HPC resources and paying the full cost-recovery fee in advance. Please contact the HPCC director for details.&lt;br /&gt;
{| class=&amp;quot;wikitable sortable mw-collapsible&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!User accounts for:&lt;br /&gt;
!Type&lt;br /&gt;
!Renewal schedule&lt;br /&gt;
!Renewal cycles&lt;br /&gt;
!Expiration conditions&lt;br /&gt;
!Mandatory requirements&lt;br /&gt;
|-&lt;br /&gt;
|Faculty, Research Staff&lt;br /&gt;
|A&lt;br /&gt;
|Renews every year at the beginning of Fall semester&lt;br /&gt;
|Unlimited&lt;br /&gt;
|Non-renewed accounts are disabled 15 days after the renewal date; data in the home directory is removed after 90 days. Backup data has a rollover time of 30 days.&lt;br /&gt;
|CUNY EID, Valid CUNY e-mail&lt;br /&gt;
|-&lt;br /&gt;
|Adjunct Faculty &lt;br /&gt;
|B&lt;br /&gt;
|Renews every semester (Fall/Spring) &lt;br /&gt;
|Unlimited&lt;br /&gt;
|Non-renewed accounts are disabled 15 days after the renewal date; data in the home directory is removed after 90 days. Backup data has a rollover time of 30 days.&lt;br /&gt;
|CUNY EID, Valid CUNY e-mail&lt;br /&gt;
|-&lt;br /&gt;
|Doctoral Graduate Students&lt;br /&gt;
|C&lt;br /&gt;
|Renews every year at the beginning of the Fall semester&lt;br /&gt;
|14&lt;br /&gt;
|Non-renewed accounts are disabled 15 days after the renewal date; data in the home directory is removed after 90 days. Backup data has a rollover time of 30 days.&lt;br /&gt;
|CUNY EID, Valid CUNY e-mail&lt;br /&gt;
|-&lt;br /&gt;
|Master Students&lt;br /&gt;
|D&lt;br /&gt;
|Renews every semester (Fall/Spring)&lt;br /&gt;
|8&lt;br /&gt;
|Non-renewed accounts are disabled 15 days after the renewal date; data in the home directory is removed after 90 days. Backup data has a rollover time of 30 days.&lt;br /&gt;
|CUNY EID, Valid CUNY e-mail&lt;br /&gt;
|-&lt;br /&gt;
|Undergraduate Students&lt;br /&gt;
|E&lt;br /&gt;
|Renews every semester (Fall/Spring)&lt;br /&gt;
|8&lt;br /&gt;
|Non-renewed accounts are disabled 7 days after the renewal date; the home directory and its data are removed after 15 days. No data is backed up.&lt;br /&gt;
|CUNY EID, Valid CUNY e-mail&lt;br /&gt;
|-&lt;br /&gt;
|Academic Collaborators&lt;br /&gt;
|F&lt;br /&gt;
|Renews once a year (Fall) for the duration of a project&lt;br /&gt;
|Unlimited&lt;br /&gt;
|Non-renewed accounts are disabled 15 days after the renewal date; data in the home directory is removed after 90 days. Backup data has a rollover time of 30 days.&lt;br /&gt;
|other institution EID, work e-mail and valid CUNY collaborator e-mail&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Public and Private Sector Partners&lt;br /&gt;
|G&lt;br /&gt;
|No renewal. Valid only for the duration of the contract.&lt;br /&gt;
|NA&lt;br /&gt;
|The account expires on the date the contract expires.&lt;br /&gt;
|State/federal ID, verified work e-mail. &#039;&#039;&#039;Advance payment of the full cost for the rented resources.&#039;&#039;&#039;&lt;br /&gt;
|}  &lt;br /&gt;
&lt;br /&gt;
Users who missed renewal by less than 90 days should contact HPCC via e-mail to &#039;&#039;&#039;hpchelp@csi.cuny.edu&#039;&#039;&#039; for account recovery. All users must inform HPCC of changes in their academic status. It is mandatory to specify information (or NA) on all points in the list below. Please do not forget to provide information about past and pending &#039;&#039;&#039;&amp;lt;u&amp;gt;publications&amp;lt;/u&amp;gt;&#039;&#039;&#039; and funded projects, and &amp;lt;u&amp;gt;information about your locally available resources (local servers and workstations/desktops only).&amp;lt;/u&amp;gt; Think carefully about the resources needed and try to estimate them as accurately as possible (for example, a job that runs on 100 cores for 100 hours consumes 10,000 core-hours). Note that &#039;&#039;&#039;by applying for and obtaining an account, the user agrees to the HPCC End User Policy (EUP) and Mandatory Security Requirements for Access (MSRA).&#039;&#039;&#039;&lt;br /&gt;
{| class=&amp;quot;wikitable sortable mw-collapsible&amp;quot;&lt;br /&gt;
|+Required Information for opening of  HPCC account. Please provide information in all fields and/or mark NA when needed. &lt;br /&gt;
!&lt;br /&gt;
! rowspan=&amp;quot;26&amp;quot; |&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;&amp;lt;big&amp;gt;For All CUNY Faculty, Staff And Graduate Students&amp;lt;/big&amp;gt;&#039;&#039;&#039; &#039;&#039;&#039;&amp;lt;big&amp;gt;(A, B, C, D)&amp;lt;/big&amp;gt;&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Full name as stated on the CUNY ID card (e.g. John Samuel Doe):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;CUNY EID and valid CUNY e-mail (e.g. John A. Smith, 22341356, jsmith@csi.cuny.edu):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|CUNY &#039;&#039;Academic status (faculty, adjunct faculty, PhD student, MS student, research staff):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;Primary&#039;&#039;&#039; Affiliation within CUNY - campus name and Department (e.g. Hunter College, Biology):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;Secondary&#039;&#039;&#039; CUNY affiliation if any. Provide campus name and Department (e.g. Graduate Center, Biology):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Name, department and college affiliation of PI/Advisor (e.g. John Smith, Biology, Hunter College):&lt;br /&gt;
|-&lt;br /&gt;
|If outside the College of Staten Island, provide a description of the local resources available.&lt;br /&gt;
:Please state NONE if you do not have access to local computational resources. Otherwise provide:&lt;br /&gt;
::College (e.g. Hunter):&lt;br /&gt;
::Type of resource (e.g. Department cluster):&lt;br /&gt;
:::&#039;&#039;- number of nodes:&#039;&#039;&lt;br /&gt;
:::&#039;&#039;- number of cores per node:&#039;&#039;&lt;br /&gt;
:::&#039;&#039;- memory per node:&#039;&#039;&lt;br /&gt;
:::&#039;&#039;- total number of GPUs in the server:&#039;&#039;&lt;br /&gt;
:::&#039;&#039;- number of GPUs per node:&#039;&#039;&lt;br /&gt;
:::&#039;&#039;- type of GPU (list all types, e.g. 2 x K20m, 4 x K80):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Resources needed for the project:&lt;br /&gt;
::- CPU Cores (e.g. 1000):&lt;br /&gt;
::- GPU options (e.g. 2 x V100/16 GB):&lt;br /&gt;
::- V100/16 GB -&lt;br /&gt;
::- V100/32 GB -  &lt;br /&gt;
::- L40/48 GB - &lt;br /&gt;
::- A30/24 GB -&lt;br /&gt;
::- A40/24 GB - &lt;br /&gt;
::- A100/40 GB - &lt;br /&gt;
::- A100/80 GB - &lt;br /&gt;
::- Storage Space (above 50 GB):&lt;br /&gt;
::- Backup of data (Y/N):&lt;br /&gt;
::- Archive of data (Y/N):&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;Title of the project:&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Short Description of the  project (up to 100 words): &lt;br /&gt;
|-&lt;br /&gt;
|Funding sources of the project (e.g. NSF grant #, CUNY):&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Consent that HPCC will be cited properly (see our wiki for details) in all your published work &amp;lt;u&amp;gt;including conference presentations, posters and talks.&amp;lt;/u&amp;gt;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Number of refereed publications relevant to the project:&lt;br /&gt;
|-&lt;br /&gt;
|Pending publication relevant to the project: &lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;big&amp;gt;&#039;&#039;&#039;&#039;&#039;For All External (not CUNY) Project Collaborators and Researchers (F,G)&#039;&#039;&#039;&#039;&#039;&amp;lt;/big&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Full name as stated on the state/federal ID or the EID from another academic institution (e.g. John Samuel Doe):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Affiliation outside CUNY, if any, and valid professional e-mail (e.g. John Doe, Rutgers University, jd@rutgers.edu):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Department at the non-CUNY academic institution (e.g. MIS, Rutgers):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Non-CUNY e-mail (collaborator/external contact):&lt;br /&gt;
|-&lt;br /&gt;
|Status of the collaborator (Academic: e.g. Professor; Partner: e.g. NVIDIA lab):&lt;br /&gt;
|-&lt;br /&gt;
|Status of the external researcher(s) (e.g. Principal Architect, NVIDIA):&lt;br /&gt;
|-&lt;br /&gt;
|Resources needed (example: Cores: 100; Time: 10,000 hours; Memory per core: 8 GB; GPUs: 2; GPU hours: 100; Storage: 100 GB):&lt;br /&gt;
|-&lt;br /&gt;
|Description of available local resources. Please state NONE if you do not have access to local computational resources. Otherwise provide:&lt;br /&gt;
&#039;&#039;type of computational resource (cluster, advanced workstation):&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;- number of nodes:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;- number of cores per node:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;- memory per node:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;- total number of GPUs in the server:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;- number of GPUs per node:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;- type of GPU (list all types, e.g. 2 x K20m, 4 x K80):&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Consent that HPCC will be cited properly (see our wiki for details) in all your published work &amp;lt;u&amp;gt;including conferences and talks.&amp;lt;/u&amp;gt;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&amp;lt;big&amp;gt;&#039;&#039;&#039;For All CUNY Graduate and Undergraduate Classes (E)&#039;&#039;&#039;&amp;lt;/big&amp;gt;&#039;&#039;&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;Full name as stated on the CUNY ID card (e.g. John Samuel Doe):&#039;&#039;&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;CUNY EID (e.g. 22341356):&#039;&#039;&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;Valid CUNY e-mail.&#039;&#039;&#039; Public emails are not accepted (e.g. azho@cix.csi.cuny.edu):  &lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|Class ID (e.g. CS 220):&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|Class Section (e.g. 02):&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|College (e.g. Baruch College):&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|Name of the Professor:&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|Term (e.g. Fall 2025): &lt;br /&gt;
!&lt;br /&gt;
|}&lt;br /&gt;
Upon creation, every research user account is provided with a 50 GB home directory (with a maximum of 10,000 files on /global/u) mounted as &amp;lt;font face=&amp;quot;courier&amp;quot;&amp;gt;/global/u/&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;userid&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;/font&amp;gt;. If required, a user may request an increase in the size of their home directory; the HPC Center will endeavor to satisfy reasonable requests. If you expect to have more than 10,000 files, please combine the small files into a single larger zip archive. Please keep only wrangled (processed) data in your space in order to optimize use of the existing storage.&lt;br /&gt;
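&lt;br /&gt;
For example, a large set of small input files can be combined into a single archive before it is placed in the home directory. This is only a sketch; the directory and archive names are hypothetical and should be replaced with your own:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# bundle everything under small_inputs/ into one archive (one file instead of thousands)&lt;br /&gt;
zip -r inputs.zip small_inputs/&lt;br /&gt;
&lt;br /&gt;
# restore the individual files later, when they are needed&lt;br /&gt;
unzip inputs.zip&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;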
&lt;br /&gt;
Student class accounts (group D) are provided with a 10 GB home directory. Please note that class accounts and data will be deleted 30 days after the semester ends (unless otherwise agreed upon). Students are responsible for backing up their own data prior to the end of the semester.&lt;br /&gt;
 &lt;br /&gt;
When a user account is established, only the user has read/write access to his or her files. The user can change the UNIX permissions to allow other members of the same group to read/write those files.&lt;br /&gt;
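&lt;br /&gt;
For example, the following commands grant read/write access to the other members of the user&#039;s UNIX group; the file and directory names are hypothetical and only illustrate the idea:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# allow the group to read and write a single file&lt;br /&gt;
chmod g+rw results.dat&lt;br /&gt;
&lt;br /&gt;
# allow the group to read, write and traverse an entire directory tree&lt;br /&gt;
chmod -R g+rwX shared_data/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;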
&lt;br /&gt;
Please be sure to notify the HPC Center if user accounts need to be removed from or added to a specific research group. Please read the account policies below. Note that accounts do not last indefinitely; accounts that are not accessed or not active are removed (see below).&lt;br /&gt;
&lt;br /&gt;
=== User accounts policies ===&lt;br /&gt;
CUNY HPCC applies strict security standards in user account management. HPCC uses “account periods”. The account period is &#039;&#039;&#039;one year&#039;&#039;&#039; for accounts of types &#039;&#039;&#039;A, C, E and F&#039;&#039;&#039; and &#039;&#039;&#039;one semester&#039;&#039;&#039; for types &#039;&#039;&#039;B&#039;&#039;&#039; and &#039;&#039;&#039;D&#039;&#039;&#039;. All accounts are periodically reviewed, and inactive accounts are removed. All student accounts expire automatically and are removed after each semester unless the student’s advisor requests an extension of the student’s account. All user accounts in groups &#039;&#039;&#039;A, C, E and F&#039;&#039;&#039; must be renewed once a year by September 30th. All user accounts in groups &#039;&#039;&#039;B&#039;&#039;&#039; and &#039;&#039;&#039;D&#039;&#039;&#039; must be renewed within 2 weeks after each semester. Accounts not accessed for one account period and/or not renewed are automatically disabled/locked and will be deleted 60 days after locking. Deletion of an account means unrecoverable removal of all data associated with that account.&lt;br /&gt;
&lt;br /&gt;
===Reset Password ===&lt;br /&gt;
&lt;br /&gt;
Users must use the automatic password reset system: click on [https://hpcauth1.csi.cuny.edu/reset/ Reset Password]. Upon resetting, users will receive their individual security token at the e-mail address registered with HPCC.&lt;br /&gt;
&lt;br /&gt;
===Close of account===&lt;br /&gt;
If a user would like to close their account, please contact the CUNY HPC Center at HPCHelp@csi.cuny.edu.&lt;br /&gt;
Supervisors who would like to modify the access of researchers and/or students working for them should contact the HPC Center to remove, add or modify access.&lt;br /&gt;
User accounts that are not accessed or renewed for more than a year and one day will be purged along with any data associated with the account. User accounts that are not renewed on time will be locked, and users must contact HPCC to have access restored.&lt;br /&gt;
&lt;br /&gt;
=== Message of the day (MOTD) ===&lt;br /&gt;
Users are encouraged to read the “Message of the day” (MOTD), which is displayed upon logging onto a system. The MOTD provides information on scheduled maintenance windows, when systems will be unavailable, and/or important changes in the environment that are of interest to the user community. The MOTD is the HPC Center’s only reliable mechanism for communicating with the broader user community, as bulk e-mail messages are often blocked by CUNY spam filters.&lt;br /&gt;
&lt;br /&gt;
===   Required citations ===&lt;br /&gt;
The CUNY HPC Center appreciates the support it has received from the National Science Foundation (NSF).  It is the policy of NSF that researchers who are funded by NSF or who make use of facilities funded by NSF acknowledge the contribution of NSF by including the following citation in their papers and presentations:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;This research was supported, in part, under National Science Foundation Grants: CNS-0958379, CNS-0855217, ACI-1126113 and OAC-2215760 (2022) and the City University of New York High Performance Computing Center at the College of Staten Island.&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The HPC Center, therefore, requests its users to follow this procedure as it helps the Center to demonstrate that NSF’s investments aided the research and educational missions of the University.&lt;br /&gt;
&lt;br /&gt;
== Reporting requirements ==&lt;br /&gt;
The Center reports on its support of the research and educational community to both funding agencies and CUNY on an annual basis. Citations are an important factor included in these reports. Therefore, it is mandatory for users to send copies of research papers developed, in part, using HPC Center resources to [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu]. Accounts of users who violate this requirement may not be renewed. Reporting results obtained with HPC resources also helps the Center to keep abreast of users’ research directions and needs.&lt;br /&gt;
&lt;br /&gt;
== Funding of computational resources and storage ==&lt;br /&gt;
Systems at the HPC Center are purchased with grants from the National Science Foundation (NSF), grants from NYC, a grant from DASNY and a grant from CUNY&#039;s Office of the CIO. In addition, all systems in the condo tier are purchased with direct funds from research groups. The largest financial support comes from &#039;&#039;&#039;NSF MRI grants (more than 80% of all funding).&#039;&#039;&#039; CUNY&#039;s own investment constitutes &#039;&#039;&#039;8.6%&#039;&#039;&#039; of all funds. Here is the list of all grants for CUNY-HPCC.&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;PFSS and GPU Nodes:&#039;&#039;&#039; NSF Grant OAC-2215760 (operational) &lt;br /&gt;
:&#039;&#039;&#039;DSMS&#039;&#039;&#039;, NSF Grant ACI-1126113 (server is partially retired)&lt;br /&gt;
:&#039;&#039;&#039;BLUE MOON&#039;&#039;&#039;, Grant NYC 042-ST030-015 (operational)&lt;br /&gt;
:&#039;&#039;&#039;CRYO&#039;&#039;&#039;, Grant DASNY 208684-000 OP (operational)&lt;br /&gt;
:&#039;&#039;&#039;ANDY&#039;&#039;&#039;, NSF Grant CNS-0855217 and the New York City Council through the efforts of Borough President James Oddo (server is fully retired)&lt;br /&gt;
:&#039;&#039;&#039;APPEL&#039;&#039;&#039;, New York State Regional Economic Development Grant through the efforts of State Senator Diane Savino (operational)&lt;br /&gt;
:&#039;&#039;&#039;PENZIAS&#039;&#039;&#039;, the Office of the CUNY Chief Information Officer (server is partially retired)&lt;br /&gt;
:&#039;&#039;&#039;SALK&#039;&#039;&#039;, NSF Grant CNS-0958379 and a New York State Regional Economic Development Grant (server is fully retired)&lt;/div&gt;</summary>
		<author><name>Alex</name></author>
	</entry>
</feed>