[[File:CUNY-HPCC-HEADER-LOGO.jpg]]
__TOC__


[[Image:hpcc-panorama3.png]]


The City University of New York (CUNY) High Performance Computing Center (HPCC) is located on the campus of the College of Staten Island, 2800 Victory Boulevard, Staten Island, New York 10314. CUNY-HPCC supports computational research and computationally intensive courses at the graduate and undergraduate levels offered at all CUNY colleges, in fields such as Computer Science, Engineering, Bioinformatics, Chemistry, Physics, Materials Science, Genetics, Computational Biology, Finance, and others. HPCC also provides educational outreach to local schools and supports undergraduates working in the research programs of the host institution (e.g., the NSF REU program). The primary mission of HPCC is to:


* Enable advanced research and scholarship at CUNY colleges by providing faculty, staff, and students with access to high-performance computing, adequate storage, and visualization resources;
* Support the scientific computing needs of CUNY faculty, their collaborators at other universities, their public and private sector partners, and CUNY students and research staff;
* Enable advanced and cross-disciplinary education by providing flexible and scalable resources;
* Provide CUNY faculty and their collaborators at other universities, CUNY research staff, and CUNY graduate and undergraduate students with expertise in scientific computing, parallel scientific computing (HPC), software development, advanced data analytics, data-driven and simulation science, visualization, advanced database engineering, and more;
* Leverage the HPC Center capabilities to acquire additional research resources for CUNY faculty, researchers, and students in existing and major new programs;
* Create opportunities for the CUNY research community to win grants from national funding institutions and to develop new partnerships with the government and private sectors.
CUNY-HPCC is a voting member of the '''Coalition for Academic Scientific Computation (CASC)'''. Originally formed in the 1990s as a small group of the heads of national supercomputing centers, CASC has expanded to more than 100 member institutions representing many of the nation's most forward-thinking universities and computing centers. CASC includes the leadership of large academic computing centers such as TACC and the San Diego Supercomputer Center, and has recently attracted a greater diversity of smaller institutions such as non-R1s, HBCUs, HSIs, and TCUs. CASC's mission is to be "''dedicated to advocating for the use of the most advanced computing technology to accelerate scientific discovery for national competitiveness, global security, and economic success, as well as develop a diverse and well-prepared 21st century workforce.''"


== CUNY-HPCC - Democratization of Research ==
In the last few years the model of cloud computing (also called computing-on-demand) has made the promise that anyone, no matter where the user is, could leverage almost unlimited computing resources. This model was supposed to "democratize" research and level the playing field. Unfortunately, that is not entirely true (for now): cloud computing, even though it is available to nearly anyone from nearly anywhere, remains expensive compared to local resources and lacks the flexibility and accessibility of local support tailored to education and research that a local research HPCC offers. Indeed, every computational environment has limitations and a learning curve, and students and faculty coming from a variety of backgrounds may feel overwhelmed and helpless without close and personalized local support. In this sense, a carefully designed, user-centered, academically focused CUNY-HPCC has transformative capability for rapidly evolving computational and data-driven research, creates opportunities for broad collaboration and convergence research, and thus provides real democratization of research.


== Pedagogical value of CUNY-HPCC ==
CUNY-HPCC supports a whole variety of graduate and undergraduate classes from all CUNY colleges, the Graduate Center, and the institutes. It is important to mention that the impact of CUNY-HPCC goes beyond the STEM disciplines. Thus CUNY-HPCC:
[[File:NAnoBio6.jpg|right|frameless|Dr. Alexander Tzanov, director of CUNY-HPCC, speaks at a NanoBioNYC workshop]]
* '''Allows analysis of datasets that are too large to work with easily on personal devices, or that cannot easily be shared or disseminated.''' These datasets do not come only from STEM fields (finance, economics, linguistics, etc.). Facilitating these analyses gives students the opportunity to interact in real time with increasingly large amounts of data, enabling them to gather important skills and experience.
* '''Provides a collaborative space for entire courses.''' The multi-user capabilities and environment of HPCC facilitate collaborative work among learners and support more complex, closer-to-reality learning problems.
* '''Enables analytical techniques too demanding for personal devices''' thanks to the large computational and visualization capabilities of HPCC. Students can run unattended parameter sweeps or workflows in order to explore a problem in detail. Such self-exploration has a proven positive effect on learning.
* '''Provides students with prerequisite skills and knowledge''' they may need later when they explore larger HPC environments. For instance, the CUNY-HPCC workflow and environment is very close to the environment of other research centers and of ACCESS resources.
* '''Participates in educational programs such as the NSF-funded NanoBioNYC Ph.D. traineeship program at CUNY.''' This program is focused on developing groundbreaking bio-nanoscience solutions to address urgent human and planetary health issues and on preparing students to become tomorrow's leaders in diverse STEM careers.


==Research Computing Infrastructure==
[[File:HPCC structure last.png|thumb|682x682px|'''<big>The organization of HPCC resources</big>''']]
The research computing infrastructure is depicted in the figure on the right. In order to support various types of research projects, CUNY-HPCC maintains a variety of computational architectures. All computational resources are organized in '''3''' tiers - '''''Condominium Tier (CT)''''', '''''Free Tier (FT)''''' and '''''Advanced Tier (AT)''''', plus visualization ('''''Vz'''''). All nodes in all tiers are attached to the central file system '''HPFS''', which provides '''/scratch''' and the Global Storage ('''GS''') - '''/global/u/'''.


=== Storage systems ===
The '''/scratch''' file system mentioned above is a small, fast file system mounted on all nodes. It resides on solid state drives only, which makes it fast, and has a capacity of 256 GB. Note that files on this file system are '''not backed up and are not protected.''' The file system <u>does not have quotas</u>, so users can submit large jobs. It is automatically purged if: '''1.''' the load of the file system exceeds 70%, or '''2.''' file(s) have not been accessed for <u>60 days, whichever comes first.</u> The partition '''/global/u''' in the main HPFS file system holds user home directories. HPFS is a hybrid file system combining SSD and HDD (solid state and hard disks) with capabilities for dynamic relocation of files; its capacity is 2 petabytes (PB). This file system was purchased under NSF grant OAC-2215760 and is mounted on all nodes. Users must use the "staging" procedure described below to ensure preservation of their data, codes, and parameter files.

Upon registering with HPCC every user will get 2 directories:

:• '''<font face="courier">/scratch/<font color="red"><userid></font></font>''' – temporary workspace on the HPC systems
:• '''<font face="courier">/global/u/<font color="red"><userid></font></font>''' – the "home directory", i.e., long-term storage space for programs, scripts, and data
:• In some instances a user will also have use of disk space in '''<font face="courier">/cunyZone/home/<font color="red"><projectid></font></font>''' (iRODS).

The '''/global/u/<userid>''' directory has a quota (see below for details) while '''/scratch/<userid>''' does not. However, the '''/scratch''' space is cleaned up following the rules described below, and there are no guarantees of any kind that files in '''/scratch''' will be preserved during hardware crashes or clean-ups. Access to all HPCC resources is provided through the bastion host called '''''chizen'''''. The Data Transfer Node called '''Cea''' allows file transfer from/to remote sites directly to/from '''<font face="courier">/global/u/<font color="red"><userid></font></font>''' or to/from '''<font face="courier">/scratch/<font color="red"><userid></font></font>'''.
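A minimal staging sketch is shown below (the user id <code>jdoe</code>, the project directory names, and the Cea address are placeholders for illustration only; use the account name and host addresses provided with your HPCC account):

<pre>
# From your own workstation: push input data to your home directory through the Data Transfer Node (Cea)
scp input.tar.gz jdoe@<cea-address>:/global/u/jdoe/project1/

# On an HPCC login node: stage the input from the home directory to /scratch before submitting a job
mkdir -p /scratch/jdoe/project1
cp /global/u/jdoe/project1/input.tar.gz /scratch/jdoe/project1/
cd /scratch/jdoe/project1 && tar xzf input.tar.gz

# After the job finishes: copy the results back to /global/u, because /scratch is purged automatically
cp -r /scratch/jdoe/project1/results /global/u/jdoe/project1/
</pre>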


=== Computational and Visualization Resources ===
The computational resources in the 3 tiers mentioned above are combined within the ARROW hybrid cluster. In addition, HPCC operates a specialized visualization server which shares the file system with all nodes; this allows <u>in-situ visualization</u> of simulations. The description of the nodes is given in the table below. Note that '''black''' denotes the '''basic''' tier, '''blue''' the '''advanced''' tier, and '''orange''' the '''condo''' tier; '''yellow''' marks the '''visualization''' tier.
[[File:Arrow Viz Resources.png|frameless|Computational And Visualization Resources|911x911px|left]]
====Run jobs on HPCC resources ====
Regardless of tier, all jobs at HPCC must:


* Start from the user's directory on the '''/scratch''' file system - '''/scratch/<userid>'''. Jobs cannot be started from users' home directories - '''/global/u/<userid>'''.
* Use the SLURM job submission system (job scheduler). All job submission scripts written for other job scheduler(s) (e.g. PBS Pro) must be converted to SLURM syntax.
* <u>Start from the Master Head Node (MHN)</u>. Jobs are distributed automatically to the different tiers according to the job submission policies and the job's requirements, so users do not need to communicate directly with any of the servers. In the near future the submission process will be improved further with the launch of an HPC job submission portal.


All useful user data must be kept in the user's home directory on '''/global/u/<userid>'''. Jobs can be started only from the '''/scratch''' directory and '''never from''' the '''/global/u/<userid>''' directory. It is important to remember that '''/scratch''' is not the main storage for users' accounts (home directories), but <u>temporary storage used for job submission only.</u> Thus:

#data in '''/scratch''' are not protected, preserved or backed up and can be lost at any time. CUNY-HPCC has no obligation to preserve user data in '''/scratch'''.
#'''/scratch''' undergoes regular and automatic file purging when either or both conditions are satisfied:
##the load of the '''/scratch''' file system reaches '''70+%''';
##there are inactive file(s) older than 60 days.
#only data in '''GS''' ('''/global/u''') are protected and recoverable.
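For illustration, a minimal SLURM batch script that follows these rules is sketched below (the partition name, task count, and application name are placeholders; adjust them to the partition and software you are entitled to use):

<pre>
#!/bin/bash
#SBATCH --job-name=example_job      # job name shown by squeue
#SBATCH --partition=partnsf         # partition you are authorized to use (placeholder)
#SBATCH --nodes=1                   # number of nodes
#SBATCH --ntasks=16                 # number of tasks (cores)
#SBATCH --time=24:00:00             # wall-clock limit, must fit the partition limits
#SBATCH --output=%x_%j.out          # output file (%x = job name, %j = job id)

# The job runs from the /scratch work directory, never from /global/u
cd /scratch/$USER/myproject

# Launch the (placeholder) application
srun ./my_application input.dat
</pre>

The script is submitted from the /scratch work directory on a login node with <code>sbatch myjob.slurm</code>, and its status can be checked with <code>squeue -u $USER</code>.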


==HPC systems==


The HPC Center operates a variety of architectures in order to support complex and demanding workflows. All computational resources of different types are united into a single hybrid cluster called '''Arrow'''. The latter deploys symmetric multiprocessor (SMP) nodes with and without GPU, a distributed shared memory (NUMA) node, fat (large memory) nodes, and advanced SMP nodes with multiple GPUs. The number of GPUs per node varies between 2 and 8, as do the GPU interface and GPU family: the basic GPU nodes hold two Tesla K20m cards (plugged in through the PCIe interface), while the most advanced ones support eight Ampere A100 GPUs connected via the SXM interface.


''Overview of computational architectures'':


'''SMP''' servers have several processors (working under a single operating system) which "share everything": all cpu-cores allocate a common memory block via a shared bus or data path. SMP servers support all combinations of memory vs. cpu (up to the limits of the particular computer). SMP servers are commonly used to run sequential or thread-parallel (e.g. OpenMP) jobs, and they may or may not have GPUs.


A '''cluster''' is defined as a single system comprising a set of servers interconnected with a high performance network. Specific software coordinates programs on and/or across those servers in order to perform computationally intensive tasks. The most common cluster type consists of several identical SMP servers connected via a fast interconnect. Each SMP member of the cluster is called a '''node'''. All nodes run independent copies of the same operating system (OS). Some or all of the nodes may incorporate GPUs.


'''Hybrid clusters''' combine nodes of different architectures. For instance, the main CUNY-HPCC machine is a hybrid cluster called '''Arrow'''. Sixty two (62) of its nodes are identical GPU-enabled SMP servers, each with 2 x K20m GPUs; 3 are SMP nodes with extended memory (fat nodes); one node is a distributed shared memory node (NUMA, see below); and 2 are fat SMP servers especially designed to support 8 NVIDIA GPUs per node, connected via the SXM interface. In addition, HPCC operates the cluster '''Herbert''', dedicated only to education.


A '''distributed shared memory''' computer is a tightly coupled server in which the memory is physically distributed but logically unified as a single block. The system resembles an SMP, but the possible number of cpu cores and amount of memory are far beyond the limitations of an SMP. Because the memory is distributed, the access times across the address space are non-uniform; this architecture is therefore called Non-Uniform Memory Access (NUMA). Similarly to SMP, NUMA systems are typically used for applications such as data mining and decision support systems, in which processing can be parceled out to a number of processors that collectively work on common data. HPCC operates the NUMA node of Arrow, named '''Appel'''. This node does not have GPUs.


''Infrastructure systems'':


* Master Head Node ('''MHN'''/'''Arrow''') is a redundant login node from which all jobs on all servers start. This server is not directly accessible from outside the CSI campus. Note that the name of the main server and of its login nodes is the same, Arrow; thus users can access the Arrow login nodes using the name Arrow or MHN.
* '''Chizen''' is a redundant gateway server which provides access to the protected HPCC domain.
* '''Cea''' is a file transfer node allowing transfer of files between users' computers and the /scratch space or /global/u/<userid>. '''Cea''' is accessible directly (not only via '''Chizen'''), but allows only a limited set of shell commands.
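As an illustration of the login path (the gateway address is a placeholder; use the addresses provided with your account), access goes through the gateway '''Chizen''' and then to the '''Arrow'''/MHN login nodes:

<pre>
# Two-step login: first the gateway (bastion) host, then the Arrow/MHN login node
ssh <userid>@<chizen-address>
ssh <userid>@arrow

# Or in one step from your workstation, using an SSH jump host
ssh -J <userid>@<chizen-address> <userid>@arrow
</pre>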


'''Table 1''' below provides a quick summary of the attributes of each of the sub-clusters of the main HPC Center cluster, called Arrow.

{| class="wikitable"
!Master Head Node
!Sub System
!Tier
!Type
!Type of Jobs
!Nodes
!CPU Cores
!GPUs
!Mem/node
!Mem/core
!Chip Type
!GPU Type and Interface
|-
| rowspan="17" |'''<big>Arrow</big>'''
| rowspan="4" |Penzias
| rowspan="10" |Advanced
| rowspan="4" |Hybrid Cluster
|Sequential & Parallel jobs w/wo GPU
|66
|16
|2
|64 GB
|4 GB
|SB, EP 2.20 GHz
|K20m GPU, PCIe v2
|-
| rowspan="3" |Sequential & Parallel jobs
| rowspan="3" |1
|24
| -
|1500 GB
|62 GB
| rowspan="3" |HL, 2.30 GHz
| -
|-
|36
| -
|768 GB
|21 GB
| -
|-
|24
| -
|768 GB
|32 GB
| -
|-
|Appel
|NUMA
|Massive Parallel, sequential, OpenMP
|1
|384
| -
|11 TB
|28 GB
|IB, 3 GHz
| -
|-
|Cryo
|SMP
|Sequential and Parallel jobs, with GPU
|1
|40
|8
|1500 GB
|37 GB
|SL, 2.40 GHz
|V100 (32GB) GPU, SXM
|-
| rowspan="2" |Blue Moon
| rowspan="2" |Hybrid Cluster
| rowspan="2" |Sequential and Parallel jobs w/wo GPU
|24
|32
| -
| rowspan="2" |192 GB
| rowspan="2" |6 GB
| rowspan="2" |SL, 2.10 GHz
| -
|-
|2
|32
|2
|V100 (16GB) GPU, PCIe
|-
|Karle
|SMP
|Visualization, MATLAB/Mathematica
|1
|36*
| -
|768 GB
|21 GB
|HL, 2.30 GHz
| -
|-
|Chizen
|Gateway
|No jobs allowed
| colspan="7" | -
|-
| rowspan="2" |CFD
| rowspan="2" |Condo
| rowspan="2" |SMP
| rowspan="7" |Parallel, Seq, OpenMP
|1
|48
|2
|768 GB
| -
|EM, 4.8 GHz
|A40, PCIe, v4
|-
|1
|48
| -
|512 GB
| -
|ER, 4.3 GHz
| -
|-
| rowspan="2" |PHYS
| rowspan="2" |Condo
| rowspan="2" |SMP
|1
|48
|2
|640 GB
| -
|ER, 4 GHz
|L40, PCIe, v4
|-
|1
|48
| -
|512 GB
| -
|ER, 4.3 GHz
| -
|-
| rowspan="2" |CHEM
| rowspan="2" |Condo
| rowspan="2" |SMP
|1
|48
|2
|256 GB
| -
|EM, 2.8 GHz
|A30, PCIe, v4
|-
|1
|128
|8
|512 GB
| -
|ER, 2.0 GHz
|A100/40, SXM
|-
|ASRC
|Condo
|SMP
|1
|48
|2
|256 GB
| -
|ER, 2.8 GHz
|A30, PCIe, v4
|}

===<u>Condominium Tier</u>===
The Condominium tier (called '''condo''') organizes resources purchased and owned by faculty but maintained by HPCC. Participation in this tier is '''strictly voluntary'''. Several faculty/research groups can combine funds to purchase and consequently share the hardware (a node or several nodes). In order to be accepted, all nodes in this tier must meet certain hardware specifications, including being fully warranted for the lifetime of the node(s). If you want to participate in the condominium, please send a request mail to hpchelp@csi.cuny.edu and consult HPCC before making a purchase. The condominium tier:

* Promotes vertical and horizontal collaboration between research groups;
* Makes it possible to utilize small amounts of research money or "left-over" money wisely and to obtain advanced resources;
* Helps researchers conduct large-scope, high-quality research, including collaborative projects leading to successful grants with high impact.

====Access to condo resources====
The resources are available only to condo owners and their groups. Users registered with a condo must use the main login node of the Arrow server and specify their own private partition to access their node(s). In addition, there are partitions which operate over two or more nodes owned by condo members. Condo tier members benefit from professional support from HPCC, including security and maintenance. Upon approval from the node owner, any idle node(s) can be used by other member(s); for instance, a member can borrow (for an agreed time) a node with more advanced GPUs than those installed on his/her own node(s). The owners of the equipment are responsible for any repair costs for their node(s). Other users may rent any of the condo resources described below if agreed with the owners, and unused cycles can be shared with other members of the community.

In sum, the benefits of condo are:

*'''5 year lifecycle''' - condo resources will be available for a duration of 5 years.
*'''Access to more CPU cores than purchased''' and access to resources which were not purchased.
*'''Support''' - HPCC staff will install, upgrade, secure and maintain condo hardware throughout its lifecycle.
*'''Access to the main application server.'''
*'''Access to HPC analytics.'''

Responsibilities of condo members:

*'''To share their resources''' (when idle or partially available) with other members of the condo;
*'''To include money for computing in their research and instrumentation grants''', used to cover operational (non-tax-levy) expenses of the HPCC.

The table below summarizes the available resources.
{| class="wikitable sortable"
|+Resources in Condominium Tier (Arrow cluster)
!Number of nodes
!Cores/node
!Chip
!Memory/node
!GPU/node
!Interconnect
!Use
!Private partition
|-
|2
|64
|2 x AMD EPYC
|256 GB
|2 x A30 24 GB, PCIe gen 3
|100 Gbps Infiniband EDR
|Number Crunching
|partchem, partasrc
|-
|1
|64
|2 x AMD EPYC
|512 GB
| --
|100 Gbps Infiniband EDR
|Number Crunching
|partphys
|-
|2
|128
|2 x AMD EPYC
|512 GB
|2 x A40 48 GB, PCIe gen 3
|100 Gbps Infiniband EDR
|Number Crunching
|parting, partmath
|-
|1
|128
|2 x AMD EPYC
|512 GB
|8 x A100 40GB, SXM
|200 Gbps Infiniband HDR
|Number Crunching
|partchem
|}

==='''<u>Advanced Tier</u>'''===
The advanced tier holds the resources used for more advanced or large-scale research. This tier provides nodes with Volta class GPUs with 16 GB and 32 GB of on-board memory. The table below summarizes the resources.
{| class="wikitable sortable"
|+Resources in Advanced Tier (Blue Moon, Cryo, Appel): 1256 cores and 12 Volta class GPU
!Number of Nodes
!Cores/node
!Chip
!Memory/node
!GPU/node
!Interconnect
!Use
!Association
|-
|24
|32
|2 x Intel X86_64
|192 GB
| --
|100 Gbps Infiniband EDR
|Number Crunching
|Blue Moon Cluster
|-
|2
|32
|2 x Intel X86_64
|192 GB
|2 x V100 (16 GB), PCIe gen 3
|100 Gbps Infiniband EDR
|Number Crunching
|Blue Moon Cluster
|-
|1
|40
|2 x Intel X86_64
|1500 GB
|8 x V100 (32 GB), SXM
|100 Gbps Infiniband EDR
|Number Crunching
|Cryo
|-
|1
|384
|2 x Intel X86_64
|11,000 GB
| --
|56 Gbps Infiniband QDR
|Number Crunching
|Appel
|}

===<u>Basic tier</u>===
The basic tier provides resources for sequential and moderate-size parallel jobs. OpenMP jobs can run only within the scope of a single node, while distributed parallel (MPI) jobs can run across the cluster. This tier also supports MATLAB Parallel Server, which can run across nodes. Users can also run GPU-enabled jobs, since this tier has 132 Tesla K20m GPUs; please note that these GPUs are no longer supported by NVIDIA and many applications may no longer support them either. The table below summarizes the resources of this tier.
{| class="wikitable sortable mw-collapsible"
|+Resources in Free Tier (Penzias, Karle): 1056 cores and 132 Tesla class GPU
!Number of nodes
!Cores/node
!Chip
!Memory/node
!GPU/node
!Interconnect
!Use
!Association
|-
|66
|16
|2 x Intel X86_64
|64 GB
|2 x K20m, PCIe 2
|56 Gbps Infiniband
|Number crunching, Sequential jobs, General computing, Distributed Parallel Matlab
|Penzias
|-
|1
|36/72*
|2 x Intel X86_64
|768 GB
| --
|56 Gbps Infiniband
|Visualization, Matlab, Parallel Matlab (toolbox)
|Karle
|}
<nowiki>*</nowiki> Hyperthread
Note: SB = Intel(R) Sandy Bridge, HL = Intel (R) Haswell, IB = Intel (R) Ivy Bridge, SL = Intel (R) Xeon(R) Gold, ER  = AMD(R) EPYC ROMA, EM = AMD(R) EPYC MILAN, EG = AMD (R) EPYC GENOA 
 
===<u>Arrow cluster and hybrid storage (NSF grant 2023 equipment)</u>===
This equipment consists of a large hybrid parallel file system and 2 computational nodes integrated in the cluster named Arrow. The file system has a capacity of 2 PB (petabytes) and a bandwidth of 35 GBps for writes and 50 GBps for reads. The computational node details are summarized in the table below.
{| class="wikitable sortable mw-collapsible"
|+Resources in NSF grant equipment (Arrow): Total of 256 cores and 16 GPU Ampere A100/80GB GPU
!Number of Nodes
!Cores/node
!GPU/node
!Memory/node
!Chip
!Interconnect
!Use
!Association


== Recovery of  operational costs ==
CUNY-HPCC is not for profit core research facility at CUNY. Our mission is to support all types of research that require advanced computational resources. CUNY-HPCC operations are not for profit and are NOT directly or indirectly sponsored by CUNY or  College of Staten Island (CSI). Consequently CUNY-HPCC applies cost recovery model recapturing only '''<u>operational costs with no profit for HPCC</u>'''. The recovered costs are calculated using actual documented operational expenses and are break even for all CUNY users. The used methodology is approved by CUNY-RF methodology used in other CUNY research facilities. The costs are reviewed and consequently updated twice a year. The cost recovery charging schema is based on '''<u>unit-hour</u>'''. The unit can be either CPU  unit or GPU unit. The definitions of these is given in a table below:
{| class="wikitable mw-collapsible"
|+Definitions of unit-hour
!Type of resource
!Unit-hour
!For V100, A30, A40 or L40
!For A100
|-
|CPU unit
|1 cpu core/hour
| --
| --
|-
|-
|2
|GPU unit
|128
|(4 cpu cores + 1 GPU thread )/hour
|8 x A100/80GB
|4 cpu cores + 1 GPU
|1024 GB
|4 cpu cores and 1/7 A100
| 2 x AMD EPYC
|HDR 100 Gbps
|Molecular modeling, Data science, Number Crunching, Materials Science, AI, ML
| Arrow
|}
|}


=== HPCC access plans ===
a.     '''Minimum access (MAP):'''


Minimum access is designed to provide wide support for research activities in any college, to promote collaboration between colleges, to help establish new research projects, and/or to serve as a testbed for new studies. MAP accounts operate under a strict fair-share policy, so the actual waiting time of a job in a queue depends on the resources used by that account in previous cycles. In addition, all jobs have strict time limitations; therefore long jobs must use checkpoints.


The MAP has 3 tiers:


* A: Basic tier, $5,000 per year. It is designed to provide support for users from colleges with a low level of research activity. The fee covers infrastructure expenses associated with 1-2 users from these colleges.
* B: Medium tier, $15,000 per year. The fee covers infrastructure expenses of up to 12 users from these colleges. In addition, every account under the medium tier gets 11,520 free CPU hours and 1,440 free GPU hours upon opening.
* C: Advanced tier, $25,000 per year. The fee covers infrastructure expenses for all users from these colleges. In addition, every new account from this tier gets 11,520 free CPU hours and 1,440 free GPU hours upon opening.


MAP users are charged per CPU/GPU hour at the low rate of '''<u>$0.015 per cpu hour and $0.09 per GPU hour</u>.'''
{| class="wikitable mw-collapsible"
|+Cost recovery fees for MAP users
|Job
|Cpu cores
|GPU
|Cost/hour
|-
|1 core no GPU
|1
|0
|$0.015/hour
|-
|16 cores no GPU
|16
|0
|$0.24/hour
|-
| 4 cores + 1 GPU
|4
|1
|$0.15/hour
|-
|16 cores + 1 GPU
|16
|1  
|$0.33/hour
|-
|16 cores + 2 GPU
|16
|2  
|$0.42/hour
|-
|32 cores + 2 GPU
|32
|2  
|$0.66/hour
|-
|40 cores + 8 GPU
|40
|8
|$1.32/hour
|}
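For orientation, each hourly fee in the table is simply the sum of the per-core and per-GPU rates quoted above; for example, the 16 cores + 2 GPU row is 16 × $0.015 + 2 × $0.09 = $0.24 + $0.18 = $0.42 per hour.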


b.     '''Computing on demand (CODP)'''
 
The Computing on Demand plan (CODP) is open to all users from all CUNY colleges who do not participate in the MAP plan but want to use HPCC resources. CODP accounts operate under a strict fair-share policy, so the actual waiting time of a job in a queue depends on the resources previously used. In addition, all jobs have time limitations, so long jobs must use checkpoints. Users in CODP are charged for the time (CPU and GPU) per hour; the current rates are '''$0.018 per cpu hour and $0.11 per GPU hour.''' In contrast to MAP, new CODP accounts do not come with free time. Invoices are generated and sent to users (PI only) at the end of each month. The examples in the following table explain the fee structure:


{| class="wikitable mw-collapsible"
|+Cost recovery fees for CODP plan
|Job
|Cpu cores
|GPU
|Cost/hour
|-
|1 core no GPU
|1
|0
|$0.018/hour
|-
|16 cores no GPU
|16
|0
|$0.288/hour
|-
|4 cores + 1 GPU
|4
|1
|$0.293/hour
|-
|16 cores + 1 GPU
|16
|1
|$0.334/hour
|-
|32 cores + 1 GPU
|32
|1
|$0.666/hour
|-
|32 cores + 2 GPU
|32
|2
|$0.756/hour
|}


c.  '''Leasing node(s) (LNP)'''
The Leasing node plan allows users to lease node(s) for the duration of a project. The minimum lease time is 30 days (one month), but leases of any length are possible. A discount of 10% is given to users whose lease is longer than 90 days; discounts cannot be combined. In contrast to MAP and CODP, LNP users do not compete for resources and have full access to the leased resources 24/7.


{| class="wikitable mw-collapsible"
|+Lease node(s) fees for MAP users
|Job (MAP users)
|Cpu cores
|GPU
|Cost/30 days
|-
|1 core no GPU
|1
|0
|NA
|-
|16 cores no GPU
|16
|0
|$172.80
|-
|32 cores no GPU
|32
|0
|$264.96
|-
|16 cores + 2 GPU
|16
|2
|$302.40
|-
|32 cores + 2 GPU
|32
|2
|$475.20
|-
|40 cores + 8 GPU
|40
|8
|$760.00
|-
|64 cores + 8 GPU
|64
|8
|$950.40
|}


{| class="wikitable mw-collapsible"
|+Fees for leasing node(s) for <span style="color:red;">non</span>-MAP users
|Job (non-MAP users)
|Cpu cores
|GPU
|Cost/month
|-
|1 core no GPU
|1
|0
|NA
|-
|16 cores no GPU
|16
|0
|$249.82
|-
|32 cores no GPU
|32
|0
|$497.64
|-
|16 cores + 1 GPU
|16
|1
|$443.23
|-
|32 cores + 2 GPU
|32
|2
|$886.64
|-
|40 cores + 8 GPU
|40
|8
|$1399.68
|}
 
 
d.     '''Condo Ownership (COP)'''


Condo describes a model in which user(s) own a node/server managed by HPCC. Only full-time faculty can own a condo node. Condo nodes are fully integrated into the HPCC infrastructure. The owners pay only HPCC's infrastructure support operational fee, which includes only a proportional part of the licenses and materials needed for day-to-day operations. The fees are reviewed twice a year and currently are $0.003 per CPU hour and $0.02 per GPU hour. Condo owners can "borrow" (upon agreement) free of charge any node(s) from the condo stack, and can also lease (for a higher fee - see below) their own nodes to non-condo users. The minimum lease time is 30 days. The fees collected from non-condo users offset the payments of the owner.
{| class="wikitable mw-collapsible"
|+Condo owners costs per year
|Type of condo node
|Cpu cores
|GPU
|Cost/year
|-
|Large hybrid SXM
|128
|8
|$4518.92
|-
|Small hybrid
|48
|2
|$1540.54
|-
|Medium compute
|96
|0
|$2464.86
|-
|Large compute
|128
|0
|$3286.49
|}
 


Condo owners can contract their node(s) to other non-condo users. The renting period is unlimited, with a minimum length of 30 days. The table below shows the payments that non-condo users recompense the condo owners; these fees are accumulated in the owner's account(s) and offset the owner's dues. A discount of 10% is applied for leases longer than 90 days.
{| class="wikitable mw-collapsible"
|+Type of nodes and lease fees for condo nodes
!Type of node
!Renters cost/month
!Long term (90+ days) rent cost/month
!CPU/node
!CPU type
!GPU/node
!GPU type
!GPU interface
|-
|Large Hybrid
|$602.52
|$564.86
|128
|EPYC, 2.2 GHz
|8
|A100/80
|SXM
|-
|Small Hybrid
|$205.41
|$192.57
|48
|EPYC, 2.8 GHz
|2
|A40, A30, L40
|PCIe v4
|-
|Medium Non GPU
|$328.65
|$308.11
|96
|EPYC, 4.11 GHz
|None
| --
|NA
|-
|Large Non GPU
|$438.20
|$410.81
|128
|EPYC, 2.0 GHz
|None
| --
|NA
|}


=== Free time ===
In order to establish a project, all new users from colleges that participate in the MAP plan (tiers B and C only) are entitled to '''11,520 free CPU hours and 1,440 free GPU hours.''' Any additional hours are charged at MAP plan rates. Note that '''<u>free time is per user account, not per project</u>''', so any user can receive free time only once. External collaborators of CUNY are not normally eligible for free time. '''<u>Please contact the CUNY-HPCC director for further details.</u>'''
== Support for research grants ==
'''<u>All proposals dated January 1st, 2026 (<span style="color:red;">01/01/26</span>) or later</u>''' that require computational resources '''<u>must include a budget for cost recovery fees at CUNY-HPCC.</u>''' For a project the PI can choose between the following options:

* Lease node(s). This is a useful option for well defined projects and those with a high computational component requiring 100% availability of the computational resource.
* Use "on-demand" resources. This is a flexible option, good for experimental projects or for exploring new areas of study. The downside is that resources are shared among all users under the fair-share policy, so immediate access to a resource cannot be guaranteed.
* Participate in the CONDO tier. This is the most beneficial option in terms of availability of resources and level of support, and best fits the focused research of group(s) (e.g. materials science).

In all cases the PI can use the appropriate rates listed above to establish a correct budget for the proposal. The PI should '''<u>contact the Director of CUNY-HPCC, Dr. Alexander Tzanov</u>''' (alexander.tzanov@csi.cuny.edu), to discuss the project's computational requirements, including optimal and most economical computational workflows, suitable hardware, shared or owned resources, CUNY-HPCC support options, and any other matter concerning a correct and optimal computational budget for the proposal.
== Partitions and jobs ==
The only way to submit job(s) to HPCC servers is through the SLURM batch system. Any job, regardless of its type (interactive, batch, serial, parallel, etc.), must be submitted via SLURM, which allocates the requested resources on the proper server and starts the job(s) according to a predefined strict fair-share policy. Computational resources (cpu-cores, memory, GPU) are organized in '''partitions'''. Users are granted permission to use one or another partition and the corresponding QOS key. No PBS Pro scripts should ever be used, and all existing PBS scripts must be converted to SLURM syntax before use. The table below shows the partitions and their limitations (in progress).
{| class="wikitable sortable"
{| class="wikitable mw-collapsible"
|+
|+
!Partition
!Partition
!Max cores/job
!Max cores/job
!Max jobs/user  
!Max jobs/user
!Total cores/group
!Total cores/group
!Time limits
!Time limits
!Tier
!
!GPU types
!
|-
|-
|production
|partnsf
|128
|128
|50
|50
|256
|256
|240 Hours
|240 Hours
|Advanced
|
|K20m, V100/16, A100/40
|
|-
|-
|partedu
|partchem
|128
|50
|256
|No limit
|Condo
|
|A100/80, A30
|
|-
|partcfd
|96
|50
|96
|No limit
|Condo
|
|A40
|
|-
|partsym
|96
|50
|96
|No limit
|Condo
|
|A30
|
|-
|partasrc
|48
|16
|16
|2
|16
|216
|No limit
|72 Hours
|Condo
|
|A30
|
|-
|-
|partmath
|partmatlabD
|128
|128
|128
|128
|50
|256
|240 Hours
|240 Hours
|Advanced
|
|V100/16,A100/40
|
|-
|-
|partmatlab
|partmatlabN
|1972
|384
|50
|50
|1972
|384
|240 Hours
|240 Hours
|Advanced
|
|None
|
|-
|-
|partdev
|partphys
|16
|96
|16
|50
|16
|96
|4 Hours
|No limit
|Condo
|
|L40
|
|}
|}
* '''partnsf''' is the main partition, with assigned resources across all sub-servers. Users may submit sequential, thread-parallel or distributed parallel jobs, with or without GPU.
* '''partchem''' is a CONDO partition.
* '''partphys''' is a CONDO partition.
* '''partsym''' is a CONDO partition.
* '''partasrc''' is a CONDO partition.
* '''partmatlabD''' allows running MATLAB Distributed Parallel Server across the main cluster.
* '''partmatlabN''' gives access to the large MATLAB node with 384 cores and 11 TB of shared memory. It is useful for running parallel MATLAB jobs with the Parallel Toolbox.
* '''partdev''' is dedicated to development. All HPCC users have access to this partition, with assigned resources of one computational node with 16 cores, 64 GB of memory and 2 GPUs (K20m). This partition has a time limit of 4 hours.
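For illustration, a batch request on the main partition and a short interactive session on the development partition might look as follows (the QOS name and application are placeholders; use the QOS key assigned to your group):

<pre>
# Batch job on the main partition requesting 16 cores and 2 GPUs (QOS name is a placeholder)
sbatch --partition=partnsf --qos=<your_qos> --ntasks=16 --gres=gpu:2 --time=24:00:00 myjob.slurm

# Short interactive session on the development partition (time limit 4 hours)
srun --partition=partdev --ntasks=4 --gres=gpu:1 --time=01:00:00 --pty /bin/bash
</pre>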
== Hours of Operation ==
In order to maximize the use of resources, HPCC applies a "rolling" maintenance scheme across all systems. When downtime is needed, HPCC will notify all users a week or more in advance (unless an emergency situation occurs). Typically, the fourth Tuesday morning of the month, from 8:00 AM to 12:00 PM, is reserved (but not always used) for scheduled maintenance; please plan accordingly. Unplanned maintenance to remedy system-related problems may be scheduled as needed outside the above-mentioned days, and reasonable attempts will be made to inform users running on the affected systems when these needs arise. Note that users are strongly encouraged to use checkpoints in their jobs.
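For long jobs, a simple way to honor this advice is to ask SLURM to warn the application shortly before the wall-clock limit so that it can write a checkpoint; a minimal sketch is shown below (the application name is a placeholder, and the application itself must know how to checkpoint and restart):

<pre>
#!/bin/bash
#SBATCH --time=240:00:00        # request up to the wall-clock limit of the partition
#SBATCH --signal=USR1@600       # have SLURM signal the job steps about 10 minutes before the limit

# my_application (placeholder) should catch SIGUSR1, write a checkpoint/restart file,
# and exit cleanly so the run can be resubmitted and continued
srun ./my_application
</pre>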
== User Support ==
Users are strongly encouraged to read this Wiki carefully before submitting ticket(s) for help. In particular, the sections on compiling and running parallel programs, and the section on the SLURM batch queueing system, will give you the essential knowledge needed to use the CUNY HPCC systems. We have strived to maintain the most uniform user applications environment possible across the Center's systems to ease the transfer of applications and run scripts among them.


The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY community in parallel programming techniques, HPC computing architecture, and the essentials of using our systems. Please follow our mailings on the subject and feel free to inquire about such courses. We regularly schedule training visits and classes at the various CUNY campuses; please let us know if such a training visit is of interest. In the past, topics have included an overview of parallel programming, GPU programming and architecture, using the evolutionary biology software at the HPC Center, the SLURM queueing system at the CUNY HPC Center, mixed GPU-MPI and OpenMP programming, etc. Staff have also presented guest lectures in formal classes throughout the CUNY campuses.


If you have problems accessing your account and cannot login to the ticketing service, please send an email to:

   [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu] 


== Warnings and modes of operation ==


1. hpchelp@csi.cuny.edu is for questions and account-help communication '''only''' and does not accept tickets, unless the ticketing system is not operational. For tickets please use the ticketing system mentioned above. This ensures that the person on staff with the most appropriate skill set and job-related responsibility will respond to your questions. During the business week you should expect a response within 48 hours, quite often even a same-day response. During the weekend you may not get any response.


2. '''E-mails to hpchelp@csi.cuny.edu must have a valid CUNY e-mail as the reply address.''' Messages originating from public mailers (google, hotmail, etc.) are filtered out.


the Wiki), and feel free to offer suggestions for improved service. We hope and expect your experience in using our systems will be predictably good and productive.
== User Manual ==
The old version of the user manual provides PBS batch scripts, not SLURM scripts, as examples. CUNY-HPCC currently uses the SLURM scheduler, so users must rely only on the updated brief SLURM manual distributed with new accounts, or ask CUNY-HPCC for a copy of it.
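As a quick orientation for users converting old PBS scripts, the most common directives and commands map roughly as follows (a sketch only; consult the SLURM manual distributed with new accounts for the full syntax):

<pre>
# PBS (old)                         # SLURM (current)
#PBS -N myjob                       #SBATCH --job-name=myjob
#PBS -l nodes=2:ppn=16              #SBATCH --nodes=2 --ntasks-per-node=16
#PBS -l walltime=24:00:00           #SBATCH --time=24:00:00
#PBS -q <queue_name>                #SBATCH --partition=<partition_name>
qsub job.sh                         sbatch job.sh
qstat -u $USER                      squeue -u $USER
</pre>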

Latest revision as of 01:30, 10 December 2025

Hpcc-panorama3.png

The City University of New York (CUNY) High Performance Computing Center (HPCC) is located on the campus of the College of Staten Island, 2800 Victory Boulevard, Staten Island, New York 10314. HPCC goals are to:

  • Support the scientific computing needs of CUNY faculty, their collaborators at other universities, and their public and private sector partners, and CUNY students and research staff.
  • Create opportunities for the CUNY research community to develop new partnerships with the government and private sectors; and
  • Leverage the HPC Center capabilities to acquire additional research resources for its faculty and graduate students in existing and major new programs.

Organization of systems and data storage (architecture)

All user data and project data are kept on Data Storage and Management System (DSMS) which is mounted only on login node(s) of all servers. Consequently, no jobs can be started directly from DSMS storage. Instead, all jobs must be submitted from a separate (fast but small) /scratch file system mounted on all computational nodes and on all login nodes. As the name suggests, the /scratch file system is not home directory for accounts nor can be used for long term data preservation. Users must use "staging" procedure described below to ensure preservation of their data, codes and parameters files. The figure below is a schematic of the environment.

Upon registering with HPCC every user will get 2 directories:

/scratch/<userid> – this is temporary workspace on the HPC systems
/global/u/<userid> – space for “home directory”, i.e., storage space on the DSMS for program, scripts, and data
• In some instances a user will also have use of disk space on the DSMS in /cunyZone/home/<projectid> (IRods).
HPCC structure.png

The /global/u/<userid> directory has quota (see below for details) while the /scratch/<userid> do not have. However the /scratch space is cleaned up following the rules described below. There are no guarantees of any kind that files in /scratch will be preserved during the hardware crashes or cleaning up. Access to all HPCC resources is provided by bastion host called 'chizen. The Data Transfer Node called Cea allows file transfer from/to remote sites directly to/from /global/u/<userid> or to/from /scratch/<userid>

HPC systems

The HPC Center operates variety of architectures in order to support complex and demanding workflows. All computational resources of different types are united into single hybrid cluster called Arrow. The latter deploys symmetric multiprocessor (also referred as SMP) nodes with and without GPU, distributed shared memory (NUMA) node, fat (large memory) nodes and advanced SMP nodes with multiple GPU. The number of GPU per node varies between 2 and 8 as well as employed GPU interface and GPU family. Thus the basic GPU nodes hold two Tesla K20m (plugged through PCIe interface) while the most advanced ones support eight Ampere A100 GPU connected via SXM interface.

Overview of Computational architectures:

SMP servers have several processors (working under a single operating system) which "share everything". Thus all cpu-cores allocate a common memory block via shared bus or data path. SMP servers support all combinations of memory VS cpu (up to the limits of the particular computer). The SMP servers are commonly used to run sequential or thread parallel (e.g. OpenMP) jobs and they may have or may not have GPU.

Cluster is defined as a single system comprising set of servers interconnected with high performance network. Specific software coordinates programs on and/or across those in order to perform computationally intensive tasks. The most common cluster type is the one that consists of several identical SMP servers connected via fast interconnect. Each SMP member of the cluster is called a node. All nodes run independent copies of the same operating system (OS). Some or all of the nodes may incorporate GPU.

Hybrid clusters combine nodes of different architectures. For instance, the main CUNY-HPCC machine is a hybrid cluster called Arrow. Sixty-two (62) of its nodes are identical GPU-enabled SMP servers, each with 2 K20m GPUs; 3 are SMP servers with extended memory (fat nodes); one is a distributed shared memory node (NUMA, see below); and 2 are fat SMP servers specially designed to support 8 NVIDIA GPUs per node, connected via the SXM interface. In addition, HPCC operates the cluster Herbert, dedicated solely to education.

A distributed shared memory computer is a tightly coupled server in which the memory is physically distributed but logically unified as a single block. The system resembles an SMP, but the possible number of CPU cores and amount of memory are far beyond the limitations of SMP. Because the memory is distributed, access times across the address space are non-uniform; hence this architecture is called Non-Uniform Memory Access (NUMA). Similarly to SMP, NUMA systems are typically used for applications such as data mining and decision support, in which processing can be parceled out to a number of processors that collectively work on common data. HPCC operates the NUMA node of Arrow, named Appel. This node does not have GPUs.
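Because access times are non-uniform, memory placement can affect performance on such a node. One common approach is the numactl tool, assuming it is available on the node; the example below is a sketch only, with my_app as a placeholder:

<pre>
# Inspect the NUMA topology of the node
numactl --hardware

# Interleave allocations across all NUMA domains, which often helps
# memory-bandwidth-bound applications
numactl --interleave=all ./my_app
</pre>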

===Infrastructure systems===

* Master Head Node (MHN/Arrow) is a redundant login node from which all jobs on all servers start. This server is not directly accessible from outside the CSI campus. Note that the main server and its login nodes share the same name, so users can access the Arrow login nodes under the name Arrow or MHN.

* Chizen is a redundant gateway server which provides access to the protected HPCC domain.

* Cea is a file transfer node that allows transferring files between users’ computers and the /scratch space or /global/u/<userid>. Cea is accessible directly (not only via Chizen) but permits only a limited set of shell commands; see the transfer example after this list.
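A typical transfer through Cea looks like the sketch below; <cea-address> is a placeholder, so ask HPCC for the actual hostname of the Data Transfer Node:

<pre>
# Push an input file from a local machine to /scratch on the HPCC systems
scp input.dat <userid>@<cea-address>:/scratch/<userid>/

# Pull results back from the DSMS home directory to the local machine
scp <userid>@<cea-address>:/global/u/<userid>/results.tar.gz .
</pre>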

Table 1 below provides a quick summary of the attributes of each of the sub-clusters of the main HPC Center machine, Arrow.

{| class="wikitable"
|+ Table 1. Sub-clusters of the main HPC Center machine, Arrow
! Sub system !! Tier !! Type !! Type of jobs !! Nodes !! CPU cores !! GPUs !! Mem/node !! Mem/core !! Chip type !! GPU type and interface
|-
| rowspan="4" | Penzias || rowspan="4" | Advanced || rowspan="4" | Hybrid cluster || Sequential & parallel jobs w/wo GPU || 66 || 16 || 2 || 64 GB || 4 GB || SB, EP 2.20 GHz || K20m, PCIe v2
|-
| rowspan="3" | Sequential & parallel jobs (fat nodes) || 1 || 24 || - || 1500 GB || 62 GB || HL, 2.30 GHz || -
|-
| 1 || 36 || - || 768 GB || 21 GB || || -
|-
| 1 || 24 || - || 768 GB || 32 GB || || -
|-
| Appel || || NUMA || Massively parallel, sequential, OpenMP || 1 || 384 || - || 11 TB || 28 GB || IB, 3 GHz || -
|-
| Cryo || || SMP || Sequential and parallel jobs with GPU || 1 || 40 || 8 || 1500 GB || 37 GB || SL, 2.40 GHz || V100 (32 GB), SXM
|-
| rowspan="2" | Blue Moon || rowspan="2" | || rowspan="2" | Hybrid cluster || rowspan="2" | Sequential and parallel jobs w/wo GPU || 24 || 32 || - || 192 GB || 6 GB || SL, 2.10 GHz || -
|-
| 2 || 32 || 2 || || || || V100 (16 GB), PCIe
|-
| Karle || || SMP || Visualization, MATLAB/Mathematica || 1 || 36* || - || 768 GB || 21 GB || HL, 2.30 GHz || -
|-
| Chizen || || Gateway || No jobs allowed || || || || || || ||
|-
| rowspan="2" | CFD Condo || rowspan="2" | Condo || rowspan="2" | SMP || rowspan="2" | Parallel, sequential, OpenMP || 1 || 48 || 2 || 768 GB || || EM, 4.8 GHz || A40, PCIe v4
|-
| 1 || 48 || - || 512 GB || || ER, 4.3 GHz || -
|-
| rowspan="2" | PHYS Condo || rowspan="2" | Condo || rowspan="2" | SMP || rowspan="2" | || 1 || 48 || 2 || 640 GB || || ER, 4 GHz || L40, PCIe v4
|-
| 1 || 48 || - || 512 GB || || ER, 4.3 GHz || -
|-
| rowspan="2" | CHEM Condo || rowspan="2" | Condo || rowspan="2" | SMP || rowspan="2" | || 1 || 48 || 2 || 256 GB || || EM, 2.8 GHz || A30, PCIe v4
|-
| 1 || 128 || 8 || 512 GB || || ER, 2.0 GHz || A100/40, SXM
|-
| ASRC Condo || Condo || SMP || || 1 || 48 || 2 || 256 GB || || ER, 2.8 GHz || A30, PCIe v4
|}

Note: SB = Intel(R) Sandy Bridge, HL = Intel(R) Haswell, IB = Intel(R) Ivy Bridge, SL = Intel(R) Xeon(R) Gold, ER = AMD(R) EPYC Rome, EM = AMD(R) EPYC Milan, EG = AMD(R) EPYC Genoa

==Recovery of operational costs==

CUNY-HPCC is a not-for-profit core research facility at CUNY. Our mission is to support all types of research that require advanced computational resources. CUNY-HPCC operations are NOT directly or indirectly sponsored by CUNY or the College of Staten Island (CSI). Consequently, CUNY-HPCC applies a cost recovery model that recaptures only operational costs, with no profit for HPCC. The recovered costs are calculated from actual documented operational expenses and are break-even for all CUNY users. The methodology is approved by CUNY-RF and is the same methodology used in other CUNY research facilities. The costs are reviewed, and if necessary updated, twice a year. The cost recovery charging scheme is based on the unit-hour, where a unit is either a CPU unit or a GPU unit. The definitions of these units are given in the table below:

{| class="wikitable"
|+ Definitions of unit-hour
! Type of resource !! Unit-hour !! For V100, A30, A40 or L40 !! For A100
|-
| CPU unit || 1 CPU core/hour || -- || --
|-
| GPU unit || (4 CPU cores + 1 GPU thread)/hour || 4 CPU cores + 1 GPU || 4 CPU cores and 1/7 A100
|}

==HPCC access plans==

===a. Minimum access (MAP)===

Minimum access is designed to provide wide support for research activities in any college, to promote collaboration between colleges, to help establish new research projects, and/or to serve as a testbed for new studies. MAP accounts operate under a strict fair-share policy, so the actual waiting time of a job in a queue depends on the resources used by that account in previous cycles. In addition, all jobs have strict time limits, so long jobs must use checkpoints.

The MAP has three tiers:

* A: Basic tier, $5,000 per year. It is designed to support users from colleges with a low level of research activity. The fee covers the infrastructure expenses associated with 1-2 users from these colleges.

* B: Medium tier, $15,000 per year. The fee covers the infrastructure expenses of up to 12 users from these colleges. In addition, every account under the medium tier receives 11,520 free CPU hours and 1,440 free GPU hours upon opening.

* C: Advanced tier, $25,000 per year. The fee covers the infrastructure expenses for all users from these colleges. In addition, every new account in this tier receives 11,520 free CPU hours and 1,440 free GPU hours upon opening.

MAP users are charged per CPU/GPU hour at the low rates of $0.015 per CPU hour and $0.09 per GPU hour.
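In other words, the hourly cost of a MAP job follows directly from these rates:

<math>\text{cost/hour} = N_{\text{cores}} \times \$0.015 + N_{\text{GPU}} \times \$0.09</math>

For example, a job with 16 cores and 1 GPU costs 16 × $0.015 + 1 × $0.09 = $0.33 per hour, as listed in the table below.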

{| class="wikitable"
|+ Cost recovery fees for MAP users
! Job !! CPU cores !! GPUs !! Cost/hour
|-
| 1 core, no GPU || 1 || 0 || $0.015/hour
|-
| 16 cores, no GPU || 16 || 0 || $0.24/hour
|-
| 4 cores + 1 GPU || 4 || 1 || $0.15/hour
|-
| 16 cores + 1 GPU || 16 || 1 || $0.33/hour
|-
| 16 cores + 2 GPU || 16 || 2 || $0.42/hour
|-
| 32 cores + 2 GPU || 32 || 2 || $0.66/hour
|-
| 40 cores + 8 GPU || 40 || 8 || $1.32/hour
|}


===b. Computing on demand (CODP)===

The computing on demand plan (CODP) is open to all users from all CUNY colleges who do not participate in the MAP plan but want to use HPCC resources. CODP accounts operate under a strict fair-share policy, so the actual waiting time of a job in a queue depends on the resources previously used. In addition, all jobs have time limits, so long jobs must use checkpoints. CODP users are charged for CPU and GPU time per hour; the current rates are $0.018 per CPU hour and $0.11 per GPU hour. Unlike MAP, new CODP accounts do not come with free time. Invoices are generated and sent to users (PIs only) at the end of each month. The examples in the following table explain the fee structure:

{| class="wikitable"
|+ Cost recovery fees for the CODP plan
! Job !! CPU cores !! GPUs !! Cost/hour
|-
| 1 core, no GPU || 1 || 0 || $0.018/hour
|-
| 16 cores, no GPU || 16 || 0 || $0.288/hour
|-
| 4 cores + 1 GPU || 4 || 1 || $0.293/hour
|-
| 16 cores + 1 GPU || 16 || 1 || $0.334/hour
|-
| 32 cores + 1 GPU || 32 || 1 || $0.666/hour
|-
| 32 cores + 2 GPU || 32 || 2 || $0.756/hour
|}
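Since charges are based on consumed CPU and GPU hours, users may find it convenient to review their recent usage with SLURM's accounting tool before the monthly invoice arrives. A sketch, assuming job accounting is enabled on the cluster (the start date is illustrative):

<pre>
# List finished jobs with elapsed time and allocated resources; GPUs appear
# in AllocTRES as "gres/gpu=N" when GPU accounting is enabled
sacct -X --starttime=2026-01-01 \
      --format=JobID,JobName%16,Elapsed,AllocCPUS,AllocTRES%40
</pre>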


===c. Leasing node(s) (LNP)===

The leasing node plan allows users to lease node(s) for the duration of a project. The minimum lease time is 30 days (one month), but leases of any length are possible. A 10% discount is given to users whose lease is longer than 90 days; discounts cannot be combined. Unlike MAP and CODP, LNP users do not compete for resources and have full access to the leased resources 24/7.

{| class="wikitable"
|+ Lease node(s) fees for MAP users
! Job (MAP users) !! CPU cores !! GPUs !! Cost/30 days
|-
| 1 core, no GPU || 1 || 0 || NA
|-
| 16 cores, no GPU || 16 || 0 || $172.80
|-
| 32 cores, no GPU || 32 || 0 || $264.96
|-
| 16 cores + 2 GPU || 16 || 2 || $302.40
|-
| 32 cores + 2 GPU || 32 || 2 || $475.20
|-
| 40 cores + 8 GPU || 40 || 8 || $760.00
|-
| 64 cores + 8 GPU || 64 || 8 || $950.40
|}
{| class="wikitable"
|+ Lease node(s) fees for non-MAP users
! Job (non-MAP users) !! CPU cores !! GPUs !! Cost/month
|-
| 1 core, no GPU || 1 || 0 || NA
|-
| 16 cores, no GPU || 16 || 0 || $249.82
|-
| 32 cores, no GPU || 32 || 0 || $497.64
|-
| 16 cores + 1 GPU || 16 || 1 || $443.23
|-
| 32 cores + 2 GPU || 32 || 2 || $886.64
|-
| 40 cores + 8 GPU || 40 || 8 || $1399.68
|}


===d. Condo ownership (COP)===

Condo describes a model in which user(s) own a node/server managed by HPCC. Only full-time faculty can own a condo node. Condo nodes are fully integrated into the HPCC infrastructure. The owners pay only HPCC’s infrastructure support operational fee, which includes only a proportional part of the licenses and materials needed for day-to-day operations. The fees are reviewed twice a year and are currently $0.003 per CPU hour and $0.02 per GPU hour. Condo owners can “borrow” (upon agreement) any node(s) from the condo stack free of charge and can also lease (for a higher fee; see below) their own nodes to non-condo users. The minimum lease time is 30 days. The fees collected from non-condo users offset the payments of the owner.

{| class="wikitable"
|+ Condo owners' costs per year
! Type of condo node !! CPU cores !! GPUs !! Cost/year
|-
| Large hybrid SXM || 128 || 8 || $4518.92
|-
| Small hybrid || 48 || 2 || $1540.54
|-
| Medium compute || 96 || 0 || $2464.86
|-
| Large compute || 128 || 0 || $3286.49
|}


Condo owners can contract their node(s) out to non-condo users. The renting period is unlimited, with a minimum length of 30 days. The table below shows the payments that non-condo users make to the condo owners. These fees are accumulated in the owners’ account(s) and offset the owners’ dues. A 10% discount is applied for leases longer than 90 days.

{| class="wikitable"
|+ Types of nodes and lease fees for condo nodes
! Type of node !! Renter's cost/month !! Long-term (90+ days) cost/month !! CPU/node !! CPU type !! GPU/node !! GPU type !! GPU interface
|-
| Large hybrid || $602.52 || $564.86 || 128 || EPYC, 2.2 GHz || 8 || A100/80 || SXM
|-
| Small hybrid || $205.41 || $192.57 || 48 || EPYC, 2.8 GHz || 2 || A40, A30, L40 || PCIe v4
|-
| Medium non-GPU || $328.65 || $308.11 || 96 || EPYC, 4.11 GHz || 0 || None || NA
|-
| Large non-GPU || $438.20 || $410.81 || 128 || EPYC, 2.0 GHz || 0 || None || NA
|}

==Free time==

In order to help establish a project, all new users from colleges that participate in the MAP plan (tiers B and C only) are entitled to 11,520 free CPU hours and 1,440 free GPU hours. Any additional hours are charged at MAP plan rates. Note that free time is per user account, not per project, so any user can receive free time only once. External collaborators of CUNY are not normally eligible for free time. Please contact the CUNY-HPCC director for further details.

==Support for research grants==

All proposals dated January 1st, 2026 (01/01/26) or later that require computational resources must include a budget for cost recovery fees at CUNY-HPCC. For a project, the PI can choose to:

* lease node(s): a useful option for well-defined projects and those with a high computational component requiring 100% availability of the computational resource;
* use "on-demand" resources: a flexible option, good for experimental projects or exploring new areas of study; the downside is that resources are shared among all users under the fair-share policy, so immediate access to a resource cannot be guaranteed;
* participate in the CONDO tier: the most beneficial option in terms of availability of resources and level of support; it best fits the focused research of group(s) (e.g., materials science).

In all cases, the PI can use the appropriate rates listed above to establish a correct budget for the proposal. The PI should contact the Director of CUNY-HPCC, Dr. Alexander Tzanov (alexander.tzanov@csi.cuny.edu), to discuss the project's computational requirements, including optimal and most economical computational workflows, suitable hardware, shared or owned resources, CUNY-HPCC support options, and any other matter concerning a correct and optimal computational budget for the proposal.

==Partitions and jobs==

The only way to submit job(s) to the HPCC servers is through the SLURM batch system. Every job, regardless of its type (interactive, batch, serial, parallel, etc.), must be submitted via SLURM, which allocates the requested resources on the proper server and starts the job(s) according to a predefined strict fair-share policy. Computational resources (CPU cores, memory, GPUs) are organized in partitions. Users are granted permission to use one or another partition along with the corresponding QOS key; a sample submission script is shown after the list below. The table below shows the limitations of the partitions (in progress).

{| class="wikitable"
|+ Partitions and their limits
! Partition !! Max cores/job !! Max jobs/user !! Total cores/group !! Time limit !! Tier !! GPU types
|-
| partnsf || 128 || 50 || 256 || 240 hours || Advanced || K20m, V100/16, A100/40
|-
| partchem || 128 || 50 || 256 || No limit || Condo || A100/80, A30
|-
| partcfd || 96 || 50 || 96 || No limit || Condo || A40
|-
| partsym || 96 || 50 || 96 || No limit || Condo || A30
|-
| partasrc || 48 || 16 || 16 || No limit || Condo || A30
|-
| partmatlabD || 128 || 50 || 256 || 240 hours || Advanced || V100/16, A100/40
|-
| partmatlabN || 384 || 50 || 384 || 240 hours || Advanced || None
|-
| partphys || 96 || 50 || 96 || No limit || Condo || L40
|}
* partnsf is the main partition, with assigned resources across all sub-servers. Users may submit sequential, thread-parallel, or distributed parallel jobs with or without GPU.
* partchem, partphys, partsym, and partasrc are CONDO partitions.
* partmatlabD allows running MATLAB's Distributed Parallel Server across the main cluster.
* partmatlabN provides access to the large MATLAB node with 384 cores and 11 TB of shared memory. It is useful for running parallel MATLAB jobs with the Parallel Computing Toolbox.
* partdev is dedicated to development. All HPCC users have access to this partition, with assigned resources of one computational node with 16 cores, 64 GB of memory, and 2 GPUs (K20m). This partition has a time limit of 4 hours.
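A minimal submission sketch for the main partition follows; the QOS name, module, GPU count, and application are placeholders to be replaced with the values assigned to the account:

<pre>
#!/bin/bash
#SBATCH --partition=partnsf
#SBATCH --qos=<your-qos>       # use the QOS key assigned to your account
#SBATCH --ntasks=16
#SBATCH --gres=gpu:1           # omit for CPU-only jobs
#SBATCH --time=24:00:00

module load <your-software>    # placeholder for the required modules
srun ./my_mpi_app              # placeholder application
</pre>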

==Hours of Operation==

In order to maximize the use of resources, HPCC applies a “rolling” maintenance scheme across all systems. When downtime is needed, HPCC will notify all users a week or more in advance (unless an emergency situation occurs). Typically, the fourth Tuesday morning of the month, from 8:00 AM to 12:00 PM, is reserved (but not always used) for scheduled maintenance; please plan accordingly. Unplanned maintenance to remedy system-related problems may be scheduled as needed outside the above-mentioned window. Reasonable attempts will be made to inform users running on the affected systems when such needs arise. Users are strongly encouraged to use checkpoints in their jobs; one possible pattern is sketched below.
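A checkpoint-friendly pattern (a sketch only; the application itself must be able to restart from its own checkpoint files, and my_app is a placeholder) is to ask SLURM to signal the job shortly before its time limit and requeue it:

<pre>
#!/bin/bash
#SBATCH --time=04:00:00
#SBATCH --signal=B:USR1@300    # send SIGUSR1 to the batch shell 300 s early
#SBATCH --requeue

# On SIGUSR1, requeue this job; the restarted job resumes from the
# application's latest checkpoint
trap 'scontrol requeue $SLURM_JOB_ID' USR1

./my_app &                     # application writes periodic checkpoints
wait                           # returns early when the signal arrives
</pre>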

==User Support==

Users are strongly encouraged to read this Wiki carefully before submitting ticket(s) for help. In particular, the sections on compiling and running parallel programs, and the section on the SLURM batch queueing system will give you the essential knowledge needed to use the CUNY HPCC systems. We have strived to maintain the most uniform user applications environment possible across the Center's systems to ease the transfer of applications and run scripts among them.

The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY community in parallel programming techniques, HPC computing architecture, and the essentials of using our systems. Please follow our mailings on the subject and feel free to inquire about such courses. We regularly schedule training visits and classes at the various CUNY campuses; please let us know if such a training visit is of interest. In the past, topics have included an overview of parallel programming, GPU programming and architecture, using the evolutionary biology software at the HPC Center, the SLURM queueing system at the CUNY HPC Center, mixed GPU-MPI and OpenMP programming, etc. Staff members have also presented guest lectures at formal classes throughout the CUNY campuses.

If you have problems accessing your account and cannot log in to the ticketing service, please send an email to:

 hpchelp@csi.cuny.edu 

==Warnings and modes of operation==

1. hpchelp@csi.cuny.edu is for questions and account-help communication only and does not accept tickets unless the ticketing system is not operational. For tickets, please use the ticketing system mentioned above; this ensures that the staff member with the most appropriate skill set and job-related responsibility will respond to your questions. During the business week you should expect a response within 48 hours, quite often the same day. During the weekend you may not get any response.

2. E-mails to hpchelp@csi.cuny.edu must have a valid CUNY e-mail as the reply address. Messages originating from public mailers (Gmail, Hotmail, etc.) are filtered out.

3. Do not send questions directly to individual CUNY HPC Center staff members. These will be returned to the sender with a polite request to submit a ticket or e-mail the Helpline. This applies to replies to initial questions as well.

The CUNY HPC Center staff members are focused on providing high-quality support to the user community, but compared to other HPC centers of similar size, our staff is extremely lean. Please make full use of the tools we have provided (especially this Wiki), and feel free to offer suggestions for improved service. We hope and expect that your experience in using our systems will be predictably good and productive.

==User Manual==

The old version of the user manual provides PBS, not SLURM, batch scripts as examples. CUNY-HPCC currently uses the SLURM scheduler, so users must rely only on the updated brief SLURM manual distributed with new accounts, or ask CUNY-HPCC for a copy of it.
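For orientation, the sketch below shows a minimal SLURM batch script with rough PBS equivalents of each directive in the comments (the values are illustrative and my_app is a placeholder). It is submitted with sbatch rather than PBS's qsub:

<pre>
#!/bin/bash
#SBATCH --job-name=myjob       # PBS: #PBS -N myjob
#SBATCH --ntasks=1             # PBS: #PBS -l select=1:ncpus=1
#SBATCH --mem=4G               # PBS: #PBS -l mem=4gb
#SBATCH --time=01:00:00        # PBS: #PBS -l walltime=01:00:00

# SLURM starts in the submission directory by default
# (PBS required: cd $PBS_O_WORKDIR)
./my_app
</pre>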