[[File:CUNY-HPCC-HEADER-LOGO.jpg|center|frameless|789x789px]]
__TOC__
[[File:Hpcc-panorama3.png|center|frameless|1000x1000px]]


The City University of New York (CUNY) High Performance Computing Center (HPCC) is located on the campus of the College of Staten Island, 2800 Victory Boulevard, Staten Island, New York 10314. CUNY-HPCC supports computational research and computationally intensive courses at the graduate and undergraduate levels offered at all CUNY colleges, in fields such as Computer Science, Engineering, Bioinformatics, Chemistry, Physics, Materials Science, Genetics, Genomics, Proteomics, Computational Biology, Finance, Economics, Linguistics, Anthropology, Psychology, Neuroscience, Computational Fluid Mechanics, and many others. CUNY-HPCC also provides educational outreach to local schools and supports undergraduates who work in the research programs of the host institution (e.g., the NSF REU program). The primary mission of CUNY-HPCC is:


* To enable advanced research and scholarship at CUNY colleges by providing faculty, staff, and students with access to high-performance computing, adequate storage, and visualization resources;
* To enable advanced education and cross-disciplinary education by providing flexible and scalable resources;
* To provide CUNY faculty and their collaborators at other universities, CUNY research staff, and CUNY graduate and undergraduate students with expertise in scientific computing, parallel scientific computing (HPC), software development, advanced data analytics, data-driven science and simulation science, visualization, advanced database engineering, and more;
* To leverage the HPC Center's capabilities to acquire additional research resources for CUNY faculty, researchers, and students in existing and major new programs;
* To create opportunities for the CUNY research community to win grants from national funding agencies and to develop new partnerships with the government and private sectors.
CUNY-HPCC is a voting member of the '''Coalition for Academic Scientific Computation (CASC)'''. Originally formed in the 1990s as a small group of the heads of national supercomputing centers, CASC has expanded to more than 100 member institutions representing many of the nation’s most forward-thinking universities and computing centers. CASC includes the leadership of large academic computing centers such as TACC and the San Diego Supercomputer Center, and has recently attracted a greater diversity of smaller institutions such as non-R1s, HBCUs, HSIs, TCUs, and others. CASC’s mission is to be “''dedicated to advocating for the use of the most advanced computing technology to accelerate scientific discovery for national competitiveness, global security, and economic success, as well as develop a diverse and well-prepared 21st century workforce.”''


== CUNY-HPCC and the Democratization of Research ==
In the last few years the cloud computing model (also called computing-on-demand) has promised that anyone, anywhere, can leverage almost unlimited computing resources. This kind of computing was supposed to “democratize” research and level the playing field. Unfortunately, that is not entirely true (for now): even though cloud computing is available to nearly anyone from nearly anywhere, it remains expensive compared to local resources and lacks the flexibility and accessibility of local support tailored to education and research that a local research HPC center offers. Indeed, every computational environment has limitations and a learning curve, and students and faculty coming from a variety of backgrounds may feel overwhelmed and helpless without close, personalized local support. In this sense a carefully designed, user-centered, academically focused HPC center has transformative capability for rapidly evolving computational and data-driven research, creates opportunities for broad collaboration and convergence research activities, and thus provides real democratization of research.


== Pedagogical Value of CUNY-HPCC ==
CUNY-HPCC supports a wide variety of graduate and undergraduate classes from all CUNY colleges, the CUNY Graduate Center, and the CUNY Institutes. It is important to note that the impact of CUNY-HPCC goes beyond the STEM disciplines. CUNY-HPCC:
[[File:NAnoBio6.jpg|right|frameless|Dr. Alexander Tzanov, director of CUNY-HPCC, speaks at a NanoBioNYC workshop]]
* '''Allows analysis of datasets that are too large to work with easily on personal devices, or that cannot easily be shared or disseminated.''' These datasets do not come only from STEM fields (they also arise in finance, economics, linguistics, etc.). Facilitating these analyses gives students the opportunity to interact in real time with increasingly large amounts of data, enabling them to gain important skills and experience.
* '''Provides advanced computational architectures with modern processors, large memory, fast storage, and accelerators.''' HPCC is capable of supporting any type of hands-on experience for undergraduate or graduate students and fully supports “study-by-research” that requires computational support.
* '''Provides a collaborative space for entire courses.''' The multi-user capabilities and environment of HPCC facilitate collaborative work among learners and allow more complex, closer-to-reality learning problems.
* The large computational and visualization capabilities of CUNY-HPCC '''enable analytical techniques too large for personal devices.''' Students can run unattended parameter sweeps or workflows in order to explore a problem in detail. Such self-exploration has a proven positive effect on learning.
* '''Use of CUNY-HPCC resources provides students with prerequisite skills and knowledge''' they may need later when they move to larger HPC environments. For instance, the CUNY-HPCC workflow and environment are very close to those of other research centers and ACCESS resources.
* '''CUNY-HPCC participates in educational programs for graduate students, such as the NSF-funded NanoBioNYC Ph.D. traineeship program at CUNY.''' This program is focused on developing groundbreaking bio-nano science solutions to address urgent human and planetary health issues and on preparing students to become tomorrow’s leaders in diverse STEM careers.


* '''CUNY-HPCC participates in NSF-funded REU (Research Experience for Undergraduates) activities. The current REU grant is led by faculty from the Computer Science departments at Hunter College and the College of Staten Island.'''


== Research Value of CUNY-HPCC ==
High performance computing (HPC) is the backbone of modern simulation research, data-driven research, and the whole spectrum of AI and Machine Learning research. More than 80% of modern research in STEM disciplines and engineering requires large computational platforms of different architectures, with capabilities far beyond those of a desktop, laptop, or even an advanced workstation. In the context of AI and ML research and AI-enabled applications, HPC is mandatory and vital because advances in these areas are not possible without modern tensor-enabled accelerators capable of supporting accelerator virtualization, direct access to main memory without the CPU, unification of accelerators both inside a node and across nodes, and a fast supporting file system for both kernel-based and Neural Network (NN) based methodologies.


CUNY-HPCC is '''focused on research''' and is designed to support any research at CUNY. HPCC operates a '''professionally maintained and fully integrated research computing''' environment with various compute architectures (see below), covering the requirements of any research workflow in any discipline or combination of disciplines. The integration of various architectures and a highly qualified professional staff allows HPCC to fully support AI, ML, data-driven, or simulation-driven research projects, or any research workflow that combines these research paradigms.


==Available Computational Architectures and Storage Systems==
[[File:HPCC_structure.png|center|frameless|900x900px]]


The HPC Center operates a variety of architectures in order to support complex and demanding workflows. The deployed systems include distributed memory computers (also referred to as “clusters”), symmetric multiprocessors (also referred to as SMP), and shared memory machines (also referred to as NUMA machines).


=== Computational Systems ===
The table at the end of this section summarizes the available computational resources, organized in 3 tiers, at HPCC.


'''SMP''' servers have several processors (working under a single operating system) which "share everything": all CPU cores access a common memory block via a shared bus or data path. SMP servers support any combination of memory and CPU (up to the limits of the particular computer). SMP servers are commonly used for sequential or thread-parallel (e.g., OpenMP) jobs, and they may or may not have GPUs. Currently, HPCC operates one detached SMP server, '''Karle''', used for visualization and interactive work (see below).


A '''cluster''' is a single system comprising a set of servers interconnected with a high performance network. Specific software coordinates programs on and across those servers in order to perform computationally intensive tasks. The most common cluster type consists of several identical SMP servers connected via a fast interconnect. Each SMP member of the cluster is called a '''node'''. All nodes run independent copies of the same operating system (OS). Some or all of the nodes may incorporate GPUs.


A '''distributed shared memory''' computer is a tightly coupled server in which the memory is physically distributed but logically unified as a single block. The system resembles an SMP, but the possible number of CPU cores and amount of memory are far beyond the limitations of SMP. Because the memory is distributed, access times across the address space are non-uniform; this architecture is therefore called Non-Uniform Memory Access (NUMA). Similarly to SMP, NUMA systems are typically used for applications such as data mining and decision support, in which processing can be parceled out to a number of processors that collectively work on common data. HPCC operates one NUMA node in Arrow, named '''Appel'''; this node does not have GPUs.
'''Hybrid clusters''' combine nodes of different architectures. The main CUNY-HPCC machine is a hybrid cluster called '''Arrow'''. Sixty-two (62) of its nodes are identical GPU-enabled SMP servers, each with 2 K20m GPUs; 3 are SMP nodes with extended memory (fat nodes); one node is a distributed shared memory (NUMA) node; and 2 are fat SMP servers specifically designed to support 8 NVIDIA GPUs per node, connected via the SXM interface. In addition, HPCC operates the cluster '''Herbert''', dedicated only to education.


The main computational resource at CUNY-HPCC is the federated hybrid (CPU+GPU) cluster '''Arrow'''. The first sub-cluster (Penzias) serves the basic tier and has 62 compute nodes; each node has 16 cores, 64 GB of memory, and two Tesla K20m GPUs, and the interconnect is 56 Gbps. The second sub-cluster (Blue Moon) comprises nodes with 32 cores and 192 GB of memory each, fat nodes, and the NUMA node with 384 cores and 11 TB of unified memory. The Blue Moon sub-cluster has 32 NVIDIA GPUs from the Ampere and Volta families, and the interconnect across the cluster is 100 Gbps InfiniBand. The third sub-cluster (Condo) combines a variety of nodes, all with AMD EPYC processors; the cores per node vary from 48 to 128. The Condo nodes hold a total of 16 NVIDIA GPUs (A30, A40, L40, and A100). The interconnect is 200 Gbps for all nodes and 400 Gbps for dense GPU nodes with the SXM GPU interface.


In addition to compute resources, CUNY-HPCC operates a visualization server called '''Karle'''. That server has 72 cores, 768 GB of memory, and one NVIDIA L40 GPU. The server mounts the main file system directly and thus allows in-situ visualization in addition to traditional post-processing visualization, visual analytics, and interactive jobs. The server is also used for interactive MATLAB/Mathematica sessions.


The table in this section summarizes the CUNY-HPCC resources. Note that users do not have direct access to, and cannot submit jobs directly on, any of the described sub-clusters. Instead, all jobs are automatically distributed to the sub-clusters by the job management software running on the master head node (MHN).
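For readers new to these architectures, the sketch below shows how the two parallelism models described above map onto a SLURM request on a cluster such as Arrow. It is a minimal illustration only: the partition name follows the production partition described later on this page, and the application binary is a placeholder.

<pre>
#!/bin/bash
# Hypothetical hybrid (MPI + OpenMP) submission script -- a sketch only.
# --ntasks-per-node sets MPI ranks (distributed-memory parallelism across nodes);
# --cpus-per-task sets the OpenMP threads each rank uses inside one SMP node.
#SBATCH --job-name=hybrid_demo
#SBATCH --partition=production    # routing partition, see the SLURM section below
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=2       # 2 MPI ranks per node
#SBATCH --cpus-per-task=8         # 8 OpenMP threads per rank
#SBATCH --time=04:00:00

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
srun ./hybrid_app                 # hybrid_app is a placeholder binary
</pre>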


===Storage Systems===
The '''/scratch''' file system mentioned above is a small, fast file system mounted on all nodes. It resides on solid state drives and has a capacity of '''256 GB'''. Note that files on '''/scratch''' are '''not backed up and are not protected.''' This file system does not have quotas, so users can submit large jobs. The file system is automatically purged when either '''1.''' the load of the file system exceeds 70%, or '''2.''' file(s) have not been accessed for <u>60 days, whichever comes first</u>. The partition '''/global/u''' in the main HPFS file system holds user home directories. HPFS is a hybrid file system combining SSD and HDD (solid state and hard disks) with capabilities for dynamic relocation of files; its capacity is 2 petabytes (PB). This file system was purchased under NSF grant OAC-2215760 and is mounted on all nodes.
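The staging workflow implied by these rules is sketched below. The paths come from this page; the project and file names are placeholders, and the shell variable <code>$USER</code> is assumed to match your HPCC userid.

<pre>
# 1. Copy input data from the home directory (HPFS) to /scratch:
mkdir -p /scratch/$USER/myproject
cp -r /global/u/$USER/myproject/input /scratch/$USER/myproject/

# 2. Run the job from /scratch (see the SLURM sections below), then
# 3. copy results back before the automatic /scratch purge removes them:
cp -r /scratch/$USER/myproject/results /global/u/$USER/myproject/
</pre>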


===Support Infrastructure Systems===
These systems provide access to HPCC resources, job submission, backup, and file transfers. In addition, there are other servers supporting the HPCC back end; these back-end computers are not visible to or accessible by users.


* The Master Head Node ('''MHN''') is a redundant login node from which all jobs on all servers start. This server is not directly accessible from outside the CSI campus. Note that the main server and its login nodes share the same name, Arrow, so users may refer to the Arrow login nodes as either Arrow or MHN.


* '''Chizen''' is a redundant gateway server which provides access to the protected HPCC domain.
* '''Cea''' is a secure file transfer node which allows transfer of files between users' computers and the /scratch space or /global/u/<userid>. '''Cea''' is accessible directly (not only via '''Chizen'''), but allows only a very limited set of shell commands.


[[File:Resources HPCC.jpg|frameless|900px|Overview of HPCC computational and visualization resources]]

'''Table 1''' below provides a quick summary of the attributes of each of the sub-clusters of the main HPC Center system, Arrow.
{| class="wikitable"
|+Table 1. Sub-clusters of Arrow and support servers
!Master Head Node
!Sub System
!Tier
!Type
!Type of Jobs
!Nodes
!CPU Cores
!GPUs
!Mem/node
!Mem/core
!Chip Type
!GPU Type and Interface
|-
| rowspan="17" |'''<big>Arrow</big>'''
| rowspan="4" |Penzias
| rowspan="10" |Advanced
| rowspan="4" |Hybrid Cluster
|Sequential & Parallel jobs w/wo GPU
|66
|16
|2
|64 GB
|4 GB
|SB, EP 2.20 GHz
|K20m GPU, PCIe v2
|-
| rowspan="3" |Sequential & Parallel jobs
| rowspan="3" |1
|24
| -
|1500 GB
|62 GB
| rowspan="3" |HL, 2.30 GHz
| -
|-
|36
| -
|768 GB
|21 GB
| -
|-
|24
| -
|768 GB
|32 GB
| -
|-
|Appel
|NUMA
|Massive Parallel, sequential, OpenMP
|1
|384
| -
|11 TB
|28 GB
|IB, 3 GHz
| -
|-
|Cryo
|SMP
|Sequential and Parallel jobs, with GPU
|1
|40
|8
|1500 GB
|37 GB
|SL, 2.40 GHz
|V100 (32GB) GPU, SXM
|-
| rowspan="2" |Blue Moon
| rowspan="2" |Hybrid Cluster
| rowspan="2" |Sequential and Parallel jobs w/wo GPU
|32
|32
| -
| rowspan="2" |192 GB
| rowspan="2" |6 GB
| rowspan="2" |SL, 2.10 GHz
| -
|-
|2
|32
|2
|V100 (16GB) GPU, PCIe
|-
|Karle
|SMP
|Visualization, MATLAB/Mathematica
|1
|36*
| -
|768 GB
|21 GB
|HL, 2.30 GHz
| -
|-
|Chizen
|Gateway
|No jobs allowed
| colspan="7" | -
|-
| rowspan="2" |CFD
| rowspan="2" |Condo
| rowspan="2" |SMP
| rowspan="7" |Parallel, Seq, OpenMP
|1
|48
|2
|768 GB
|
|EM, 4.8 GHz
|A40, PCIe, v4
|-
|1
|48
| -
|512 GB
|
|ER, 4.3 GHz
| -
|-
| rowspan="2" |PHYS
| rowspan="2" |Condo
| rowspan="2" |SMP
|1
|48
|2
|640 GB
|
|ER, 4 GHz
|L40, PCIe, v4
|-
|1
|48
| -
|512 GB
|
|ER, 4.3 GHz
| -
|-
| rowspan="2" |CHEM
| rowspan="2" |Condo
| rowspan="2" |SMP
|1
|48
|2
|256 GB
|
|EM, 2.8 GHz
|A30, PCIe, v4
|-
|1
|128
|8
|512 GB
|
|ER, 2.0 GHz
|A100/40, SXM
|-
|ASRC
|Condo
|SMP
|1
|48
|2
|256 GB
|
|ER, 2.8 GHz
|A30, PCIe, v4
|}
Note: SB = Intel(R) Sandy Bridge, HL = Intel(R) Haswell, IB = Intel(R) Ivy Bridge, SL = Intel(R) Xeon(R) Gold, ER = AMD(R) EPYC Rome, EM = AMD(R) EPYC Milan, EG = AMD(R) EPYC Genoa. * denotes hyper-threaded cores.

==Overview of HPCC's Research Computing Infrastructure (RCI)==
[[File:HPCC_structure_12_24.png|right|frameless|682x682px|Organization of HPCC resources]]
The above resources are organized into the research computing infrastructure depicted in the figure on the right. In order to support various types of research projects, CUNY-HPCC provides a variety of computational architectures. All computational resources are organized in '''3''' tiers – the '''''Condominium Tier (CT)''''', the '''''Free (Basic) Tier (FT)''''', and the '''''Advanced Tier (AT)''''' – plus visualization ('''''Viz'''''); see the next sections for details. All nodes in all tiers are attached to the central file system '''HPFS''', which provides '''/scratch''' and the Global Storage ('''GS'''), '''/global/u'''. The table below shows the tiers and their use. Note that * denotes hyper-threading and ** denotes outdated GPUs that are not suitable for large-scale research but are useful for education.
{| class="wikitable mw-collapsible"
|+Tiers and their use
!Tier
!# Cores
!# GPU
!Use
|-
|Condo
|740
|16
|Heavy instruction-parallel and distributed-parallel or hybrid (OMP + MPI) calculations which can be GPU accelerated; massive GPU-enabled simulations requiring a matrix of modern GPUs; advanced AI and ML; Big Data in all disciplines; advanced CFD, Genomics, Finance, Econometrics, Neuroscience, etc. Virtualization of the GPUs is possible, including unification of GPUs across nodes over a 400 Gbps network.
|-
|Advanced
|1336
|24
|Large instruction-parallel and distributed-parallel or hybrid (OMP + MPI) calculations which can be GPU accelerated. Big data jobs in all disciplines, e.g. Genomics, Proteomics, Genetics, AI, CFD, Finance, etc.
|-
|Basic
|992
|124**
|Parametric studies, sequential jobs, small to medium distributed-parallel jobs, small hybrid (OMP+MPI) jobs up to 16 cores per node, education, and hands-on experience including GPU programming principles.
|-
|Viz
|36/72*
|NA
|In-situ (real time) and post-processing visualization.
|}


===Organization of Computational and Visualization Resources===
The computational resources in the 3 tiers mentioned above are combined within the Arrow hybrid cluster. In addition, CUNY-HPCC operates a specialized visualization server which shares the file system with all nodes; this allows <u>in-situ visualization</u> of running simulations. The per-node details are given in Table 1, and the tier assignments are shown in the "Tiers and their use" table above.

====Condominium Tier====
The Condominium tier (called '''condo''') organizes resources purchased and owned by faculty but '''fully maintained and supported by CUNY-HPCC'''. The available condo resources are listed in Table 1 above (the nodes whose tier is Condo). All condo nodes have large shared memory and advanced GPUs, with 24 to 40 GB of memory per GPU board. Participation in this tier is '''strictly voluntary'''. A particular condo server can be co-owned and shared between researchers, groups, or departments across CUNY, and HPCC will ensure fair-share use among the registered users of any node. Thus, several faculty/research groups can combine funds to purchase and subsequently share the hardware (a node or several nodes). All nodes in this tier must meet certain hardware specifications, including a full warranty for the lifetime of the node(s), in order to be accepted. Faculty, researchers, and staff who want to participate in the condominium should send a request by e-mail to hpchelp@csi.cuny.edu and consult the CUNY-HPCC director before making a purchase. HPCC will not accept outdated servers or servers that do not match HPCC infrastructure and security requirements and policies. The condominium tier takes advantage of professional support from HPCC staff and:

*'''Makes possible vertical and horizontal collaboration between research groups across CUNY.'''
*'''Expands computational resources across CUNY by making it possible to combine small grants and "left over" funds to obtain significant computational power without paying for supporting infrastructure or local storage. It is a wise way to get the most cores for the money.'''
*'''Promotes large-scale collaborative projects and helps faculty conduct successful interdisciplinary and/or complex projects, leading to increased success in grant applications.'''
*'''Allows precise planning of research workflow(s), because condo jobs do not compete with other users' jobs for resources.'''

The owners (and their groups) of condo resources have '''guaranteed''' access to their servers at any time. Jobs are submitted via the MHN node; to access their own node, condo users must specify their own private partition and use the corresponding QOS qualifier. In addition, there are partitions which operate over two or more nodes owned by condo members. Condo tier members benefit from professional support from HPCC staff, in addition to professional maintenance and security hardening of the servers. Upon approval from the node owner, any idle node(s) can be used by other condo member(s); for instance, a condo member can borrow (for an agreed time) a node with more advanced GPUs than those installed on his/her own node(s). The owners of the equipment are responsible for any repair costs for their node(s). Other users may rent any of the condo resources listed in Table 1 from the owners of the server/node. Upon agreement with the owners, CUNY-HPCC may harvest unused cycles and provide other members of the CUNY community with CPU time.

In sum, the benefits of the condo tier are:

*'''5-year guaranteed lifecycle''' – condo resources will be fully supported by HPCC for 5 years. If an additional support contract with HPCC is established, servers can be supported for up to '''7''' years.
*'''Access to more CPU cores than owned/purchased''', by sharing resources with other condo members.
*'''The ability to borrow and use a condo server with higher capabilities than the one owned, e.g., borrow a server with A100 GPUs while owning a server with A30 GPUs.'''
*'''Advanced and dedicated support''' – HPCC staff will install, upgrade, secure, and maintain condo hardware throughout its lifecycle. Unlimited tickets.
*'''Access to the main server resources in addition to the condo resources.'''
*'''Access to the HPC visual analytics server.'''
*'''No time or job limits when using the owned server.'''
*'''The option to lease the owned server to other CUNY researchers upon mutual agreement.'''

The responsibilities of condo members are to:

*'''Agree to share their resources''' (when idle or partially available) with other members of the condo.
*'''Include money for computing in their research and instrumentation grants''' to cover the operational expenses (non-tax-levy expenses) of the HPCC.

====Advanced Tier====
The advanced tier holds the resources used for more advanced or large-scale simulation and visual computing research: distributed-parallel codes and/or instruction-parallel (OMP) jobs with or without GPU, very large memory jobs on the 3 fat nodes, and GPU-enabled jobs. Note that this tier does not support NVSwitch technology, so its GPUs (all Volta class) cannot be shared across nodes. The resources for this tier are detailed in Table 1 above (tier Advanced); the tier provides nodes with Volta-class GPUs with 16 GB or 32 GB of memory on board.

====Basic Tier====
The basic tier provides resources for sequential and moderately sized parallel jobs; its resources are summarized in Table 1 above. OpenMP jobs can run only within the scope of a single node, while distributed-parallel (MPI) jobs can run across the cluster. This tier also supports the MATLAB Parallel Server, which can run across nodes. Users can also run GPU-enabled jobs, since this tier has 124 Tesla K20m GPUs; please note that these GPUs are no longer supported by NVIDIA, and many applications may not support them either.

====Visualization====
HPCC supports a specialized visualization server. That server shares the main file system with all nodes and thus allows in-situ visualization as well as post-processing visualization. The parameters of the server (Karle) are given in Table 1 above.

== Recovery of operational costs ==
CUNY-HPCC is a not-for-profit core research facility at CUNY. Our mission is to support all types of research that require advanced computational resources. CUNY-HPCC operations are not for profit and are NOT directly or indirectly sponsored by CUNY or the College of Staten Island (CSI). Consequently, CUNY-HPCC applies a cost recovery model that recaptures only '''<u>operational costs, with no profit for HPCC</u>'''. The recovered costs are calculated from actual documented operational expenses and are break-even for all CUNY users. The methodology used is approved by CUNY-RF and is the same methodology used in other CUNY research facilities. The costs are reviewed, and updated accordingly, twice a year. The cost recovery charging scheme is based on the '''<u>unit-hour</u>'''; a unit can be either a CPU unit or a GPU unit. The definitions are given in the table below:
{| class="wikitable mw-collapsible"
|+Definitions of unit-hour
!Type of resource
!Unit-hour
!For V100, A30, A40 or L40
!For A100
|-
|CPU unit
|1 cpu core/hour
| --
| --
|-
|GPU unit
|(4 cpu cores + 1 GPU thread )/hour
|4 cpu cores + 1 GPU
|4 cpu cores and 1/7 A100
|}


=== HPCC access plans ===
'''a. Minimum access plan (MAP)'''


Minimum access is designed to provide wide support for research activities in any college, to promote collaboration between colleges, to help establish new research projects, and/or to serve as a testbed for new studies. MAP accounts operate under a strict fair share policy, so the actual waiting time for a job in a queue depends on the resources used by that account in previous cycles. In addition, all jobs have strict time limits; therefore, long jobs must use checkpoints.

The MAP has 3 tiers:

* A: The Basic tier fee is $5,000 per year. It is designed to support users from colleges with a low level of research activity. The fee covers the infrastructure expenses associated with 1-2 users from these colleges.
* B: The Medium tier fee is $15,000 per year. The fee covers the infrastructure expenses of up to 12 users from these colleges. In addition, every account under the Medium tier receives 11,520 free CPU hours and 1,440 free GPU hours upon opening.
* C: The Advanced tier fee is $25,000 per year. The fee covers the infrastructure expenses for all users from these colleges. In addition, every new account from this tier receives 11,520 free CPU hours and 1,440 free GPU hours upon opening.

MAP users are charged per CPU/GPU hour at the low rate of '''<u>$0.015 per CPU hour and $0.09 per GPU hour</u>.'''
{| class="wikitable mw-collapsible"
|+Cost recovery fees for MAP users
|Job
|Cpu cores
|GPU
|Cost/hour
|-
|1 core no GPU
|1
|0
|$0.015/hour
|-
|16 cores no GPU
|16
|0
|$0.24/hour
|-
| 4 cores + 1 GPU
|4
|1
|$0.15/hour
|-
|16 cores + 1 GPU
|16
|1  
|$0.33/hour
|-
|16 cores + 2 GPU
|16
|2  
|$0.42/hour
|-
|32 cores + 2 GPU
|32
|2  
|$0.66/hour
|-
|40 cores + 8 GPU
|40
|8
|$1.32/hour
|}
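As a worked example under the MAP rates above (assuming the rates are current): a job that occupies 16 CPU cores and 1 GPU for 24 wall-clock hours is billed

<math>24 \times 0.33 = 7.92</math>

dollars, i.e., 24 hours at the $0.33/hour rate from the table.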


'''b. Computing on demand plan (CODP)'''

The computing on demand plan (CODP) is open to all users from all CUNY colleges that do not participate in the MAP plan but want to use HPCC resources. CODP accounts operate under a strict fair share policy, so the actual waiting time for a job in a queue depends on the resources previously used. In addition, all jobs have time limits, so long jobs must use checkpoints. CODP users are charged for CPU and GPU time per hour; the current rates are '''$0.018 per CPU hour and $0.11 per GPU hour.''' Unlike MAP, new CODP accounts do not come with free time. Invoices are generated and sent to users (PIs only) at the end of each month. The examples in the following table explain the fee structure:
{| class="wikitable mw-collapsible"
|+Cost recovery fees for the CODP plan
|Job
|Cpu cores
|GPU
|Cost/hour
|-
|1 core no GPU
|1
|0
|$0.018/hour
|-
|16 cores no GPU
|16
|0
|$0.288/hour
|-
|4 cores + 1 GPU
|4
|1
|$0.293/hour
|-
|16 cores + 1 GPU
|16
|1  
|$0.334/hour
|-
|32 cores + 1 GPU
|32
|1  
|$0.666/hour
|-
|32 cores + 2 GPU
|32
|2
|$0.756/hour
|}




'''c. Leasing node(s) plan (LNP)'''

The leasing node plan allows users to lease node(s) for the duration of a project. The minimum lease time is 30 days (one month), but leases of any length are possible. A 10% discount is given to users whose lease is longer than 90 days; discounts cannot be combined. Unlike MAP and CODP, LNP users do not compete for resources and have full access to the rented resources 24/7.
{| class="wikitable mw-collapsible"
|+Lease node(s) fees for MAP users
|Job (MAP users)
|Cpu cores
|GPU
|Cost/30 days
|-
|1 core no GPU
|1
|0
|NA
|-
|16 cores no GPU
|16
|0
|$172.80
|-
|32 cores no GPU
|32
|0
|$264.96
|-
|16 cores + 2 GPU
|16
|2
|$302.40
|-
|32 cores + 2 GPU
|32
|2
|$475.20
|-
|40 cores + 8 GPU
|40
|8
|$760.0
|-
|64 cores + 8 GPU
|64
|8
|$950.40
|}


{| class="wikitable mw-collapsible"
|+Fees for leasing node(s) for <span style="color:red;">non</span>-MAP users
|Job (non-MAP users)
|Cpu cores
|GPU
|Cost/month
|-
|1 core no GPU
|1
|0
|NA
|-
|16 cores no GPU
|16
|0
|$249.82
|-
|32 cores no GPU
|32
|0
|$497.64
|-
|16 cores + 2 GPU
|16
|2
|$443.23
|-
|32 cores + 2 GPU
|32
|2
|$886.64
|-
|40 cores + 8 GPU
|40
|8
|$1399.68
|}


'''d. Condo ownership plan (COP)'''

Condo describes a model in which user(s) own a node/server managed by HPCC. Only full-time faculty can own a condo node. Condo nodes are fully integrated into the HPCC infrastructure. The owners pay only HPCC's infrastructure support operational fee, which includes only a proportional part of the licenses and materials needed for day-to-day operations. The fees are reviewed twice a year and are currently $0.003 per CPU hour and $0.02 per GPU hour. Condo owners can "borrow" (upon agreement) free of charge any node(s) from the condo stack and can also lease (for a higher fee – see below) their own nodes to non-condo users. The minimum lease time is 30 days. The fees collected from non-condo users offset the payments of the owner.
{| class="wikitable mw-collapsible"
|+Condo owners costs per year
|Type of condo node
|Cpu cores
|GPU
|Cost/year
|-
|Large hybrid SXM
|128
|8
|$4518.92
|-
|Small hybrid
|48
|2
|$1540.54
|-
|Medium compute
|96
|0
|$2464.86
|-
|Large compute
|128
|0
|$3286.49
|}
 


Condo owners can lease their node(s) to other, non-condo users. The leasing period is unlimited, with a minimum length of 30 days. The table below shows the payments non-condo users make to the condo owners; these fees accumulate in the owner's account and offset the owner's dues. A 10% discount is applied for leases longer than 90 days.
{| class="wikitable mw-collapsible"
|+Type of nodes and lease fees for condo nodes
!Type of node
!Renters cost/month
!Long term (90+ days) rent cost/month
!CPU/node
!CPU type
!GPU/node
!GPU type
!GPU interface
|-
|Large Hybrid
|$602.52
|$564.86
|128
|EPYC, 2.2 GHz
|8
|A100/80
|SXM
|-
|Small Hybrid
|$205.41
|$192.57
|48
|EPYC, 2.8 GHz
|2
|A40, A30, L40
|PCIe v4
|-
|Medium Non GPU
|$328.65
|$308.11
|96
|EPYC, 4.11GHz
|0
|None
|NA
|-
|Large Non GPU
|$438.20
|$410.81
|128
|EPYC, 2.0 GHz
|0
|None
|NA
|}


=== Free time ===
In order to help establish a project, all new users from colleges that participate in the MAP plan (tiers B and C only) are entitled to '''11,520 free CPU hours and 1,440 free GPU hours.''' Any additional hours are charged at the MAP plan rates. Note that '''<u>free time is per user account, not per project</u>''', so any user can receive free time only once. External collaborators of CUNY are not normally eligible for free time. '''<u>Please contact the CUNY-HPCC director for further details.</u>'''

== Support for research grants ==
'''<u>All proposals dated January 1st, 2026 (<span style="color:red;">01/01/26</span>) or later</u>''' that require computational resources '''<u>must include a budget for cost recovery fees at CUNY-HPCC.</u>''' For a project, the PI can choose to:

* Lease node(s). This is a useful option for well-defined projects and for projects with a high computational component requiring 100% availability of the computational resources.
* Use "on-demand" resources. This is a flexible option, good for experimental projects or for exploring new areas of study. The downside is that resources are shared among all users under the fair-share policy, so immediate access to a resource cannot be guaranteed.
* Participate in the CONDO tier. This is the most beneficial option in terms of availability of resources and level of support, and it best fits the focused research of a group or groups (e.g., materials science).

In all cases the PI can use the appropriate rates listed above to establish a correct budget for the proposal. The PI should '''<u>contact the Director of CUNY-HPCC, Dr. Alexander Tzanov</u>''' (alexander.tzanov@csi.cuny.edu), to discuss the project's computational requirements, including optimal and most economical computational workflows, suitable hardware, shared or owned resources, CUNY-HPCC support options, and any other matter concerning a correct and optimal computational budget for the proposal.
 
==Quick Start to HPCC==

===Access to HPCC Resources===
Access to all HPCC resources is the same from inside and outside the CSI campus. The HPCC resources are placed in a secure domain accessible only via the bastion host '''''chizen'''''. This server is redundant and runs a very limited shell. It is dedicated only to providing access and cannot be used to store any data; all data placed on chizen will be deleted automatically. However, chizen allows secure tunneling of data (without saving the data on chizen itself) between user machines and the HPFS and /scratch file systems; please check the section "File transfer" below for details. For data transfer it is preferable to use the File Transfer Node (FTN) '''Cea''', which allows direct secure file transfer from/to '''<font face="courier">/global/u/<font color="red"><userid></font></font>''' or from/to '''<font face="courier">/scratch/<font color="red"><userid></font></font>'''.
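A minimal sketch of connecting and moving files is shown below. The short host names '''chizen''', '''arrow''', and '''cea''' are used here as placeholders for the fully qualified addresses supplied with your account information, and <code><userid></code> stands for your HPCC user name.

<pre>
# Log in through the bastion host, then hop to the Arrow login node (MHN):
ssh <userid>@chizen
ssh arrow

# Or, from your workstation, copy files through the data transfer node Cea
# directly into your scratch or home space:
scp myinput.dat <userid>@cea:/scratch/<userid>/
rsync -av results/ <userid>@cea:/global/u/<userid>/results/
</pre>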
===Accounts===
Every user must register with HPCC and obtain an account; please see the section "Administrative information" for further details on how to register. Upon registering with HPCC, every user will get 2 directories:

:• '''<font face="courier">/scratch/<font color="red"><userid></font></font>''' – temporary workspace on the HPC systems;
:• '''<font face="courier">/global/u/<font color="red"><userid></font></font>''' – the "home directory", i.e., storage space on the HPFS for programs, scripts, and data;
:• In some instances a user can also have use of disk space on iRODS, in '''<font face="courier">/cunyZone/home/<font color="red"><projectid></font></font>'''.

The '''/global/u/<userid>''' directory has a quota (see the "Administrative information" section for details).

===Jobs===
All jobs must be submitted for execution from the master head node (MHN), regardless of the tier. Users '''do not need to address a particular resource/node directly''', since jobs are automatically placed in the proper tier and on the proper node based on the job submission policy and the available resources. All jobs are subject to a '''strict fair share policy''' which gives all users an equal share of the resources; there are '''no "privileged" queues of any kind.''' In brief, all jobs at HPCC must:

*'''>>''' Start from the user's directory on the '''scratch''' file system ('''/scratch/<userid>'''). Jobs cannot be started from users' home directories ('''/global/u/<userid>''').
*'''>>''' Use the SLURM job submission system (job scheduler). All job submission scripts written for other job schedulers (e.g., PBS Pro) <u>must be converted to SLURM syntax.</u>
*'''>>''' Be submitted from the Master Head Node (MHN), in all tiers. In the near future the submission process will be improved further with the launch of an HPC job submission portal.

All valuable user data must be kept in the user's home directory, '''/global/u/<userid>'''. This file system is mounted only on the login node. In contrast, '''/scratch''' is mounted on all nodes, and thus all jobs must be submitted from '''/scratch'''. It is important to remember that '''/scratch''' is not the main storage for user accounts (home directories) but <u>temporary storage used for job submission only.</u> Thus:

#Data in '''/scratch''' are not protected, preserved, or backed up and can be lost at any time. CUNY-HPCC has no obligation to preserve user data in '''/scratch'''.
#'''/scratch''' undergoes regular and automatic file purging when either or both of the following conditions are satisfied:
##the load of the '''/scratch''' file system reaches '''70+%''';
##there are inactive file(s) older than '''60 days'''.
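As an illustration only (the partition name and paths come from this page; the module and executable names are placeholders that will differ for your application), a simple batch job could be prepared in your /scratch directory and submitted as sketched below.

<pre>
#!/bin/bash
#SBATCH --job-name=demo
#SBATCH --partition=production   # routing partition; SLURM picks the sub-partition
#SBATCH --nodes=1
#SBATCH --ntasks=16
#SBATCH --time=24:00:00          # stay within the partition time limits below
#SBATCH --output=demo_%j.out

cd /scratch/$USER/demo           # jobs must start from /scratch, not /global/u
srun ./my_app input.dat          # my_app is a placeholder executable
</pre>

Save the script (for example as <code>demo.slurm</code>), submit it from your /scratch directory on the MHN with <code>sbatch demo.slurm</code>, and monitor it with <code>squeue -u $USER</code>.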
===SLURM Partitions===
The only way to submit job(s) to the CUNY-HPCC servers is through the SLURM batch system. Any job, regardless of its type (interactive, batch, serial, parallel, etc.), must be submitted via SLURM, which allocates the requested resources on the proper server and starts the job(s) according to a predefined strict fair-share policy. Computational resources (CPU cores, memory, GPUs) are organized in '''partitions'''. The main partition is called '''production'''; this is a routing partition which distributes jobs among several sub-partitions depending on the job's requirements. Thus, a serial job submitted to '''production''' will land in the '''partsequential''' partition. Users are granted permission to use particular partitions and the corresponding QOS qualifiers. No PBS Pro scripts should ever be used; all existing PBS scripts must be converted to SLURM before use. The '''basic and advanced tiers use only the public partitions''' shown in the table below, while the '''condo''' tier operates over '''7 private partitions''' (see the section "Running jobs" for more details). The table below shows the limits of the partitions.
{| class="wikitable mw-collapsible"
|+Partitions and their limits
!Partition
!Max cores/job
!Total cores/group
!Time limits
!Tier
!GPU types
|-
|production
|128
|256
|240 Hours
|Advanced
|K20m, V100/16, A100/40
|-
|partmatlabD
|128
|128
|240 Hours
|Advanced
|V100/16, A100/40
|-
|partmatlabN
|384
|384
|240 Hours
|Advanced
|None
|-
|partdev
|16
|16
|4 Hours
|
|K20m
|-
|partchem
|128
|256
|No limit
|Condo
|A100/80, A30
|-
|partcfd
|96
|96
|No limit
|Condo
|A40
|-
|partphys
|96
|96
|No limit
|Condo
|L40
|-
|partsym
|96
|96
|No limit
|Condo
|A30
|-
|partasrc
|48
|16
|No limit
|Condo
|A30
|}
* '''production''' is the main public partition, with assigned resources across all sub-clusters (except the Math and Cryo nodes). It is a routing partition, so jobs are placed in the proper sub-partition automatically. Users may submit sequential, thread-parallel, or distributed-parallel jobs, with or without GPU.
* '''partedu''' is only for education. Its assigned resources are on the educational server Herbert. It is accessible only to students (graduate and/or undergraduate) and their professors who are registered for a class supported by HPCC; access is limited to the duration of the class.
* '''partmatlabD''' allows running MATLAB's Distributed Parallel Server across the main cluster. Note that Parallel Toolbox programs can also be submitted via the production partition, but only as thread-parallel jobs.
* '''partmatlabN''' provides access to the large NUMA node with 384 cores and 11 TB of shared memory. It is useful for running parallel MATLAB jobs with the Parallel Toolbox.
* '''partdev''' is dedicated to development. All HPCC users have access to this partition, which has the assigned resources of one computational node with 16 cores, 64 GB of memory, and 2 GPUs (K20m). This partition has a time limit of '''4 hours'''.
* '''partchem''', '''partcfd''', '''partphys''', '''partsym''', and '''partasrc''' are private condo partitions tied to the corresponding condo nodes (see Table 1).

'''NB!''' The '''condo''' tier operates over '''7 private partitions.''' For more details see the section "Running jobs".
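For quick tests and debugging, a short interactive session on the development partition can be requested as sketched below. The <code>--gres=gpu:1</code> syntax assumes the GPUs are exposed as a standard SLURM generic resource; adjust to the actual configuration.

<pre>
# Interactive shell on the development partition (4-hour limit, see above):
srun --partition=partdev --ntasks=1 --cpus-per-task=4 --gres=gpu:1 \
     --time=01:00:00 --pty /bin/bash
</pre>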


== Hours of Operation ==
The HPCC operates 24/7, with the goal of being online at least 250 days per year. In order to maximize the use of resources, HPCC applies a "rolling" maintenance scheme across all systems. When downtime is needed, HPCC will notify all users a week or more in advance (unless an emergency occurs). Typically, the fourth Tuesday morning of the month, from 8:00 AM to 12:00 PM, is reserved (but not always used) for scheduled maintenance; please plan accordingly. Unplanned maintenance to remedy system-related problems may be scheduled as needed outside of this window. Reasonable attempts will be made to inform users running on the affected systems when these needs arise. Users are strongly encouraged to use checkpoints in their jobs.
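Because maintenance windows can interrupt long runs, jobs should be written so they can restart from saved state. The sketch below is hedged: the checkpoint/restart flags belong to a hypothetical application, and many scientific codes provide their own equivalents.

<pre>
#!/bin/bash
#SBATCH --partition=production
#SBATCH --time=24:00:00
#SBATCH --requeue                 # allow SLURM to requeue the job after a drain

cd /scratch/$USER/longrun
if [ -f checkpoint.chk ]; then
    srun ./my_app --restart checkpoint.chk     # placeholder restart flag
else
    srun ./my_app --checkpoint-every 30m       # placeholder checkpoint flag
fi
</pre>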


== User Support ==
Users are strongly encouraged to read this Wiki carefully before submitting ticket(s) for help. In particular, the sections on compiling and running parallel programs and the section on the SLURM batch queueing system will give you the essential knowledge needed to use the CUNY-HPCC systems. We have strived to maintain the most uniform user applications environment possible across the Center's systems, to ease the transfer of applications and run scripts among them.


The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY community on parallel programming techniques, HPC computing architecture, and the essentials of using our systems. Please follow our mailings on the subject and feel free to inquire about such courses. We regularly schedule training visits and classes at the various CUNY campuses; please let us know if such a training visit is of interest. In the past, topics have included an overview of parallel programming, GPU programming and architecture, using the evolutionary biology software at the HPC Center, the SLURM queueing system at CUNY-HPCC, and mixed GPU-MPI and OpenMP programming. Staff have also presented guest lectures in formal classes throughout the CUNY campuses.


If you have problems accessing your account and cannot log in to the ticketing service, please send an email to:

   [mailto:hpchelp@csi.cuny.edu hpchelp@csi.cuny.edu] 


== Warnings and modes of operation ==
 




1. '''hpchelp@csi.cuny.edu is for questions and account-help communication only''' and does not accept tickets unless the ticketing system is not operational. For tickets, please use the ticketing system mentioned above; this ensures that the person on staff with the most appropriate skill set and job-related responsibility will respond to your questions. During the business week you should expect a response within 48 hours, quite often even a same-day response. During the weekend you may not get any response.


2. '''E-mails to hpchelp@csi.cuny.edu must have a valid CUNY e-mail as the reply address.''' Messages originating from public mail providers (Gmail, Hotmail, etc.) are filtered out.


3. '''Do not send questions to individual CUNY-HPCC staff members directly.''' These will be returned to the sender with a polite request to submit a ticket or email the Helpline. This applies to replies to initial questions as well.


The CUNY-HPCC staff members are focused on providing high quality support to the user community, but compared to other academic research computing centers of similar size in the country, we operate with about 90% less personnel. Because '''our staff is extremely lean''', please make full use of the tools that we have provided (especially this Wiki), and feel free to offer suggestions for improved service. We hope and expect that your experience in using our systems will be predictably good and productive.
== User Manual ==
The old version of the user manual provides PBS, not SLURM, batch scripts as examples. CUNY-HPCC currently uses the SLURM scheduler, so users must rely on the updated brief SLURM manual distributed with new accounts, or ask CUNY-HPCC for a copy.
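Until the updated manual is in hand, the rough correspondence below between common PBS and SLURM commands and directives may help when converting old scripts. It is a general guide only; consult the SLURM documentation for full syntax.

<pre>
# PBS                          # SLURM equivalent
qsub script.pbs                sbatch script.slurm
qstat -u $USER                 squeue -u $USER
qdel <jobid>                   scancel <jobid>
#PBS -N name                   #SBATCH --job-name=name
#PBS -l walltime=24:00:00      #SBATCH --time=24:00:00
#PBS -l nodes=2:ppn=16         #SBATCH --nodes=2 --ntasks-per-node=16
#PBS -q production             #SBATCH --partition=production
</pre>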

Latest revision as of 01:30, 10 December 2025

Hpcc-panorama3.png

The City University of New York (CUNY) High Performance Computing Center (HPCC) is located on the campus of the College of Staten Island, 2800 Victory Boulevard, Staten Island, New York 10314. HPCC goals are to:

  • Support the scientific computing needs of CUNY faculty, their collaborators at other universities, and their public and private sector partners, and CUNY students and research staff.
  • Create opportunities for the CUNY research community to develop new partnerships with the government and private sectors; and
  • Leverage the HPC Center capabilities to acquire additional research resources for its faculty and graduate students in existing and major new programs.

Organization of systems and data storage (architecture)

All user data and project data are kept on Data Storage and Management System (DSMS) which is mounted only on login node(s) of all servers. Consequently, no jobs can be started directly from DSMS storage. Instead, all jobs must be submitted from a separate (fast but small) /scratch file system mounted on all computational nodes and on all login nodes. As the name suggests, the /scratch file system is not home directory for accounts nor can be used for long term data preservation. Users must use "staging" procedure described below to ensure preservation of their data, codes and parameters files. The figure below is a schematic of the environment.

Upon registering with HPCC every user will get 2 directories:

/scratch/<userid> – this is temporary workspace on the HPC systems
/global/u/<userid> – space for “home directory”, i.e., storage space on the DSMS for program, scripts, and data
• In some instances a user will also have use of disk space on the DSMS in /cunyZone/home/<projectid> (IRods).
Figure: schematic of the HPCC systems and storage environment (HPCC structure.png)

The /global/u/<userid> directory has a quota (see below for details), while /scratch/<userid> does not. However, the /scratch space is cleaned up following the rules described below, and there are no guarantees of any kind that files in /scratch will survive hardware crashes or clean-up runs. Access to all HPCC resources is provided by the bastion host Chizen. The Data Transfer Node Cea allows file transfer from/to remote sites directly to/from /global/u/<userid> or /scratch/<userid>.
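A minimal sketch of the staging workflow, assuming that $USER expands to your <userid> and using placeholder project and file names (myproject, input.dat, results/):

  # Stage input data from the DSMS home directory to /scratch (run on a login node)
  mkdir -p /scratch/$USER/myproject
  cp /global/u/$USER/myproject/input.dat /scratch/$USER/myproject/

  # ... submit and run the job from /scratch/$USER/myproject (see "Partitions and jobs" below) ...

  # After the job finishes, copy the results back to the DSMS for safekeeping
  cp -r /scratch/$USER/myproject/results /global/u/$USER/myproject/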

HPC systems

The HPC Center operates a variety of architectures in order to support complex and demanding workflows. All computational resources of different types are united into a single hybrid cluster called Arrow. The latter deploys symmetric multiprocessor (SMP) nodes with and without GPUs, a distributed shared memory (NUMA) node, fat (large-memory) nodes, and advanced SMP nodes with multiple GPUs. The number of GPUs per node varies between 2 and 8, as do the GPU interface and GPU family: the basic GPU nodes hold two Tesla K20m cards (attached via the PCIe interface), while the most advanced ones support eight Ampere A100 GPUs connected via the SXM interface.

Overview of Computational architectures:

SMP servers have several processors (working under a single operating system) which "share everything": all CPU cores access a common memory block via a shared bus or data path. SMP servers support any combination of memory vs. CPU (up to the limits of the particular computer). SMP servers are commonly used to run sequential or thread-parallel (e.g., OpenMP) jobs, and they may or may not have GPUs.

A cluster is a single system comprising a set of servers interconnected with a high-performance network. Specific software coordinates programs on and/or across these servers in order to perform computationally intensive tasks. The most common cluster type consists of several identical SMP servers connected via a fast interconnect. Each SMP member of the cluster is called a node. All nodes run independent copies of the same operating system (OS). Some or all of the nodes may incorporate GPUs.

Hybrid clusters combine nodes of different architectures. For instance, the main CUNY-HPCC machine is a hybrid cluster called Arrow. Sixty-two (62) of its nodes are identical GPU-enabled SMP servers, each with 2 K20m GPUs; 3 are SMP servers with extended memory (fat nodes); one node is a distributed shared memory (NUMA) node (see below); and 2 are fat SMP servers specially designed to support 8 NVIDIA GPUs per node, connected via the SXM interface. In addition, HPCC operates the cluster Herbert, dedicated only to education.

A distributed shared memory computer is a tightly coupled server in which the memory is physically distributed but logically unified as a single block. The system resembles an SMP, but the possible number of CPU cores and amount of memory are far beyond the limitations of an SMP. Because the memory is distributed, access times across the address space are non-uniform; thus, this architecture is called Non-Uniform Memory Access (NUMA) architecture. Similarly to SMP, NUMA systems are typically used for applications such as data mining and decision support systems, in which processing can be parceled out to a number of processors that collectively work on common data. HPCC operates the NUMA node of Arrow, named Appel. This node does not have GPUs.
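As a minimal illustration of how these architectures map onto SLURM resource requests (generic SLURM options only; the executables are placeholders, and partition choices are covered under "Partitions and jobs" below), the two fragments differ in whether all cores share one node's memory or are spread across nodes:

  # Thread-parallel (OpenMP) fragment: all cores on one SMP or NUMA node share memory
  #SBATCH --nodes=1
  #SBATCH --cpus-per-task=16
  export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
  srun ./omp_program                 # placeholder executable

  # Distributed-memory (MPI) fragment: tasks spread across several cluster nodes
  #SBATCH --nodes=4
  #SBATCH --ntasks-per-node=16
  srun ./mpi_program                 # placeholder executable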

Infrastructure systems:

o Master Head Node (MHN/Arrow) is a redundant login node from which all jobs on all servers start. This server is not directly accessible from outside the CSI campus. Note that the main server and its login nodes share the same name, Arrow; thus users can access the Arrow login nodes using the name Arrow or MHN.

o Chizen is a redundant gateway server which provides access to the protected HPCC domain.

o Cea is a file transfer node allowing transfer of files between users’ computers and the /scratch space or /global/u/<userid>. Cea is accessible directly (not only via Chizen), but allows only a limited set of shell commands (see the access sketch below).
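A minimal access sketch; the fully qualified host names below are assumed forms shown only for illustration, so use the exact host names provided with your account:

  # Log in through the bastion host Chizen, then continue to the Arrow login node (MHN)
  ssh <userid>@chizen.csi.cuny.edu    # placeholder host name
  ssh arrow                           # from inside the protected HPCC domain

  # Transfer files directly through the Data Transfer Node Cea
  scp input.dat <userid>@cea.csi.cuny.edu:/scratch/<userid>/myproject/
  scp <userid>@cea.csi.cuny.edu:/global/u/<userid>/results.tar.gz .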

Table 1 below provides a quick summary of the attributes of each of the sub-clusters of the main HPC Center system, Arrow.

Master Head Node | Sub System | Tier | Type | Type of Jobs | Nodes | CPU Cores | GPUs | Mem/node | Mem/core | Chip Type | GPU Type and Interface
Arrow | Penzias | Advanced | Hybrid Cluster | Sequential & Parallel jobs w/wo GPU | 66 | 16 | 2 | 64 GB | 4 GB | SB, EP 2.20 GHz | K20m GPU, PCIe v2
 | | | | Sequential & Parallel jobs | 1 | 24 | - | 1500 GB | 62 GB | HL, 2.30 GHz | -
 | | | | | | 36 | - | 768 GB | 21 GB | | -
 | | | | | | 24 | - | 768 GB | 32 GB | | -
 | Appel | | NUMA | Massive Parallel, sequential, OpenMP | 1 | 384 | - | 11 TB | 28 GB | IB, 3 GHz | -
 | Cryo | | SMP | Sequential and Parallel jobs, with GPU | 1 | 40 | 8 | 1500 GB | 37 GB | SL, 2.40 GHz | V100 (32GB) GPU, SXM
 | Blue Moon | | Hybrid Cluster | Sequential and Parallel jobs w/wo GPU | 24 | 32 | - | 192 GB | 6 GB | SL, 2.10 GHz | -
 | | | | | 2 | 32 | 2 | | | | V100 (16GB) GPU, PCIe
 | Karle | | SMP | Visualization, MATLAB/Mathematica | 1 | 36* | - | 768 GB | 21 GB | HL, 2.30 GHz | -
 | Chizen | | Gateway | No jobs allowed | | | | | | | -
 | CFD | Condo | SMP | Parallel, Seq, OpenMP | 1 | 48 | 2 | 768 GB | | EM, 4.8 GHz | A40, PCIe, v4
 | | | | | 1 | 48 | - | 512 GB | | ER, 4.3 GHz | -
 | PHYS | Condo | SMP | | 1 | 48 | 2 | 640 GB | | ER, 4 GHz | L40, PCIe, v4
 | | | | | 1 | 48 | - | 512 GB | | ER, 4.3 GHz | -
 | CHEM | Condo | SMP | | 1 | 48 | 2 | 256 GB | | EM, 2.8 GHz | A30, PCIe, v4
 | | | | | 1 | 128 | 8 | 512 GB | | ER, 2.0 GHz | A100/40, SXM
 | ASRC | Condo | SMP | | 1 | 48 | 2 | 256 GB | | ER, 2.8 GHz | A30, PCIe, v4
(Blank cells carry over the grouping from the row above or are not applicable.)

Note: SB = Intel(R) Sandy Bridge, HL = Intel(R) Haswell, IB = Intel(R) Ivy Bridge, SL = Intel(R) Xeon(R) Gold, ER = AMD(R) EPYC Rome, EM = AMD(R) EPYC Milan, EG = AMD(R) EPYC Genoa

Recovery of operational costs

CUNY-HPCC is a not-for-profit core research facility at CUNY. Our mission is to support all types of research that require advanced computational resources. CUNY-HPCC operations are not for profit and are NOT directly or indirectly sponsored by CUNY or the College of Staten Island (CSI). Consequently, CUNY-HPCC applies a cost recovery model that recaptures only operational costs, with no profit for HPCC. The recovered costs are calculated from actual documented operational expenses and are break-even for all CUNY users. The methodology follows the CUNY-RF approved methodology used in other CUNY research facilities. The costs are reviewed and updated twice a year. The cost recovery charging schema is based on the unit-hour; the unit can be either a CPU unit or a GPU unit. The definitions are given in the table below:

Definitions of unit-hour
Type of resource | Unit-hour | For V100, A30, A40 or L40 | For A100
CPU unit | 1 CPU core / hour | -- | --
GPU unit | (4 CPU cores + 1 GPU thread) / hour | 4 CPU cores + 1 GPU | 4 CPU cores and 1/7 of an A100

HPCC access plans

a.     Minimum access (MAP):

Minimum access is designed to provide wide support for research activities in any college, to promote collaboration between colleges, to help establish new research projects, and/or to serve as a testbed for new studies. MAP accounts operate under a strict fair-share policy, so the actual waiting time for a job in a queue depends on the resources used by that account in previous cycles. In addition, all jobs have strict time limitations; therefore, long jobs must use checkpoints.

The MAP has 3 tiers:

·     A: The basic tier fee is $5,000 per year. It is designed to provide support for users from colleges with a low level of research activity. The fee covers infrastructure expenses associated with 1-2 users from such colleges.

·     B: The medium tier fee is $15,000 per year. The fee covers infrastructure expenses for up to 12 users from these colleges. In addition, every account under the medium tier gets 11,520 free CPU hours and 1,440 free GPU hours upon opening.

·     C: The advanced tier fee is $25,000 per year. The fee covers infrastructure expenses for all users from these colleges. In addition, every new account in this tier gets 11,520 free CPU hours and 1,440 free GPU hours upon opening.

MAP users are charged per CPU-hour and GPU-hour at the low rates of $0.015 per CPU hour and $0.09 per GPU hour (a worked example follows the table below).

Cost recovery fees for MAP users
Job | CPU cores | GPUs | Cost/hour
1 core, no GPU | 1 | 0 | $0.015/hour
16 cores, no GPU | 16 | 0 | $0.24/hour
4 cores + 1 GPU | 4 | 1 | $0.15/hour
16 cores + 1 GPU | 16 | 1 | $0.33/hour
16 cores + 2 GPU | 16 | 2 | $0.42/hour
32 cores + 2 GPU | 32 | 2 | $0.66/hour
40 cores + 8 GPU | 40 | 8 | $1.32/hour
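Each row of the table above corresponds to cores × $0.015/hour plus GPUs × $0.09/hour. As an illustration only (not an official billing tool), the same arithmetic as a one-line shell calculation with example inputs:

  # Estimated MAP charge: (cores x $0.015/hr + GPUs x $0.09/hr) x wall-clock hours
  awk -v cores=16 -v gpus=1 -v hours=10 \
      'BEGIN { printf "Estimated cost: $%.2f\n", (cores*0.015 + gpus*0.09) * hours }'
  # Prints: Estimated cost: $3.30   (16 cores + 1 GPU at $0.33/hour for 10 hours)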


b.     Computing on demand (CODP)

The computing on demand plan (CODP) is open to all users from all CUNY colleges that do not participate in the MAP plan but want to use the HPCC resources. CODP accounts operate under a strict fair-share policy, so the actual waiting time for a job in a queue depends on the resources previously used. In addition, all jobs have time limitations, so long jobs must use checkpoints. CODP users are charged for CPU and GPU time per hour; the current rates are $0.018 per CPU hour and $0.11 per GPU hour. Unlike MAP, new CODP accounts do not come with free time. Invoices are generated and sent to users (PI only) at the end of each month. The examples in the following table illustrate the fee structure:

Cost recovery fees for CODP plan
Job | CPU cores | GPUs | Cost/hour
1 core, no GPU | 1 | 0 | $0.018/hour
16 cores, no GPU | 16 | 0 | $0.288/hour
4 cores + 1 GPU | 4 | 1 | $0.293/hour
16 cores + 1 GPU | 16 | 1 | $0.334/hour
32 cores + 1 GPU | 32 | 1 | $0.666/hour
32 cores + 2 GPU | 32 | 2 | $0.756/hour


c. Leasing node(s) (LNP)

The leasing node plan allows users to lease node(s) for the duration of a project. The minimum lease time is 30 days (one month), but leases of any length are possible. A 10% discount is given to users whose lease is longer than 90 days; discounts cannot be combined. Unlike MAP and CODP, LNP users do not compete for resources and have full access to the leased resources 24/7.

Lease node(s) fees for MAP users
Job (MAP users) | CPU cores | GPUs | Cost/30 days
1 core, no GPU | 1 | 0 | NA
16 cores, no GPU | 16 | 0 | $172.80
32 cores, no GPU | 32 | 0 | $264.96
16 cores + 2 GPU | 16 | 2 | $302.40
32 cores + 2 GPU | 32 | 2 | $475.20
40 cores + 8 GPU | 40 | 8 | $760.0
64 cores + 8 GPU | 64 | 8 | $950.40
Lease node(s) fees for non-MAP users
Job (non-MAP users) | CPU cores | GPUs | Cost/month
1 core, no GPU | 1 | 0 | NA
16 cores, no GPU | 16 | 0 | $249.82
32 cores, no GPU | 32 | 0 | $497.64
16 cores + 1 GPU | 16 | 1 | $443.23
32 cores + 2 GPU | 32 | 2 | $886.64
40 cores + 8 GPU | 40 | 8 | $1399.68


d.     Condo Ownership (COP)

Condo describes a model in which user(s) own a node/server managed by HPCC. Only full-time faculty can own a condo node. Condo nodes are fully integrated into the HPCC infrastructure. The owners pay only HPCC’s infrastructure support operational fee, which covers a proportional part of the licenses and materials needed for day-to-day operations. The fees are reviewed twice a year and currently are $0.003 per CPU hour and $0.02 per GPU hour. Condo owners can “borrow” (upon agreement) any node(s) from the condo stack free of charge, and can also lease (for a higher fee, see below) their own nodes to non-condo users. The minimum lease time is 30 days. The fees collected from non-condo users offset the owner’s payments.

Condo owners' costs per year
Type of condo node | CPU cores | GPUs | Cost/year
Large hybrid SXM | 128 | 8 | $4518.92
Small hybrid | 48 | 2 | $1540.54
Medium compute | 96 | 0 | $2464.86
Large compute | 128 | 0 | $3286.49


Condo owners can lease their node(s) to other, non-condo users. The leasing period is unlimited, with a minimum length of 30 days. The table below shows the payments that non-condo users make to the condo owners. These fees are credited to the owner’s account(s) and offset the owner’s obligations. A 10% discount is applied for leases longer than 90 days.

Type of nodes and lease fees for condo nodes
Type of node | Renter's cost/month | Long-term (90+ days) cost/month | CPU/node | CPU type | GPU/node | GPU type | GPU interface
Large Hybrid | $602.52 | $564.86 | 128 | EPYC, 2.2 GHz | 8 | A100/80 | SXM
Small Hybrid | $205.41 | $192.57 | 48 | EPYC, 2.8 GHz | 2 | A40, A30, L40 | PCIe v4
Medium Non GPU | $328.65 | $308.11 | 96 | EPYC, 4.11 GHz | 0 | None | NA
Large Non GPU | $438.20 | $410.81 | 128 | EPYC, 2.0 GHz | 0 | None | NA

Free time

In order to help establish a project, all new users from colleges that participate in the MAP plan (tiers B and C only) are entitled to 11,520 free CPU hours and 1,440 free GPU hours. Any additional hours are charged at MAP plan rates. Note that free time is per user account, not per project, so any user can receive free time only once. External collaborators of CUNY are not normally eligible for free time. Please contact the CUNY-HPCC director for further details.

Support for research grants

All proposals dated January 1, 2026 (01/01/26) or later that require computational resources must include a budget for cost recovery fees at CUNY-HPCC. For a project, the PI can choose to:

  • lease node(s). This is a useful option for well-defined projects and those with a large computational component requiring 100% availability of the computational resource.
  • use "on-demand" resources. This is a flexible option well suited to experimental projects or to exploring new areas of study. The drawback is that resources are shared among all users under the fair-share policy, so immediate access to a resource cannot be guaranteed.
  • participate in the CONDO tier. This is the most beneficial option in terms of resource availability and level of support; it best fits the focused research of group(s) (e.g., materials science).

In all cases the PI can use the appropriate rates listed above to establish a correct budget for the proposal. The PI should contact the Director of CUNY-HPCC, Dr. Alexander Tzanov (alexander.tzanov@csi.cuny.edu), to discuss the project's computational requirements, including optimal and most economical computational workflows, suitable hardware, shared versus owned resources, CUNY-HPCC support options, and any other matter concerning a correct and optimal computational budget for the proposal.

Partitions and jobs

The only way to submit job(s) to the HPCC servers is through the SLURM batch system. Any job, regardless of its type (interactive, batch, serial, parallel, etc.), must be submitted via SLURM. SLURM allocates the requested resources on the proper server and starts the job(s) according to a predefined strict fair-share policy. Computational resources (CPU cores, memory, GPUs) are organized into partitions. Users are granted permission to use one or another partition and the corresponding QOS key. The table below shows the partitions and their limitations (in progress).

Partition | Max cores/job | Max jobs/user | Total cores/group | Time limits | Tier | GPU types
partnsf | 128 | 50 | 256 | 240 hours | Advanced | K20m, V100/16, A100/40
partchem | 128 | 50 | 256 | No limit | Condo | A100/80, A30
partcfd | 96 | 50 | 96 | No limit | Condo | A40
partsym | 96 | 50 | 96 | No limit | Condo | A30
partasrc | 48 | 16 | 16 | No limit | Condo | A30
partmatlabD | 128 | 50 | 256 | 240 hours | Advanced | V100/16, A100/40
partmatlabN | 384 | 50 | 384 | 240 hours | Advanced | None
partphys | 96 | 50 | 96 | No limit | Condo | L40
  • partnsf is the main partition, with resources assigned across all sub-servers. Users may submit sequential, thread-parallel, or distributed-parallel jobs with or without GPUs.
  • partchem is a CONDO partition.
  • partphys is a CONDO partition.
  • partsym is a CONDO partition.
  • partasrc is a CONDO partition.
  • partmatlabD allows running MATLAB's Distributed Parallel Server across the main cluster.
  • partmatlabN provides access to the large MATLAB node with 384 cores and 11 TB of shared memory. It is useful for running parallel MATLAB jobs with the Parallel Computing Toolbox.
  • partdev is dedicated to development. All HPCC users have access to this partition, which has assigned resources of one computational node with 16 cores, 64 GB of memory, and 2 GPUs (K20m). This partition has a time limit of 4 hours (see the example batch script below).
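A minimal sketch of a batch script for the partdev partition described above; the job name, executable, and input file (my_program, input.dat) are placeholders, and site-specific settings such as QOS keys or GPU requests should be taken from the SLURM manual distributed with new accounts:

  #!/bin/bash
  #SBATCH --job-name=example
  #SBATCH --partition=partdev        # development partition: 16 cores, 64 GB, 2 K20m GPUs, 4-hour limit
  #SBATCH --nodes=1
  #SBATCH --ntasks=4
  #SBATCH --time=01:00:00
  #SBATCH --output=example_%j.out

  cd "$SLURM_SUBMIT_DIR"             # submit from /scratch/<userid>/..., not from /global/u
  srun ./my_program input.dat        # placeholder executable and input file

The script is submitted with sbatch (for example, sbatch example.slurm) and monitored with squeue -u $USER.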

Hours of Operation

In order to maximize the use of resources, HPCC applies a “rolling” maintenance scheme across all systems. When downtime is needed, HPCC will notify all users a week or more in advance (unless an emergency occurs). The fourth Tuesday morning of the month, from 8:00 AM to 12:00 PM, is normally reserved (but not always used) for scheduled maintenance; please plan accordingly. Unplanned maintenance to remedy system-related problems may be scheduled as needed outside the above-mentioned days. Reasonable attempts will be made to inform users running on the affected systems when these needs arise. Note that users are strongly encouraged to use checkpoints in their jobs.

User Support

Users are strongly encouraged to read this Wiki carefully before submitting ticket(s) for help. In particular, the sections on compiling and running parallel programs and the section on the SLURM batch queueing system will give you the essential knowledge needed to use the CUNY HPCC systems. We have strived to maintain the most uniform user application environment possible across the Center's systems, to ease the transfer of applications and run scripts among them.

The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY community on parallel programming techniques, HPC computing architecture, and the essentials of using our systems. Please follow our mailings on the subject and feel free to inquire about such courses. We regularly schedule training visits and classes at the various CUNY campuses; please let us know if such a visit is of interest. In the past, topics have included an overview of parallel programming, GPU programming and architecture, using the evolutionary biology software at the HPC Center, the SLURM queueing system at the CUNY HPC Center, mixed GPU-MPI and OpenMP programming, etc. Staff have also presented guest lectures in formal classes throughout the CUNY campuses.

If you have problems accessing your account and cannot login to the ticketing service, please send an email to:

 hpchelp@csi.cuny.edu 

Warnings and modes of operation

1. hpchelp@csi.cuny.edu is for questions and account-help communication only and does not accept tickets unless the ticketing system is not operational. For tickets, please use the ticketing system mentioned above. This ensures that the staff member with the most appropriate skill set and job-related responsibility will respond to your questions. During the business week you should expect a response within 48 hours, quite often even the same day. During the weekend you may not get any response.

2. E-mails to hpchelp@csi.cuny.edu must have a valid CUNY e-mail as the reply address. Messages originating from public mail providers (Gmail, Hotmail, etc.) are filtered out.

3. Do not send questions to individual CUNY HPC Center staff members directly. These will be returned to the sender with a polite request to submit a ticket or email the Helpline. This applies to replies to initial questions as well.

The CUNY HPC Center staff is focused on providing high-quality support to its user community, but compared to other HPC Centers of similar size our staff is extremely lean. Please make full use of the tools that we have provided (especially the Wiki), and feel free to offer suggestions for improved service. We hope and expect your experience in using our systems will be predictably good and productive.

User Manual

The old version of the user manual provides PBS, not SLURM, batch scripts as examples. CUNY-HPCC currently uses the SLURM scheduler, so users should rely only on the updated brief SLURM manual distributed with new accounts, or request a copy from CUNY-HPCC.
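For users converting scripts from the old PBS-based manual, a few standard PBS-to-SLURM equivalents are listed below as a quick orientation only; consult the SLURM manual distributed with new accounts for the authoritative options:

  qsub job.pbs                ->  sbatch job.slurm
  qstat -u <userid>           ->  squeue -u <userid>
  qdel <jobid>                ->  scancel <jobid>
  #PBS -N myjob               ->  #SBATCH --job-name=myjob
  #PBS -l walltime=01:00:00   ->  #SBATCH --time=01:00:00
  #PBS -l nodes=1:ppn=16      ->  #SBATCH --nodes=1 --ntasks-per-node=16
  #PBS -q <queue>             ->  #SBATCH --partition=<partition>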