Overview of the CUNY HPC Center resources

The City University of New York (CUNY) High Performance Computing Center (HPCC) is located on the campus of the College of Staten Island, 2800 Victory Boulevard, Staten Island, New York 10314. The HPCC's goals are to:

  • Support the scientific computing needs of CUNY faculty, students, and research staff, as well as their collaborators at other universities and their public- and private-sector partners;
  • Create opportunities for the CUNY research community to develop new partnerships with the government and private sectors; and
  • Leverage the HPC Center's capabilities to acquire additional research resources for its faculty and graduate students in existing and major new programs.

Organization of systems and data storage (architecture)

The CUNY High Performance Computing (HPC) Center is “data and storage centric”; that is, it operates under the philosophy that compute systems are transient and will be periodically replaced, while research data is more permanent. Consequently, the environment is built around a central file system, called the Data Storage and Management System (DSMS), with the HPC systems attached to it. Figure 1 is a schematic of the environment: the storage facilities (the DSMS) sit at the center, surrounded by the HPC systems. An HPC system can be added to or removed from the environment without affecting user data.

Access to all of the HPC systems is through a “gateway system” called chizen.csi.cuny.edu. This means that you must first log in to “CHIZEN” using ssh and then ssh from there to the HPC system you wish to use.
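A minimal sketch of the two-hop login follows. Replace <userid> with your assigned account name; the short host name andy is used only as an example of an HPC system you might have access to.

```shell
# Step 1: log in to the gateway system.
ssh <userid>@chizen.csi.cuny.edu

# Step 2: from the CHIZEN prompt, ssh to the HPC system you wish to use,
# e.g. (assuming you have an account on ANDY):
ssh andy
```

With a recent OpenSSH client, the two hops can be combined into one command using the `-J` (ProxyJump) option: `ssh -J <userid>@chizen.csi.cuny.edu <userid>@andy`.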

User home directories, user data, and project data are kept on the DSMS.

Each HPC system has local “/scratch” disk space. /scratch space is workspace used by the system when running jobs. Input data required for a job, temporary or intermediate files, and output files created by a job can temporarily reside on /scratch, but they have no permanence there.

When a user applies for and is granted an account, they are assigned a <userid> on the HPC systems and the following disk space:

• /scratch/<userid> – temporary workspace on each HPC system
• /global/u/<userid> – the “home directory”, i.e., storage space on the DSMS for programs, scripts, and data
• In some instances a user will also have use of disk space on the DSMS in /cunyZone/home/<projectid>.
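The per-user paths above can be composed from the account name, as in this sketch (jdoe is only an illustrative fallback value):

```shell
# Compose the per-user paths described above from the login name.
userid="${USER:-jdoe}"             # your assigned <userid>; jdoe is a placeholder
home_dir="/global/u/${userid}"     # permanent storage on the DSMS
scratch_dir="/scratch/${userid}"   # temporary workspace, local to each HPC system

echo "Home:    ${home_dir}"
echo "Scratch: ${scratch_dir}"
```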

[Figure 1: Schematic of the HPC Center environment (Hpcc infra.jpg)]

The diagram in Figure 1 shows the DSMS global file system for the HPC Center. Chizen provides user access to any of the HPC systems. The Data Transfer Node allows users to transfer large files from remote sites directly to /global/u/<userid>, or to /scratch on any of the HPC systems on which they have a /scratch/<userid>.
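A hypothetical sketch of such a transfer is below. The Data Transfer Node's hostname is not given in this page, so <dtn-host> is a placeholder; use the address provided by the HPC Center.

```shell
# Push a large file from a remote site into your DSMS home directory:
scp bigdata.tar.gz <userid>@<dtn-host>:/global/u/<userid>/

# Or use rsync, which can resume an interrupted transfer of a large file:
rsync -av --partial bigdata.tar.gz <userid>@<dtn-host>:/scratch/<userid>/
```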

HPC systems

The HPC Center operates two different types of HPC systems: distributed-memory (also referred to as “cluster”) computers and symmetric multiprocessor (SMP, also referred to as “shared-memory”) computers. These systems differ significantly in architecture, programming models, and the way they are used. A brief description of the differences between “cluster” and SMP computers is provided in the sections below. Please note: part of ANDY is dedicated to Gaussian jobs only, and some nodes are reserved for particular users; this leaves ANDY with 336 cores available for general use. Table 1 provides a quick summary of the attributes of each of the systems available at the HPC Center.

[Table 1: HPC Center system attributes (hpccSystems.jpg, Penzias.png)]

Queue details

For each system there are multiple queues with various limits and configurations. Below is some introductory information.

[Queue limits and configurations (Pen queue info.png)]

Hours of Operation

The second and fourth Tuesday mornings of each month, from 8:00 AM to 12:00 PM, are normally reserved (but not always used) for scheduled maintenance. Please plan accordingly.
Unplanned maintenance to remedy system-related problems may be scheduled as needed. Reasonable attempts will be made to inform users running on the affected systems when these needs arise.

User Support

Users are encouraged to read this Wiki carefully. In particular, the sections on compiling and running parallel programs, and the section on the PBS Pro batch queueing system, will give you the essential knowledge needed to use the CUNY HPC Center systems. We have striven to maintain the most uniform user applications environment possible across the Center's systems to ease the transfer of applications and run scripts among them. Still, there are some differences, particularly with the SGI (ANDY) and Cray (SALK) systems.
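As a rough sketch of what a PBS Pro batch job looks like, the script below stages data through /scratch and copies results back to the DSMS home directory. The resource values are examples only, and my_program is a placeholder; consult the queue information above for the actual limits on each system.

```shell
#!/bin/bash
#PBS -N example_job              # job name
#PBS -l select=1:ncpus=4         # one chunk with four cores (example values)
#PBS -l walltime=01:00:00        # one-hour limit (example value)
#PBS -j oe                       # merge stdout and stderr into one file

# Start from the directory the job was submitted from.
cd "$PBS_O_WORKDIR"

# Stage input into the local /scratch workspace, run, then copy the
# results back to permanent storage on the DSMS.
cp input.dat /scratch/$USER/
cd /scratch/$USER
./my_program input.dat > output.dat   # my_program is a placeholder
cp output.dat /global/u/$USER/
```

The script would be submitted with `qsub job.sh`; `qstat` shows its status in the queue.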

The CUNY HPC Center staff, along with outside vendors, offer regular courses and workshops to the CUNY community in parallel programming techniques, HPC computing architecture, and the essentials of using our systems. Please follow our mailings on the subject and feel free to inquire about such courses. We regularly schedule training visits and classes at the various CUNY campuses; please let us know if such a training visit is of interest. In the past, topics have included an overview of parallel programming, GPU programming and architecture, using the evolutionary biology software at the HPC Center, the PBS queueing system at the CUNY HPC Center, mixed GPU-MPI and OpenMP programming, and more. Staff have also presented guest lectures at formal classes throughout the CUNY campuses.

Users with further questions or requiring immediate assistance in using the systems should create a ticket using their HPC account login at:


If you have problems accessing your account and cannot log in to the ticketing service, please send an email to:

hpchelp@csi.cuny.edu
Please note that hpchelp@csi.cuny.edu is for questions and account-help communication only and does not accept tickets; for tickets, please use the ticketing system mentioned above. The ticketing system ensures that the staff member with the most appropriate skill set and job-related responsibility responds to your questions. During the business week you should expect a same-day response. During the weekend you may or may not get a same-day response, depending on which staff are reading email that weekend. Please send all technical and administrative questions (including replies) to this address.

Please do not send questions to individual CUNY HPC Center staff members directly; send them to the helpline: hpchelp@csi.cuny.edu

Questions sent directly to staff will be returned to the sender with a polite request to submit a ticket or email the Helpline. This applies to replies as well as to initial questions.

The CUNY HPC Center staff are focused on providing high-quality support to the user community, but compared to other HPC centers of similar size our staff is lean. Please make full use of the tools that we have provided (especially this Wiki), and feel free to offer suggestions for improved service. We hope and expect your experience in using our systems will be predictably good and productive.

User Manual

The User Manual can be downloaded at: http://cunyhpc.csi.cuny.edu/publications/User_Manual.pdf