Training & Workshops
Revision as of 19:18, 23 March 2026
The CUNY HPCC provides training courses and organizes seminars on various HPC topics. The training courses are provided at no cost and may be held at any CUNY campus site, at the CUNY HPCC at the College of Staten Island, or at the Graduate Center. The training course at the Graduate Center, and its online version, is a course on parallel programming and the use of HPC architectures. The on-site course takes place if enough students express interest. It covers topics ranging from basic SLURM scripting to basic GPU programming to intermediate parallel programming with MPI and OpenACC. Please note that the lectures cover each topic systematically, so a particular topic may be discussed in several lectures. Users who want to attend the course should send an e-mail to hpchelp@csi.cuny.edu and ask for registration. All participants will get a student account on the CUNY-HPCC servers unless they already have one.
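As an illustration of the kind of basic SLURM scripting the course covers, a minimal batch script might look like the sketch below. The partition name and executable are hypothetical examples, not actual HPCC settings; consult the HPCC documentation or `sinfo` for the real partition names.

```shell
#!/bin/bash
#SBATCH --job-name=hello-mpi      # job name shown in the queue
#SBATCH --partition=production    # hypothetical partition name
#SBATCH --nodes=1                 # request a single node
#SBATCH --ntasks=4                # four MPI ranks
#SBATCH --time=00:10:00           # ten-minute wall-clock limit
#SBATCH --output=hello-%j.out     # %j expands to the job ID

# Launch the (already compiled) MPI program on the allocated cores
srun ./hello_mpi
```

Such a script is submitted with `sbatch hello.slurm`, and the job's progress can be checked with `squeue -u $USER`.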
In addition, the HPCC provides in-person and Zoom consultations with individuals or small groups of users every Wednesday, 11 AM to 3 PM. Interested users should register by sending an e-mail to alex.tzanov@csi.cuny.edu by Monday of the same week. These consultations are meant to help new and inexperienced users get started quickly with HPCC resources. During a session, users may discuss their particular problems and get guidance on developing their own parallel scientific code(s). Please send a mail to hpchelp@csi.cuny.edu or to alexander.tzanov@csi.cuny.edu for available time slots no later than 3 PM on Monday. The HPCC will make every effort to accommodate all users, so any time slot may be shared by several users.
For any additional information, please send an email to hpchelp@csi.cuny.edu.
Schedule
The CUNY High-Performance Computing Center (HPCC) provides Help Desk/consultation support and lectures at the Graduate Center on programming and using Unix-based HPC cluster systems.

Dr. Alex Tzanov conducts the lectures and consultations. Please see the schedule below.
Date | Day | Lecture (Room 4434, 10 AM - 12 PM) | Consultation (Room 4411, GC)
---- | --- | ---------------------------------- | ----------------------------
     | WED | Introduction to HPC and HPCC                         | 1 PM - 5 PM
     | WED | Introduction to parallel programming                 | 1 PM - 5 PM
     | WED | Distributed parallel programming with MPI, part 1    | 1 PM - 5 PM
     | WED | Distributed parallel programming with MPI, part 2    | 1 PM - 5 PM
     | WED | Distributed parallel programming with MPI, part 3    | 1 PM - 5 PM
     | WED | Distributed parallel programming with MPI, part 4    | 1 PM - 5 PM
     | WED | Distributed parallel programming with MPI, part 5    | 1 PM - 5 PM
     | WED | Distributed parallel programming with MPI – hands-on | 1 PM - 5 PM
     | WED | GPGPU programming, part 1                            | 1 PM - 5 PM
     | WED | GPGPU programming, part 2                            | 1 PM - 5 PM
     | WED | GPGPU – hands-on                                     | 1 PM - 5 PM
     | WED | Easy GPU programming with OpenACC, part 1            | 1 PM - 5 PM
     | WED | Easy GPU programming with OpenACC, part 2            | ONLINE
Apart from that, the HPCC provides a short introductory course at the Graduate Center for new users. The course covers the HPCC structure and workflow, information on the HPC servers, basic SLURM scripting, basic Linux and Unix commands, how to compile and run programs on the HPCC servers, and the basics of the data storage and management system. For more information, please contact hpchelp@csi.cuny.edu.
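To make the compile-and-run workflow mentioned above concrete, a typical session on an HPC cluster might look like the following sketch. The module name, source file, and script name are assumptions for illustration; `module avail` lists the toolchains actually installed on the HPCC servers.

```shell
# Load an MPI toolchain (module name is a hypothetical example)
module load openmpi

# Compile a C source file with the MPI wrapper compiler
mpicc -O2 -o hello_mpi hello_mpi.c

# Submit a SLURM batch script; sbatch prints the assigned job ID
sbatch hello.slurm

# Monitor your jobs in the queue
squeue -u $USER
```

Jobs should always be submitted through SLURM rather than run directly on a login node.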