Forbidden Practices
To provide secure and efficient operation of the HPC Center resources for all users, certain practices are discouraged or forbidden on HPC Center systems.
Allowing Someone Else to Log In and Use Your Account:
The CUNY IT security policy forbids this. Users are held responsible for all usage and activity on any account in their name. Providing access to another party undermines this accountability and puts the account owner at risk for that party's unsupervised actions. Users violating this policy may lose their accounts and/or be denied access to their files.
Running Long Compute-, Memory-, and/or I/O-Intensive Processes on System Login Nodes:
The login nodes are not intended for computation beyond that required to compile, link, and organize work for batch job submission; computationally intensive work is expected to run on the compute nodes. There may be occasions when a user anticipates that a required activity on a login node will consume more than a typical small fraction of its resources (a large file transfer, for instance). HPC Center staff should be informed through 'hpchelp@csi.cuny.edu' in advance of such activity. In general, users running processes that consume large fractions of a login node's compute or other resources will have those processes killed. Repeat offenders may have their accounts closed temporarily or even permanently.
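The intended division of labor looks like the following sketch (the source file and script names are hypothetical examples, not site-provided files):

    # On the login node: light work such as compiling and linking is fine
    mpicc -O2 my_sim.c -o my_sim

    # The heavy computation itself goes to the compute nodes via the scheduler
    sbatch my_sim.slurm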
Running Monitoring Scripts or Processes in Tight Loops that Fill Login Node Logfiles:
While it is natural for users to keep track of their jobs and the availability of resources on HPC Center systems, doing so with iterative processes or scripts that can fill up system log files is forbidden. An example would be running 'watch' with 'qstat' in a tight one-second loop for an extended period of time, which fills up log file space in system directories. Such processes will be killed.
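If you need periodic status checks, poll at a gentle interval or let the scheduler notify you instead. A minimal sketch, assuming SLURM's 'squeue' (the e-mail address is a placeholder):

    # Poll every 5 minutes rather than every second:
    watch -n 300 squeue -u $USER

    # Or avoid polling entirely: these batch-script directives ask SLURM
    # to e-mail you when the job ends or fails:
    #SBATCH --mail-type=END,FAIL
    #SBATCH --mail-user=user@example.edu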
Leaving Your Application or Executable's Unix Output and Error Files Undefined in a SLURM Script:
While programs that produce small output files are not a problem, one cannot anticipate what might be printed to Unix standard error if your application runs into trouble. When left undefined in a SLURM script, Unix standard output and error are written to the small system partition where SLURM's data is stored. When that partition becomes 100% full, SLURM stops responding to new job submissions. Please always include Unix redirection at the end of the executable line(s) in your SLURM scripts, similar to the following:
    . . . SLURM script . . .

    mpirun -np 4 mbbest ./bglobin.nex > best_mpi.out 2>&1

    . . .
Here the string " > best_mpi.out 2>&1 " redirects Unix standard output (file descriptor 1) to the file best_mpi.out and merges Unix standard error (file descriptor 2) into it. Both streams are then written into the working directory for your job rather than the SLURM data directory. This location will typically have much more disk space.
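Putting this together, a complete batch script might look like the following sketch (the job name, task count, and log file name are illustrative assumptions, not site-specific settings):

    #!/bin/bash
    #SBATCH --job-name=mbbest_run       # illustrative job name
    #SBATCH --ntasks=4                  # one task per MPI rank
    #SBATCH --output=mbbest_%j.log      # optionally pin SLURM's own log file here too

    # Redirect the application's stdout to best_mpi.out and merge stderr
    # into it, so both land in the job's working directory rather than
    # the SLURM data partition.
    mpirun -np 4 mbbest ./bglobin.nex > best_mpi.out 2>&1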