The link below will take you to a folder of Google Docs with help documents on many topics. HELP FOLDER.
One of the advantages of Slurm is the ability to schedule an interactive session, just as you would submit a batch job. Slurm will find a free node with the lightest load and start your session on it. This allows you to run GUI versions of the software, as well as requesting a node… Read More ›
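Since the excerpt mentions running GUI software interactively, here is a minimal sketch of such a request (the memory/time values are examples, and the `--x11` option assumes your site's Slurm was built with X11 forwarding support):

```shell
# Request an interactive shell on a free compute node: 4 GB of memory for
# one hour. --pty attaches your terminal; --x11 forwards the X display so
# GUI programs opened on the node appear on your desktop.
srun --mem=4G --time=1:00:00 --x11 --pty /bin/bash

# When the new prompt appears you are on the compute node.
# Typing "exit" ends the session and releases the allocation.
```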
Grid Engine to Slurm Helper Program To run, change into the directory that contains your Grid Engine script and type $ convert-sge-to-slurm scriptToConvert where scriptToConvert is your Grid Engine script. As output, you will see a new script, scriptToConvert.sbatch, which puts the equivalent Slurm command on the line below each Grid Engine command. Edit this… Read More ›
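To illustrate the kind of conversion involved, here is a hedged sketch of common Grid Engine directives paired with their usual Slurm equivalents (the exact output format of convert-sge-to-slurm may differ; these pairings are standard SGE/Slurm correspondences, not the tool's verbatim output):

```shell
#$ -N myjob                # SGE: job name
#SBATCH --job-name=myjob   # Slurm equivalent

#$ -o out.txt              # SGE: file for standard output
#SBATCH --output=out.txt

#$ -l h_rt=02:00:00        # SGE: hard runtime limit
#SBATCH --time=02:00:00

#$ -pe smp 8               # SGE: parallel environment with 8 slots
#SBATCH --cpus-per-task=8
```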
One of the advantages of Slurm is the ability to run a large interactive job without having to specify the machine to run it on. To do this, you use the “srun” command. For instance, typing the following on a Slurm submit node (rnd, vleda, …) srun --mem=12G --time=2:00:00 --pty /bin/bash will log you into… Read More ›
Overview The center is about to undergo major changes in the next few months due to the upcoming move of all of the NYU HPC facilities (which house our equipment) to a new location in New Jersey. As part of this move, we are turning off much of our old equipment, storage, servers and… Read More ›
We will soon turn off Sun Grid Engine and switch to Slurm. The example below shows a script that you can use to submit a Slurm job with the sbatch command. #!/bin/bash # # mymatjob.sbatch # # Sample shell script to run a matlab job under slurm. # # use sbatch mymatjob.sbatch to… Read More ›
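A minimal sketch of such a submission script (the resource values, output filename, and the MATLAB invocation are assumptions about the local setup; mycode.m is a placeholder for your own MATLAB script):

```shell
#!/bin/bash
#
# mymatjob.sbatch
#
# Sample shell script to run a MATLAB job under Slurm.
# Submit with: sbatch mymatjob.sbatch
#
#SBATCH --job-name=mymatjob
#SBATCH --output=mymatjob.out   # stdout/stderr land here
#SBATCH --time=02:00:00         # wall-clock limit
#SBATCH --mem=4G                # memory per node

# Run MATLAB in batch mode (no desktop, no splash screen),
# execute mycode.m, then exit so the job ends cleanly.
matlab -nodisplay -nosplash -r "mycode; exit"
```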
The Stern Research Computing grid scheduling software, Grid Engine, will soon be discontinued and replaced by SLURM. SLURM has many more capabilities than SGE (in particular, better support for GPUs), and is the HPC scheduling software used by the large university cluster, GREENE.HPC.NYU.EDU. This will make it much easier for our users to move large… Read More ›
The university HPC facilities have moved to a new data center which houses a very large HPC cluster, as well as an updated Hadoop cluster. The new HPC cluster has 30,000 processing cores, as well as a large number of GPUs. In addition, it will connect to a new high-speed research backbone which… Read More ›