Recent Posts 
Running Singularity containers under Slurm
Containers: Containers are a way of using pre-built images that contain a particular application. Unlike virtual machines, they rely on an underlying system to run them. On the plus side, they consume fewer resources than complete virtual machines. The most common container system is “docker”. There are thousands of docker images available to d… Read More ›
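A minimal sketch of what running a Docker image under Singularity in a Slurm job might look like. The image name, script name, and resulting `.sif` filename are illustrative assumptions, not taken from the post:

```shell
# Sketch of a Slurm batch script that pulls and runs a Docker image via
# Singularity (illustrative; assumes Singularity is installed on the nodes).
cat > run-container.sbatch <<'EOF'
#!/bin/bash
#SBATCH --job-name=container-demo      # job name shown by squeue
#SBATCH --time=00:10:00                # wall-clock limit

# Fetch an image from Docker Hub and convert it to a .sif file.
singularity pull docker://python:3.11-slim

# Run a command inside the container.
singularity exec python_3.11-slim.sif python3 --version
EOF
# Submit with: sbatch run-container.sbatch
```

The heredoc only writes the script; nothing is executed until it is submitted with `sbatch`.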
Moving from Grid Engine to Slurm
Grid Engine to Slurm Helper Program To run, change into the directory that contains your grid engine script and type $ convert-sge-to-slurm scriptToConvert where scriptToConvert is your grid engine script. As output, you will see a new script scriptToConvert.sbatch which puts the equivalent slurm command on the line below each grid engine command. Edit this… Read More ›
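An illustrative sketch of the kind of output the converter is described as producing: each Grid Engine directive followed by a common Slurm equivalent. The directive pairs below are standard SGE/Slurm correspondences, not the tool's exact output:

```shell
# Sketch of a converted script (illustrative; the real convert-sge-to-slurm
# output format may differ). Each #$ line is Grid Engine; the #SBATCH line
# below it is the usual Slurm equivalent.
cat > scriptToConvert.sbatch <<'EOF'
#!/bin/bash
#$ -N myjob                      # SGE: job name
#SBATCH --job-name=myjob         # Slurm equivalent
#$ -o myjob.out                  # SGE: output file
#SBATCH --output=myjob.out       # Slurm equivalent
#$ -pe smp 4                     # SGE: 4 slots in the smp environment
#SBATCH --cpus-per-task=4        # Slurm equivalent
EOF
```

After converting, the Grid Engine lines can be deleted and the script submitted with `sbatch`.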
SCRC Changes – Summer 2021
Overview The center is about to undergo major changes in the next few months due to the upcoming move of all of the NYU HPC facilities (which house our equipment) to a new location in New Jersey. As part of this move, we are turning off much of our old equipment, storage, servers and… Read More ›
Slurm Batch Example – matlab
We will soon turn off Sun Grid Engine and switch to slurm. The example below shows an example script that you can use to submit a slurm job with the sbatch command. #!/bin/bash # # mymatjob.sbatch # # Sample shell script to run a matlab job under slurm. # # use sbatch mymatjob.sbatch to… Read More ›
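Since the excerpt cuts the script off, here is a minimal sketch of such an sbatch script for a MATLAB job. The module name, resource values, and the `myscript` MATLAB program are assumptions for illustration:

```shell
# Minimal sketch of an sbatch script for a MATLAB job (illustrative;
# assumes an environment-modules setup with a "matlab" module).
cat > mymatjob.sbatch <<'EOF'
#!/bin/bash
#SBATCH --job-name=mymatjob      # job name shown by squeue
#SBATCH --output=mymatjob.out    # combined stdout/stderr file
#SBATCH --time=01:00:00          # wall-clock limit
#SBATCH --mem=4G                 # memory request

module load matlab               # make matlab available (assumed module name)
matlab -nodisplay -nosplash -r "myscript; exit"   # run myscript.m and quit
EOF
# Submit with: sbatch mymatjob.sbatch
```

The heredoc only writes the script file; the MATLAB run happens when the job is submitted with `sbatch`.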
Grid Engine replaced by SLURM
The Stern Research Computing grid scheduling software, Grid Engine, will soon be discontinued and replaced by SLURM. SLURM has many more capabilities than SGE (in particular better support for GPUs), and is the HPC scheduling software used by the large university cluster, GREENE.HPC.NYU.EDU. This will make it much easier for our users to move large… Read More ›
NYU HPC facilities – major upgrade -Fall 2020
The university HPC facilities have moved to a new data center which houses a very large HPC cluster, as well as an updated hadoop cluster. The new HPC cluster has 30,000 cores of processing, as well as a large number of GPUs. In addition, it will connect to a new high-speed research backbone which… Read More ›