Available queues

Queue Name         Allowed submitters    Restrictions
batch (default)    Math faculty/grads    Max 8GB memory (2GB default)
faculty            Math faculty          Max 16GB memory (4GB default)
short              Math faculty/grads    Max 8GB memory (2GB default) and 4 day walltime (2 day default)
bigmem             Math faculty/grads    Max 50GB memory (8GB default)
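
For example, to submit to the short queue instead of the default batch queue, a PBS file could include directives like the following (a minimal sketch; the limits shown are this queue's maximums from the table above):

```shell
# Request the short queue and its maximum limits:
# 8GB memory (2GB default), 4 day walltime (2 day default)
#PBS -q short
#PBS -l walltime=96:00:00,pvmem=8gb
```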

Specifying memory requirements

The pvmem parameter specifies the amount of virtual memory a job will need. Most queues already have a reasonable default set for you; if you won't need more than the default, you don't need to do anything extra. If the default is not appropriate for your job, you will have to specify a different amount.

bigmem is a queue for very large memory jobs. Only one such job can run at a time, so your job may wait a while before it is able to run. To use the bigmem queue, put the following in your .pbs file:

#PBS -q bigmem
#PBS -l nodes=1:ppn=1,pvmem=16gb

PBS file parameters

Common Torque parameters for submitting jobs to the cluster with qsub. These can be set in your PBS file by prefixing each parameter with #PBS, as in the examples below. Normally you will not need all of them.

Torque Parameter        Description
-V                      Export all environment variables to the job.
-N jobname              Set the name for your job.
-o outfile              Set the output file name for your job.
-e errorfile            Set the error file name for your job.
-q queue_name           Specify the queue to submit your job to. batch is the default queue.
-l nodes=#:ppn=#        Request the number of CPU cores you need for your job. The total number of CPU cores is (num nodes) x (processors per node). We have a maximum of 8 CPU cores available on any single node.
-l walltime=HH:MM:SS    Maximum amount of time to allow the job to run. Some queues have a maximum amount of time allowed. Jobs exceeding their maximum runtime are terminated, so don't set this too short.
-l pvmem=#mb            Maximum amount of virtual memory the job may use, in megabytes. Use gb to specify gigabytes. Most queues have maximum or minimum memory requirements set.
-t [0-9,-]              Request an array job. The argument can be a comma- and/or hyphen-separated list of digits. Ex. -t 0-5,11,14-16 will run an array job with PBS_ARRAYID values 0,1,2,3,4,5,11,14,15,16.
-m bea                  Send mail to the user when the job begins (b), ends (e), or aborts (a). Used with -M.
-M email_address        Specify your email address.
-j oe                   Merge the standard output and error streams into one.
-k oe                   Force real-time output and error streams. One can view these with tail -f <filename>.

Note: By default, the output stream is written to $HOME/<jobname>.o<jobnum> and the error stream to $HOME/<jobname>.e<jobnum>.

See man pbs_resources for more details and examples.
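
Once a PBS file is written, it is submitted and monitored from the command line. A quick sketch (myjob.pbs is a placeholder name; substitute your own file and job number):

```shell
qsub myjob.pbs     # submit the job; prints the job number
qstat -u $USER     # list your queued and running jobs
qdel <jobnum>      # cancel a job if needed
```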

Some example PBS files

Simple PBS file

#PBS -N Matlab_Job1
#PBS -S /bin/sh
#PBS -l nodes=1:ppn=1:matlab,walltime=10:00:00
#PBS -m bea

cd $HOME/matlab
matlab -nodisplay -r job1 >& job1.out

Note: Runs a Matlab job using job1.m, with output from Matlab going to $HOME/matlab/job1.out. The job will run on a single processor core for 10 hours.
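
Array job PBS file

A sketch of the -t parameter in action, assuming hypothetical scripts job0.m through job5.m exist in $HOME/matlab:

```shell
#PBS -N Matlab_Array
#PBS -S /bin/sh
#PBS -l nodes=1:ppn=1:matlab,walltime=10:00:00
#PBS -t 0-5

cd $HOME/matlab
# PBS_ARRAYID takes each value 0..5 in a separate copy of this job
matlab -nodisplay -r job${PBS_ARRAYID} >& job${PBS_ARRAYID}.out
```

Note: Torque runs one copy of this file per PBS_ARRAYID value, each as a separate single-core job.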

MPI Job PBS file

#PBS -N MPI_Job1
#PBS -S /bin/sh
#PBS -l nodes=3:ppn=1,walltime=01:00:00
#PBS -m bea

cd $HOME/mpi

# Total MPI tasks = number of lines in the nodefile
export NP=`wc -l ${PBS_NODEFILE} | cut -d' ' -f1`

# Number of distinct nodes, for mpdboot
export MPDSNP=`uniq ${PBS_NODEFILE} | wc -l | cut -d' ' -f1`

# mpdboot wants one line per node, so strip the repeats
cat ${PBS_NODEFILE} | uniq > /tmp/mpd_nodefile_${USER}_$$
export MPD_NODEFILE=/tmp/mpd_nodefile_${USER}_$$

mpdboot -v -n ${MPDSNP} -f ${MPD_NODEFILE}
mpdtrace -l
mpirun -np ${NP} myprogram


Note: MPI needs to know which nodes your job will run on. This information is pulled out of $PBS_NODEFILE, which the Torque resource manager creates dynamically when your job runs. The above PBS file should run the program myprogram on 3 nodes for 1 hour, using a single processor core on each node. In some cases the scheduler may place tasks on processor cores of the same node (ignoring the nodes specification); the only workaround is to also specify a memory requirement large enough that a node can satisfy only a single task.
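
The NP and MPDSNP arithmetic above can be checked on any Linux machine with a mock nodefile (the hostnames here are made up for illustration):

```shell
#!/bin/sh
# Build a mock PBS_NODEFILE: 3 MPI tasks spread over 2 nodes.
PBS_NODEFILE=$(mktemp)
printf 'node01\nnode01\nnode02\n' > "${PBS_NODEFILE}"

# Total MPI tasks = number of lines in the nodefile.
NP=$(wc -l < "${PBS_NODEFILE}" | tr -d ' ')

# Distinct nodes = unique lines (Torque groups repeats together).
MPDSNP=$(uniq "${PBS_NODEFILE}" | wc -l | tr -d ' ')

echo "NP=${NP} MPDSNP=${MPDSNP}"   # NP=3 MPDSNP=2
rm -f "${PBS_NODEFILE}"
```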