Cluster 3


Information about the cluster (specs, queues, flags, ...) can be found in the QB3 cluster wiki. It is password-protected and will be accessible once you have obtained an account.

Obtaining an account

The individual steps are also described in this document.

  • obtain a general QB3 Kerberos account from one of the QB3@UCSF WLAN account facilitators.
  • once you have such an account, e-mail your information (user name, name of PI) to Joshua Baker-LePain (jlb at salilab dot org), the cluster administrator.

Setting up a run

Scripts from Peter Kolb are located in ~kolb/Scripts/Cluster. This section is obsolete and is scheduled to be rewritten in summer 2014; it is left here in case it is still useful.

  • copy the files over to the cluster and unzip them.
  • the standard docking scripts can be used after copying them over.
  • the major difference is that there is more than one queue; the queue is selected based on CPU time requirements. Typical resource requests:
#$ -l mem_free=1G          #-- submits on nodes with enough free memory
#$ -l arch=lx24-amd64      #-- SGE resources (CPU type)
#$ -l panqb3=1G,scratch=1G #-- SGE resources (home and scratch disks)
#$ -l h_rt=24:00:00        #-- runtime limit (see above; this requests 24 hours)
  • additionally, one can use the /scratch partition, which is available on most of the nodes.
  • ZINC is visible at /bks/raid6, so you don't have to copy database files over.
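Putting the resource flags above together, a minimal SGE submission script might look like the sketch below. The script name and the docking command are hypothetical placeholders; substitute your actual run script.

```shell
#!/bin/csh
#$ -S /bin/csh                     #-- interpret the job script with csh
#$ -cwd                            #-- run the job from the current working directory
#$ -l mem_free=1G                  #-- submit to nodes with enough free memory
#$ -l arch=lx24-amd64              #-- SGE resources (CPU type)
#$ -l panqb3=1G,scratch=1G         #-- SGE resources (home and scratch disks)
#$ -l h_rt=24:00:00                #-- runtime limit (24 hours)

# hypothetical docking command; replace with your actual run script
./run_docking.csh
```

Submit the script with `qsub`; jobs requesting longer `h_rt` may be routed to a different queue.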

Crossmounts and shared UID/GID space

The following disks are exported from Cluster 2 and are visible on the QB3 shared cluster:

  • /nfs/db as /bks/db
  • /nfs/work as /bks/work
  • /nfs/store as /bks/store
  • /nfs/home as /bks/home

Please note that /nfs/soft and /nfs/scratch are NOT available on the QB3 shared cluster.

Installed software

Our software is installed under user jji on the QB3 cluster. You can copy .cshrc (.bashrc) from jji to get started.
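A sketch of getting started, assuming your login shell is csh and you have read access to jji's startup file (use .bashrc instead if your login shell is bash):

```shell
# copy jji's shell startup file to your own home directory
cp ~jji/.cshrc ~/.cshrc

# load the new settings into the current session
source ~/.cshrc
```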

Queuing jobs

The QB3 cluster uses the same queuing system (SGE) as Cluster 2 and Cluster 0, but the queuing systems are completely separate: the disks are shared, while jobs are managed independently on each cluster. We think this arrangement is both easy to use and logical; if you believe it would work better another way, let us know!
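Because each cluster runs its own scheduler, jobs must be submitted and monitored on the cluster where they should run. For example (the job script name is hypothetical):

```shell
qsub myjob.csh   # submits to the queue of the cluster you are logged into
qstat -u $USER   # lists only your own jobs on this cluster
```

A job submitted on the QB3 cluster will not appear in `qstat` output on Cluster 2, and vice versa, even though both clusters see the same crossmounted disks.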