Quest and Kellogg Linux Cluster Downtime, December 14 - 18.
Quest, including the Quest Analytics Nodes, the Genomics Compute Cluster (GCC), the Kellogg Linux Cluster (KLC), and Quest OnDemand, will be unavailable for scheduled maintenance starting at 8 A.M. on Saturday, December 14, and ending at approximately 5 P.M. on Wednesday, December 18. During the maintenance window, you will not be able to log in to Quest, the Quest Analytics Nodes, the GCC, KLC, or Quest OnDemand; submit new jobs; run jobs; or access files stored on Quest in any way, including through Globus. For details on this maintenance, please see the Status of University IT Services page.
Quest RHEL8 Pilot Environment - November 18.
Starting November 18, all Quest users are invited to test and run their workflows in a RHEL8 pilot environment to prepare for Quest moving completely to RHEL8 in March 2025. We invite researchers to provide us with feedback during the pilot by contacting the Research Computing and Data Services team at quest-help@northwestern.edu. The pilot environment will consist of 24 H100 GPU nodes and 72 CPU nodes, and it will expand with additional nodes through March 2025. Details on how to access this pilot environment will be published in a KB article on November 18.
All genomics researchers at Northwestern are welcome to join the Genomics Compute Cluster.
About the Genomics Compute Cluster on Quest
Northwestern IT provides the Genomics Compute Cluster (GCC), a 6,500-core allocation on Quest consisting of 113 compute nodes, 3 high-memory nodes, 2 GPU nodes, and 455 TB of dedicated scratch space. The GCC can be used to align sequencing data; run RNA-Seq, ChIP-seq, single-cell, and whole-genome analyses; run custom code; utilize GPUs for machine learning and model training; and more.
All new GCC users are encouraged to watch the Genomics Compute Cluster orientation video.
How do I start using the Genomics Compute Cluster?
After your application to join the GCC has been approved, log in to Quest.
Moving files onto Quest
For information about transferring files from your local machine, RDSS, FSMResFiles, OneDrive, or Globus to Quest, please visit the Transferring Files on Quest web page. Note that if your sequencing is done by the NUSeq Core, they can deliver your FASTQ files directly to your directory in the genomics scratch space, /projects/b1042.
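As an illustration, a command-line transfer from your local machine to the genomics scratch space might look like the sketch below; the hostname, NetID, and paths are placeholders, and the Transferring Files on Quest page lists the recommended transfer methods and endpoints:
# Run from your local machine, not from Quest; replace <netid> and the paths with your own.
scp sample_R1.fastq.gz <netid>@quest.northwestern.edu:/projects/b1042/<your_lab_directory>/
rsync -av --progress fastq_files/ <netid>@quest.northwestern.edu:/projects/b1042/<your_lab_directory>/fastq_files/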
Using genomics software available on Quest
For a complete list of genomics software globally installed on Quest, see our Quest Software and Applications page and select the "genomics" tag.
If you would like to use software packages not currently installed on Quest, please see Installing Software on Quest.
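As an illustration, globally installed packages on Quest are typically accessed through environment modules; the package name and version below are placeholders, so check what is actually available before loading:
# List available modules, then load one (names and versions are examples only).
module avail
module load <package>/<version>
module list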
Running jobs on the Genomics Compute Cluster: Submission Scripts
On Quest, the GCC is allocation b1042. In your job submission script, include these sbatch directives:
#SBATCH -A b1042
#SBATCH -p <appropriate GCC partition as described below>
To learn more about creating job submission scripts and submitting jobs, see Everything You Need to Know about Using Slurm on Quest.
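For instance, a minimal submission script for a short job on the genomics partition might look like the sketch below; the walltime, memory, core count, module, and command are placeholders to adapt to your own workflow:
#!/bin/bash
#SBATCH -A b1042
#SBATCH -p genomics
#SBATCH -t 04:00:00
#SBATCH -N 1
#SBATCH --ntasks-per-node=8
#SBATCH --mem=32G
#SBATCH --job-name=example_job

module load <package>/<version>
<your command here>
Save the script (for example, as example_job.sh) and submit it with sbatch example_job.sh.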
Genomics Compute Cluster Partitions
Partitions are defined for different types of jobs and have different resource limits. Note that partitions are also called queues.
Partitions for Feinberg and Weinberg Researchers:
- genomics: For jobs that require less than 48 hours to run. Researchers may submit multiple jobs to the genomics partition at one time and utilize many cores simultaneously. Jobs can request up to 256 GB of RAM and 64 cores.
Script example: #SBATCH -p genomics
- genomicslong: For jobs that require between 48 and 240 hours to run. Priority is lower on this partition than on the genomics partition to reserve more resources for shorter jobs. Jobs may request up to 235 GB of RAM and 64 cores. See the sketch after this list for example directives.
Script example: #SBATCH -p genomicslong
- genomics-gpu: Submits jobs to the NVIDIA A100 GPU cards. See Using Genomics Compute Cluster GPUs on the GPUs on Quest page for instructions on how to submit jobs to these GPUs.
Script example: #SBATCH -p genomics-gpu
- genomics-himem: Submits jobs to the 2 TB high-memory nodes. Jobs may request up to 2 TB of RAM and 64 cores.
Script example: #SBATCH -p genomics-himem
- genomics-burst: For special projects, jobs that require more than 5 nodes or that run for more than 48 hours may use the genomics-burst partition by contacting quest-help@northwestern.edu. Before using the genomics-burst partition, users must meet with a Research Computing Services consultant to confirm that their code has been reviewed for efficiency. These appointments will be set up after reservations are requested and are intended to ensure best practices for this shared resource. Because of the resources involved, jobs may need to wait for availability when requesting the genomics-burst partition; it is advised to schedule at least three (3) weeks in advance.
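As a sketch of a longer-running job on the genomicslong partition, the directives below request resources within the limits described above; the walltime, memory, and core count are placeholders to adjust for your own workflow:
#SBATCH -A b1042
#SBATCH -p genomicslong
#SBATCH -t 120:00:00
#SBATCH -N 1
#SBATCH --ntasks-per-node=32
#SBATCH --mem=200G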
Partitions for all other Researchers:
- genomicsguest: A lower-priority version of the genomics partition for jobs that require less than 48 hours to run. Jobs can request up to 180 GB of RAM and 52 cores.
Script example: #SBATCH -p genomicsguest
- genomicsguestex: A lower-priority version of the genomicslong partition for jobs that require between 48 and 240 hours to run. Jobs can request up to 180 GB of RAM and 52 cores.
Script example: #SBATCH -p genomicsguestex
- genomicsguest-gpu: Submits jobs to the NVIDIA A100 GPU cards. See Using Genomics Compute Cluster GPUs on the GPUs on Quest page for instructions on how to submit jobs to these GPUs. An example sketch also appears after this list.
Script example: #SBATCH -p genomicsguest-gpu
- genomics-himem: Submits jobs to the 2 TB high-memory nodes. Jobs may request up to 2 TB of RAM and 64 cores.
Script example: #SBATCH -p genomics-himem
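For the GPU partitions (genomics-gpu and genomicsguest-gpu), the sketch below assumes the standard Slurm --gres syntax for requesting an A100 card; confirm the exact directives on the GPUs on Quest page before submitting, and treat the walltime and memory values as placeholders:
#SBATCH -A b1042
#SBATCH -p genomics-gpu
#SBATCH --gres=gpu:a100:1
#SBATCH -t 08:00:00
#SBATCH --mem=64G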
For example job scripts, see the GitHub repository of example jobs.
Reach out to quest-help@northwestern.edu with any questions.