How to Test Your Workflow on the New RHEL8 Operating System
Quest and Kellogg Linux Cluster Downtime, March 22 - 31.
Quest, including the Quest Analytics Nodes, the Genomics Compute Cluster (GCC), the Kellogg Linux Cluster (KLC), and Quest OnDemand, will be unavailable for scheduled maintenance starting at 8 A.M. on Saturday, March 22, and ending at approximately 5 P.M. on Monday, March 31. During the maintenance window, you will not be able to log in to Quest, the Quest Analytics Nodes, the GCC, KLC, or Quest OnDemand; submit new jobs; run jobs; or access files stored on Quest in any way, including via Globus. For details on this maintenance, please see the Status of University IT Services page.
Quest RHEL8 Pilot Environment
The RHEL8 Pilot Environment is available for use now.
Ahead of the March 2025 downtime, Quest users have the opportunity to test their software and research workflows on CPU nodes and NVIDIA H100 GPU nodes running the new RHEL8 operating system. Detailed instructions on how to submit jobs to the new operating system are available in the Knowledge Base article RHEL8 Pilot Environment.
RHEL8 Pilot Quest log-in nodes can be accessed via SSH or FastX using the hostname login.quest.northwestern.edu. Please note that the new hostname login.quest.northwestern.edu requires the GlobalProtect VPN when connecting from outside the United States.
RHEL8 Pilot Quest Analytics nodes can be accessed at rstudio.quest.northwestern.edu, jupyterhub.quest.northwestern.edu, and sasstudio.quest.northwestern.edu.
What Is RHEL8 Pilot?
The RHEL8 test environment consists of 72 Quest10 (52 cores, 128 GB RAM) CPU compute nodes, 70 Quest13 (128 cores, 512 GB RAM) CPU compute nodes, and 24 Quest13 NVIDIA H100 GPU compute nodes, all running RHEL8. The CPU compute nodes are available for testing to users with either a Quest buy-in or General Access allocation. The GPU compute nodes are accessible only to users with a General Access allocation.
What Am I Testing For?
Although the Research Computing and Data Services team has verified that much of the software on Quest works on CPU and GPU nodes running RHEL8, Quest hosts many software modules and users may install their own software, so we cannot guarantee that all software will run without issues. To that end, we invite researchers to provide feedback during the pilot by contacting the Research Computing and Data Services team at quest-help@northwestern.edu with the Slurm Job ID of any job that failed to run due to software or environment issues.
Please Note: If you are leveraging the H100 GPUs, your application must have been compiled with a CUDA Toolkit version between 11.8 and 12.4. We have instructions for installing popular GPU-accelerated Python libraries against a compatible CUDA Toolkit in our Using GPUs on Quest KB article.
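If you are unsure which CUDA Toolkit version your application or Python environment was built against, a quick check from a GPU node might look like the sketch below; the module name is an assumption, so adapt it to your own environment.
## load a Python environment (illustrative module name; use your own)
module load python-miniconda3
## report the CUDA version a library such as PyTorch was built against
python -c "import torch; print(torch.version.cuda)"
## or report the CUDA compiler version used for your own builds (requires a CUDA module)
nvcc --version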
How to Access It?
Connecting with an SSH Terminal
Please note that your NetID/username is case-sensitive. For instance, using ABC123 instead of abc123 will not work in any of the methods below.
Mac or Linux computer
To log in to Quest from a Linux or Mac computer, open a terminal and at the command line enter:
ssh -X <netid>@login.quest.northwestern.edu
Example
ssh -X abc123@login.quest.northwestern.edu
Windows computer
Windows users will first need to download an ssh client, such as PuTTY, which will allow you to interact with the remote Unix command line. Use the following information to set up your connection:
Hostname : login.quest.northwestern.edu
Username : your Northwestern NetID
Password : your Northwestern NetID password
Please Note: you need to update your connection string to login.quest.northwestern.edu. After the March downtime, direct connections to nodes, such as qnodeXX.ci.northwestern.edu, will no longer work.
FastX
Step 1: Download and Install FastX3
Windows:
- Download the FastX3 Client to your PC.
- Run the installer named FastX-3.X.XX-setup.exe.
Mac OS X :
- Download the FastX3 Client to your Mac.
- Open the disk image named FastX3-3.X.XX.dmg.
- Drag FastX3 into your Applications folder.
Linux :
- Download the FastX3 Client to your Linux computer.
- Open the tarball file named FastX3-3.X.XX.rhel7.x86_64.tar.gz.
- Extract the FastX3 folder from the tarball to your home or Desktop folder (see the example command below).
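On Linux, the extraction step can also be done from a terminal; for example (the version number in the filename is a placeholder, as above):
## extract the FastX3 client into your home directory
tar -xzf FastX3-3.X.XX.rhel7.x86_64.tar.gz -C ~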
Step 2: Run FastX3 for the First Time
Windows :
Use the Start Menu (Start Menu > All Programs > FastX3) or the Start Screen to find the FastX3 application and run it. When prompted, allow FastX3 to communicate through the firewall.
Mac OS X :
- In the Finder, open the Applications folder and locate FastX3.
- Press the Control-key and click the FastX3 icon.
- Choose Open from the pop-up menu.
- Click Open.
Linux :
- Open the FastX3 folder and locate the FastX3 executable file.
- Double-click the FastX3 executable file.
Note: FastX3 will be saved as an exception in your security settings, and you will subsequently be able to run it by double-clicking the FastX3 icon. You will have to repeat this step each time you upgrade FastX3.
Step 3: Create a Connection
FastX3 uses SSH to connect to a Quest login node.
1. Click the PLUS icon in the upper left corner of the window.

2. An Edit Connection window will open. You must already have an account on Quest to log in. Fill in the following fields:
- Host: login.quest.northwestern.edu
- User: Enter your own NetID in lower-case letters.
- Port: 22
- Name: Quest (this can be anything you like; it is only for your own reference)

Please Note: you need to update your connection string to login.quest.northwestern.edu. After the March downtime, direct connections to nodes, such as qnodeXX.ci.northwestern.edu, will no longer work.
3. Click OK.
Connect to Quest
This section shows you how to create a connection to Quest each time you want to connect and how to manage your connections.
Step 1: Open a Connection to a Quest Login Node
1. With FastX3 open, double-click My Quest Connection in the Connections window.

2. A window will pop up. If this is the first time you have logged in to Quest, a prompt like the one below will appear asking whether you are sure you want to continue; answer yes. On subsequent logins, a screen will appear that only asks you to enter your NetID password. Then click Continue.

FastX will open an SSH connection to Quest and open a Session Management window for Quest. You will have no sessions to begin with, so the window will appear to be empty.
Step 2: Start a Session
1. Click the PLUS icon in the upper left corner of the Session Management window to create a new Session.

2. A new window with icons will appear. We recommend selecting the GNOME icon from this menu. Below, we select GNOME, and the Command box changes to start a single application named gnome-session. Click OK.

3. Your GNOME Virtual Desktop window will open, and your Session Management window will change to show that you have a single session named GNOME.

To launch a terminal inside your GNOME virtual desktop, click Activities -> Terminal. This gives you a traditional GNOME Terminal session running a Bash shell with full graphics support via X11. From this terminal you can type Linux commands, manage batch jobs, launch programs with graphical interfaces, and edit your files with the evim GUI. Explore the menu bar items to learn more about the features of GNOME Terminal. The GNOME Desktop is the recommended way to connect to Quest.
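For example, two common job-management commands you might run from this terminal are sketched below (my_job.sh is a placeholder script name):
## check your running and queued jobs
squeue -u $USER
## submit a batch job
sbatch my_job.sh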
Step 3: Disconnecting and Reconnecting Sessions
Running sessions are listed in the FastX Session Management window. Sessions continue to run on the login host until you terminate them.
You can disconnect from a running session without terminating it by closing the virtual desktop window; do not click the "x" in the upper right of your GNOME desktop icon, which terminates the session.
This can be helpful if you get disconnected from your session and need to reconnect quickly. However, note that when the system migrates to login.quest.northwestern.edu, it may not be possible to reconnect to the same session after a period of time. Users who require persistent reconnection to longer-running sessions are encouraged to reach out to the Quest help team.
GPU Compute Nodes
The GPU compute nodes will only be accessible to users with a General Access allocation.
Users with General Access Allocations Only
Batch Jobs
To participate in the RHEL8 pilot and to utilize the H100 GPU Nodes, users of the General Access partition gengpu will need to update the GPU request in their Slurm submission script from
#SBATCH --gres=gpu:a100:1
to
#SBATCH --gres=gpu:h100:1
Please see the summaries below for examples of modifying your script for these partitions.

Partition: gengpu

Original Submission Script:
#SBATCH --account=p12345
#SBATCH --partition=gengpu
#SBATCH --gres=gpu:a100:1
#SBATCH --time=HH:MM:SS
...

Modified Submission Script:
#SBATCH --account=p12345
#SBATCH --partition=gengpu
#SBATCH --gres=gpu:h100:1 ## modify this line
#SBATCH --constraint=rhel8 ## add this line
#SBATCH --time=HH:MM:SS
...

Partition: gengpu (with sxm)

Original Submission Script:
#SBATCH --account=p12345
#SBATCH --partition=gengpu
#SBATCH --gres=gpu:a100:1
#SBATCH --constraint=sxm
#SBATCH --time=HH:MM:SS
...

Modified Submission Script:
#SBATCH --account=p12345
#SBATCH --partition=gengpu
#SBATCH --gres=gpu:h100:1 ## modify this line
#SBATCH --constraint=rhel8 ## modify this line
#SBATCH --time=HH:MM:SS
...

Partition: gengpu (with pcie)

Original Submission Script:
#SBATCH --account=p12345
#SBATCH --partition=gengpu
#SBATCH --gres=gpu:a100:1
#SBATCH --constraint=pcie
#SBATCH --time=HH:MM:SS
...

Modified Submission Script:
#SBATCH --account=p12345
#SBATCH --partition=gengpu
#SBATCH --gres=gpu:h100:1 ## modify this line
#SBATCH --constraint=rhel8 ## modify this line
#SBATCH --time=HH:MM:SS
...
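For reference, a minimal complete submission script targeting the RHEL8 H100 nodes might look like the sketch below; the account, job name, resource values, module, and Python script are placeholders to adapt to your own workflow.
#!/bin/bash
#SBATCH --account=p12345 ## replace with your General Access allocation
#SBATCH --partition=gengpu
#SBATCH --gres=gpu:h100:1
#SBATCH --constraint=rhel8
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --mem=16G
#SBATCH --time=01:00:00
#SBATCH --job-name=rhel8_gpu_test ## placeholder job name

module purge ## start from a clean module environment
module load python-miniconda3 ## illustrative module; load your own software instead
nvidia-smi ## confirm the H100 is visible to the job
python my_gpu_script.py ## replace with your own GPU workload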
Interactive Jobs
The following srun and salloc commands start an interactive session on one of the RHEL8 H100 GPU nodes in the gengpu partition. Replace p12345 with the name of your General Access allocation and update the other parameters (--time, --mem, --ntasks-per-node) as needed. In the salloc example, replace qgpu3001 with the name of the node allocated to you by salloc.
## srun example
srun --account=p12345 --partition=gengpu --gres=gpu:h100:1 --constraint=rhel8 --time=01:00:00 --mem=7G --nodes=1 --ntasks-per-node=1 --pty bash -l
## salloc example
salloc --account=p12345 --partition=gengpu --gres=gpu:h100:1 --constraint=rhel8 --time=00:30:00 --mem=7G --nodes=1 --ntasks-per-node=1
ssh qgpu3001
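Once the interactive session starts, you can quickly confirm that you landed on a RHEL8 node with an H100, for example:
## confirm the node's operating system release
cat /etc/redhat-release
## confirm the GPU assigned to the job
nvidia-smi --query-gpu=name --format=csv,noheader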
Quest OnDemand
Similarly, users can access the RHEL8 GPU nodes as part of a Quest OnDemand session. Be sure to update the "GPU" parameter to an H100 option when launching the Quest OnDemand application.

In addition, update the "Constraint" parameter to choose the "rhel8" option.

CPU Compute Nodes
The CPU compute nodes will be available for testing for users with either a Quest buy-in or General Access allocation.
Users with General Access Allocations
Batch Jobs
To participate in the RHEL8 pilot and get matched to the first available RHEL8 CPU node, users of the General Access partitions short, normal, and long will need to add the following line to their existing Slurm submission script:
#SBATCH --constraint=rhel8
Alternatively, to get matched specifically to the latest CPU architecture offering on Quest (Emerald Rapids):
- Processor: Intel(R) Xeon(R) Platinum 8592+ CPU @ 1.9GHz
- Memory: 512 GB per node (4 GB per core), DDR5 5600 MHz
add the following line to your existing Slurm submission script:
#SBATCH --constraint=quest13
Please note that if your Slurm submission script already contains a --constraint setting, you must update it to one of the lines above, as --constraint cannot be set multiple times in a submission script. Please see the summaries below for examples of modifying your script for these partitions.
Partition: short

Original Submission Script:
#SBATCH --account=p12345
#SBATCH --partition=short
#SBATCH --time=HH:MM:SS
...

Modified Submission Script:
#SBATCH --account=p12345
#SBATCH --partition=short
#SBATCH --constraint=rhel8
#SBATCH --time=HH:MM:SS
...

Partition: normal

Original Submission Script:
#SBATCH --account=p12345
#SBATCH --partition=normal
#SBATCH --time=HH:MM:SS
...

Modified Submission Script:
#SBATCH --account=p12345
#SBATCH --partition=normal
#SBATCH --constraint=rhel8
#SBATCH --time=HH:MM:SS
...

Partition: long

Original Submission Script:
#SBATCH --account=p12345
#SBATCH --partition=long
#SBATCH --time=HH:MM:SS
...

Modified Submission Script:
#SBATCH --account=p12345
#SBATCH --partition=long
#SBATCH --constraint=rhel8
#SBATCH --time=HH:MM:SS
...
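If you would like to check which feature tags (such as rhel8 or quest13) are attached to the nodes in a partition before submitting, a sketch using standard Slurm sinfo options is shown below (the partition name is illustrative):
## list nodes and their feature tags in the short partition
sinfo -p short -o "%N %f"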
Interactive Jobs
The following srun and salloc commands start an interactive session on one of the RHEL8 nodes in the short, normal, or long partition. Replace p12345 with the name of your General Access allocation and update the other parameters (--time, --mem, --ntasks-per-node, --partition) as needed. In the salloc example, replace qnode0437 with the name of the node allocated to you by salloc.
## srun example
srun --account=p12345 --partition=short --constraint=rhel8 --time=01:00:00 --mem=7G --nodes=1 --ntasks-per-node=1 --pty bash -l
## salloc example
salloc --account=p12345 --partition=short --constraint=rhel8 --time=01:00:00 --mem=7G --nodes=1 --ntasks-per-node=1
ssh qnode0437
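Once the interactive session starts, you can confirm the operating system and CPU architecture of the allocated node, for example:
## confirm the node's operating system release
cat /etc/redhat-release
## confirm the CPU model (e.g. to check for an Emerald Rapids / quest13 node)
lscpu | grep "Model name"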
Quest OnDemand
To access the RHEL8 CPU nodes via a Quest OnDemand session, be sure to update the "Constraint" parameter to "rhel8" when launching your OnDemand application.

Users with Buy-In
To participate in the RHEL8 pilot, users in buy-in allocations with access to compute resources will need to modify the partition setting in their submission script from either
#SBATCH --partition=b12345
or
#SBATCH --partition=buyin
to
#SBATCH --partition=buyin-dev
Please see the summaries below for examples of modifying your script for these partitions.

Partition: b12345

Original Submission Script:
#SBATCH --account=b12345
#SBATCH --partition=b12345
#SBATCH --time=HH:MM:SS
...

Modified Submission Script:
#SBATCH --account=b12345
#SBATCH --partition=buyin-dev
#SBATCH --time=HH:MM:SS
...

Partition: buyin

Original Submission Script:
#SBATCH --account=b12345
#SBATCH --partition=buyin
#SBATCH --time=HH:MM:SS
...

Modified Submission Script:
#SBATCH --account=b12345
#SBATCH --partition=buyin-dev
#SBATCH --time=HH:MM:SS
...
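If you have several existing submission scripts, one way to update them in place is with sed, keeping a backup copy of each file; the script name and original partition value below are placeholders.
## swap the partition in an existing script, saving a backup as my_job.sh.bak
sed -i.bak 's/--partition=b12345/--partition=buyin-dev/' my_job.sh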
Quest Analytics Nodes
While connected to the GlobalProtect VPN or on-campus WiFi, the Quest Analytics Nodes can be accessed at rstudio.quest.northwestern.edu, jupyterhub.quest.northwestern.edu, and sasstudio.quest.northwestern.edu. These connect you to RStudio Server, JupyterHub, and SAS Studio, respectively.
New Compute Node Maximum Schedulable Memory
Node Family Name | Number of CPUs | Amount of Schedulable Memory/RAM | Partitions with these Nodes
quest10 | 52 | 166G (3.2G per core) | short/normal/long/buyin
quest11 | 64 | 221G (3.5G per core) | short/normal/long/buyin
quest12 | 64 | 221G (3.5G per core) | short/normal/long/buyin
quest13 | 128 | 473G (3.7G per core) | short/normal/long/buyin
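For example, to request an entire quest13 node's schedulable memory and cores in a batch script, a sketch based on the table above would include the lines below (combine them with your usual account, partition, and time settings):
## request a full quest13 node (values from the table above)
#SBATCH --constraint=quest13
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=128
#SBATCH --mem=473G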
Submit Feedback
We invite researchers to provide us with feedback during the pilot by contacting the Research Computing and Data Services team at quest-help@northwestern.edu with the Slurm Job ID of any jobs that failed to run due to software or environmental issues.