Using VASP on Quest

Requesting Access to Use VASP on Quest

Because VASP is licensed software, we must grant users explicit access to the VASP modules on Quest. VASP provides Quest administrators with a portal where we can use the e-mail address associated with your VASP access/license to verify that you are entitled to use VASP on Quest. Please send an e-mail to quest-help@northwestern.edu stating that you would like to use VASP on Quest and including the e-mail address associated with your VASP access/license.

Please note that, depending on your license, you may have access to the VASP 5.X modules, the VASP 6.X modules, or both.

In addition, even after you are granted access, you will continue to see the following warning message whenever you load one of the VASP modules.

$ module load vasp/5.4.4-openmpi-intel
WARNING: Because VASP is licensed software, we need to give users explicit access to the VASP modules on Quest. VASP provides Quest admins with access to a portal where we can use the e-mail associated with your VASP access/license to verify that you can use VASP on Quest. Please send an e-mail to quest-help@northwestern.edu stating both that you would like to use VASP on Quest and the e-mail associated with your VASP access/license. 

Once you have received confirmation from us that your access is in place, this message can be safely ignored.

Please note that only the builds of VASP 6.X have CUDA support enabled.
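
If your license covers the VASP 6.X modules and you intend to use a CUDA-enabled build, your job also needs to request a GPU from the scheduler. A minimal sketch of the extra Slurm directives is below; the partition and GPU type here are assumptions and must be adjusted to whatever your allocation actually provides.

#SBATCH --partition=gengpu    ## assumption: substitute your GPU partition
#SBATCH --gres=gpu:a100:1     ## assumption: substitute the GPU type available to you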

Running VASP on Quest

Once you have received confirmation that you have been given access to the VASP modules on Quest, you can start running VASP simulations. Two examples of running VASP 5.4.4 on Quest are given below. The first uses the VASP 5.4.4 install available through the default set of modules; the second uses a newer build of VASP (compiled against newer OpenMPI and Intel compilers) that is available through an additional, non-default module location.

Please note that these submission scripts are written so that however many cores you request from the scheduler via

#SBATCH --ntasks=<ncores>

are automatically passed to mpirun -np through the environment variable ${SLURM_NTASKS}. For example, requesting --ntasks=8 results in mpirun -np 8 vasp_std.

Current Build
submit-vasp-5.4.4.sh
#!/bin/bash
#SBATCH --account=<pXXXXX>  ## YOUR ACCOUNT pXXXX or bXXXX
#SBATCH --partition=<partition>  ### PARTITION (buyin, short, normal, etc)
#SBATCH --job-name=vasp-openmpi-slurm
#SBATCH --ntasks=<ncores>
#SBATCH --time=01:00:00
#SBATCH --mem-per-cpu=2G
#SBATCH --constraint="[quest8|quest9|quest10|quest11]"

module purge all
module load vasp/5.4.4-openmpi-4.0.5-intel-19.0.5.281
export OMP_NUM_THREADS=1

mpirun -np ${SLURM_NTASKS} vasp_std
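
Assuming the script is saved as submit-vasp-5.4.4.sh in the directory containing your VASP inputs (INCAR, POSCAR, KPOINTS, and POTCAR), you submit it with sbatch and can monitor it with squeue (replace <netid> with your own NetID):

$ sbatch submit-vasp-5.4.4.sh
$ squeue -u <netid>
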
New Build

We are working to provide a new build of VASP 5.4.4 that uses newer OpenMPI and Intel compilers. This build can be used on Quest, but it is only available after adding an additional, non-default module location to your module search path.

submit-vasp-5.4.4.sh
#!/bin/bash
#SBATCH --account=<pXXXXX>  ## YOUR ACCOUNT pXXXX or bXXXX
#SBATCH --partition=<partition>  ### PARTITION (buyin, short, normal, etc)
#SBATCH --job-name=vasp-openmpi-slurm
#SBATCH --ntasks=<ncores>
#SBATCH --time=01:00:00
#SBATCH --mem-per-cpu=2G
#SBATCH --constraint="[quest8|quest9|quest10|quest11]"

module purge all
module load vasp/5.4.4-openmpi-intel
export OMP_NUM_THREADS=1

mpirun -np ${SLURM_NTASKS} vasp_std
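
To confirm which VASP modules are currently visible to you, and therefore which builds you can run, you can query the module system (module spider is available where Lmod is in use):

$ module avail vasp
$ module spider vasp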

How to build your own copy of VASP on Quest

You or your lab may want to maintain your own build of VASP. To help with this, for each version below we provide a Bash build script that loads the appropriate modules and a matching makefile.include for building VASP on Quest.

VASP 5.4.4

The following Bash script can be used to build VASP 5.4.4 in combination with the makefile.include below. This makefile.include will produce the following programs: vasp_gam, vasp_ncl, and vasp_std. Please note that we do not include build templates for compiling VASP with additional libraries/extensions such as beef, vtst, etc. If you need help compiling your own copy of VASP on Quest with any of these, you can reach out to us at quest-help@northwestern.edu for support.

install_vasp_5.4.4.sh
module purge                                     # start from a clean module environment
module load intel-oneapi-compilers/2021.4.0-gcc  # Intel compilers (ifort/icc/icpc)
module load intel-oneapi-mkl/2021.4.0-intel      # MKL for BLAS/LAPACK/ScaLAPACK/FFTW
module load mpi/openmpi-4.1.4-intel              # OpenMPI built against the Intel compilers
make -j 1                                        # serial build; run from the VASP source root
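
The overall procedure is sketched below; the tarball name and paths are placeholders, and the source itself must come from your own VASP license:

$ tar -xzf vasp.5.4.4.tar.gz                            # unpack the licensed source
$ cd vasp.5.4.4
$ cp /path/to/makefile.include.vasp.5.4.4 makefile.include
$ bash install_vasp_5.4.4.sh                            # loads the modules above and runs make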

Below is the makefile.include for building VASP 5.4.4 on Quest.

makefile.include.vasp.5.4.4
# Precompiler options
CPP_OPTIONS= -DHOST=\"LinuxIFC\"\
             -DMPI -DMPI_BLOCK=8000 \
             -Duse_collective \
             -DscaLAPACK \
             -DCACHE_SIZE=4000 \
             -Davoidalloc \
             -Duse_bse_te \
             -Dtbdyn \
             -Duse_shmem

CPP        = fpp -f_com=no -free -w0  $*$(FUFFIX) $*$(SUFFIX) $(CPP_OPTIONS)

FC         = mpifort
FCL        = mpifort -mkl=sequential -lstdc++

FREE       = -free -names lowercase

FFLAGS     = -assume byterecl -w -axCORE-AVX512,CORE-AVX2,AVX
OFLAG      = -O3
OFLAG_IN   = $(OFLAG)
DEBUG      = -O0

MKL_PATH   = $(MKLROOT)/lib/intel64
BLAS       =
LAPACK     =
BLACS      = -lmkl_blacs_openmpi_lp64 
SCALAPACK  = $(MKL_PATH)/libmkl_scalapack_lp64.a $(BLACS)

OBJECTS    = fftmpiw.o fftmpi_map.o fft3dlib.o fftw3d.o
INCS       =-I$(MKLROOT)/include/fftw
LLIBS      = $(SCALAPACK) $(LAPACK) $(BLAS)

OBJECTS_O1 += fftw3d.o fftmpi.o fftmpiw.o
OBJECTS_O2 += fft3dlib.o

# For what used to be vasp.5.lib
CPP_LIB    = $(CPP)
FC_LIB     = $(FC)
CC_LIB     = icc
CFLAGS_LIB = -O
FFLAGS_LIB = -O1
FREE_LIB   = $(FREE)

OBJECTS_LIB= linpack_double.o getshmem.o

# For the parser library
CXX_PARS   = icpc

LIBS       += parser
LLIBS      += -Lparser -lparser -lstdc++

# Normally no need to change this
SRCDIR     = ../../src
BINDIR     = ../../bin
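
After a successful build, the executables are placed in the bin/ directory of the VASP source tree. A quick sanity check (a sketch; the rank count is arbitrary) is:

$ ls bin/                                        # should list vasp_gam, vasp_ncl, and vasp_std
$ mpirun -np 4 /path/to/vasp.5.4.4/bin/vasp_std  # run from a directory with INCAR, POSCAR, KPOINTS, POTCAR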

VASP 6.2.X

The following Bash script can be used to build VASP 6.2.X in combination with the makefile.include below. This makefile.include will produce the following programs: vasp_gam, vasp_ncl, vasp_std, vasp_gpu_ncl, and vasp_gpu. Please note that we do not include build templates for compiling VASP with additional libraries/extensions such as beef, vtst, etc. If you need help compiling your own copy of VASP on Quest with any of these, you can reach out to us at quest-help@northwestern.edu for support.

install_vasp_6.2.X.sh
module purge                                     # start from a clean module environment
module load intel-oneapi-compilers/2021.4.0-gcc  # Intel compilers (ifort/icc/icpc)
module load cuda/11.4.0-intel                    # CUDA toolkit for the vasp_gpu ports
module load intel-oneapi-mkl/2021.4.0-intel      # MKL for BLAS/LAPACK/ScaLAPACK/FFTW
module load mpi/openmpi-4.1.4-intel              # OpenMPI built against the Intel compilers
make -j 1                                        # serial build; run from the VASP source root
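
Because this build also produces the CUDA ports (vasp_gpu and vasp_gpu_ncl), it is worth confirming that the CUDA toolkit is visible before running make (a sketch):

$ nvcc --version    # should report the CUDA 11.4 release loaded above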

Below is the makefile.include for building VASP 6.2.X on Quest.

makefile.include.vasp.6.2.X
# Precompiler options
CPP_OPTIONS= -DHOST=\"LinuxIFC\"\
             -DMPI -DMPI_BLOCK=8000 -Duse_collective \
             -DscaLAPACK \
             -DCACHE_SIZE=4000 \
             -Davoidalloc \
             -Dvasp6 \
             -Duse_bse_te \
             -Dtbdyn \
             -Dfock_dblbuf

CPP        = fpp -f_com=no -free -w0  $*$(FUFFIX) $*$(SUFFIX) $(CPP_OPTIONS)

FC         = mpifort
FCL        = mpifort -mkl=sequential

FREE       = -free -names lowercase

FFLAGS     = -assume byterecl -w -axCORE-AVX512,CORE-AVX2,AVX
OFLAG      = -O2
OFLAG_IN   = $(OFLAG)
DEBUG      = -O0

MKL_PATH   = $(MKLROOT)/lib/intel64
BLAS       =
LAPACK     =
BLACS      = -lmkl_blacs_openmpi_lp64
SCALAPACK  = $(MKL_PATH)/libmkl_scalapack_lp64.a $(BLACS)

OBJECTS    = fftmpiw.o fftmpi_map.o fft3dlib.o fftw3d.o
INCS       =-I$(MKLROOT)/include/fftw
LLIBS      = $(SCALAPACK) $(LAPACK) $(BLAS)

OBJECTS_O1 += fftw3d.o fftmpi.o fftmpiw.o
OBJECTS_O2 += fft3dlib.o

# For what used to be vasp.5.lib
CPP_LIB    = $(CPP)
FC_LIB     = $(FC)
CC_LIB     = icc
CFLAGS_LIB = -O
FFLAGS_LIB = -O1
FREE_LIB   = $(FREE)

OBJECTS_LIB= linpack_double.o getshmem.o

# For the parser library
CXX_PARS   = icpc
LLIBS      += -lstdc++

# Normally no need to change this
SRCDIR     = ../../src
BINDIR     = ../../bin

#================================================
# GPU Stuff

CPP_GPU    = -DCUDA_GPU -DRPROMU_CPROJ_OVERLAP -DUSE_PINNED_MEMORY -DCUFFT_MIN=28 -UscaLAPACK -Ufock_dblbuf

OBJECTS_GPU= fftmpiw.o fftmpi_map.o fft3dlib.o fftw3d_gpu.o fftmpiw_gpu.o

CC         = icc
CXX        = icpc
CFLAGS     = -fPIC -DADD_ -Wall -qopenmp -DMAGMA_WITH_MKL -DMAGMA_SETAFFINITY -DGPUSHMEM=300 -DHAVE_CUBLAS

# Minimal requirement is CUDA >= 10.X. For "sm_80" you need CUDA >= 11.X.
CUDA_ROOT  ?= $(CUDA_HOME)
NVCC       := $(CUDA_ROOT)/bin/nvcc -ccbin=icc -allow-unsupported-compiler
CUDA_LIB   := -L$(CUDA_ROOT)/lib64 -L$(CUDA_ROOT)/lib64/stubs -lnvToolsExt -lcudart -lcuda -lcufft -lcublas

GENCODE_ARCH    := -gencode=arch=compute_80,code=\"sm_80,compute_80\"

## For all legacy Intel MPI versions (before 2021)
#MPI_INC    = $(I_MPI_ROOT)/intel64/include/

# Include directory for the OpenMPI 4.1.4 installation used by this build
MPI_INC    = /hpc/software/spack_v17d2/spack/opt/spack/linux-rhel7-x86_64/intel-2021.4.0/openmpi-4.1.4-ztg2tque7onlwhrv7t3kplcbd5ck5lwe/include
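
Note that the MPI_INC path above is hardcoded to the OpenMPI 4.1.4 installation loaded by the build script. Should that tree ever move, one way to recover the correct include directory (a sketch; --showme:incdirs is an Open MPI wrapper-compiler flag) is:

$ mpifort --showme:incdirs    # prints the include path(s) used by the OpenMPI wrappers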

VASP CPU 6.4.X

The following Bash script can be used to build VASP 6.4.X with HDF5 enabled, in combination with the makefile.include below. This makefile.include will produce the following programs: vasp_gam, vasp_ncl, and vasp_std. Please note that we do not include build templates for compiling VASP with additional libraries/extensions such as beef, vtst, etc. If you need help compiling your own copy of VASP on Quest with any of these, you can reach out to us at quest-help@northwestern.edu for support.

install_vasp_6.4.X.sh
module purge                                     # start from a clean module environment
module load intel-oneapi-compilers/2021.4.0-gcc  # Intel compilers (ifort/icc/icpc)
module load intel-oneapi-mkl/2021.4.0-intel      # MKL for BLAS/LAPACK/ScaLAPACK/FFTW
module load mpi/openmpi-4.1.4-intel              # OpenMPI built against the Intel compilers
module load hdf5/1.10.7-openmpi-4.1.4-intel      # HDF5 for -DVASP_HDF5 support
make -j 1                                        # serial build; run from the VASP source root
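
Since this build enables HDF5 support (via -DVASP_HDF5 in the makefile.include below), you can check after compiling that the binaries actually linked against the HDF5 module (a sketch):

$ ldd bin/vasp_std | grep hdf5    # should show libhdf5_fortran from the hdf5 module above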

Below is the makefile.include for building VASP 6.4.X on Quest.

makefile.include.vasp.6.4.X
# Default precompiler options
CPP_OPTIONS = -DHOST=\"LinuxIFC\" \
              -DMPI -DMPI_BLOCK=8000 -Duse_collective \
              -DscaLAPACK \
              -DCACHE_SIZE=4000 \
              -Davoidalloc \
              -Dvasp6 \
              -Duse_bse_te \
              -Dtbdyn \
              -Dfock_dblbuf

CPP         = fpp -f_com=no -free -w0  $*$(FUFFIX) $*$(SUFFIX) $(CPP_OPTIONS)

FC          = mpif90
FCL         = mpif90

FREE        = -free -names lowercase

FFLAGS      = -assume byterecl -w

OFLAG       = -O2
OFLAG_IN    = $(OFLAG)
DEBUG       = -O0

OBJECTS     = fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o
OBJECTS_O1 += fftw3d.o fftmpi.o fftmpiw.o
OBJECTS_O2 += fft3dlib.o

# For what used to be vasp.5.lib
CPP_LIB     = $(CPP)
FC_LIB      = $(FC)
CC_LIB      = icc
CFLAGS_LIB  = -O
FFLAGS_LIB  = -O1
FREE_LIB    = $(FREE)

OBJECTS_LIB = linpack_double.o

# For the parser library
CXX_PARS    = icpc
LLIBS       = -lstdc++

##
## Customize as of this point! Of course you may change the preceding
## part of this file as well if you like, but it should rarely be
## necessary ...
##

# When compiling on the target machine itself, change this to the
# relevant target when cross-compiling for another architecture
VASP_TARGET_CPU ?= -axCORE-AVX512,CORE-AVX2,AVX
FFLAGS     += $(VASP_TARGET_CPU)
 
# Intel MKL for FFTW, BLAS, LAPACK, and scaLAPACK
# (Note: for Intel Parallel Studio's MKL use -mkl instead of -qmkl)
FCL        += -qmkl=sequential
MKL_PATH   = $(MKLROOT)/lib/intel64
BLACS      = -lmkl_blacs_openmpi_lp64
SCALAPACK  = $(MKL_PATH)/libmkl_scalapack_lp64.a $(BLACS)
INCS       =-I$(MKLROOT)/include/fftw
LLIBS      += $(SCALAPACK) 

# Use a separate scaLAPACK installation (optional but recommended in combination with OpenMPI)
# Comment out the two lines below if you want to use scaLAPACK from MKL instead
#SCALAPACK_ROOT ?= /path/to/your/scalapack/installation
#LLIBS      += -L${SCALAPACK_ROOT}/lib -lscalapack

# HDF5-support (optional but strongly recommended)
CPP_OPTIONS+= -DVASP_HDF5
HDF5_ROOT  ?= /hpc/software/spack_v17d2/spack/opt/spack/linux-rhel7-x86_64/intel-2021.4.0/hdf5-1.10.7-mvbrxowc73gzgvfjxydytlxc5j6xxixi/
LLIBS      += -L$(HDF5_ROOT)/lib -lhdf5_fortran
INCS       += -I$(HDF5_ROOT)/include

# For the VASP-2-Wannier90 interface (optional)
#CPP_OPTIONS    += -DVASP2WANNIER90
#WANNIER90_ROOT ?= /path/to/your/wannier90/installation
#LLIBS          += -L$(WANNIER90_ROOT)/lib -lwannier

# For the fftlib library (hardly any benefit in combination with MKL's FFTs)
#CPP_OPTION += -Dsysv
#FCL         = mpif90 fftlib.o -qmkl
#CXX_FFTLIB  = icpc -qopenmp -std=c++11 -DFFTLIB_USE_MKL -DFFTLIB_THREADSAFE
#INCS_FFTLIB = -I./include -I$(MKLROOT)/include/fftw
#LIBS       += fftlib

VASP GPU 6.4.X

The following Bash script can be used to build VASP 6.4.2 with GPU support enabled, in combination with the makefile.include below. This makefile.include will produce the following programs: vasp_gam, vasp_ncl, and vasp_std. Please note that we do not include build templates for compiling VASP with additional libraries/extensions such as beef, vtst, etc. If you need help compiling your own copy of VASP on Quest with any of these, you can reach out to us at quest-help@northwestern.edu for support.

install_vasp_6.4.2.sh
module purge                                     # start from a clean module environment
module load gcc/10.4.0-gcc-4.8.5                 # GCC base toolchain
module load intel-oneapi-mkl/2021.4.0-gcc-10.4.0 # MKL for BLAS/LAPACK/ScaLAPACK/FFTW
module load nvhpc/23.3-gcc-10.4.0                # NVIDIA HPC SDK (nvfortran, OpenACC, bundled MPI)
make -j 1                                        # serial build; run from the VASP source root
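
Before running make, it can help to confirm that the NVIDIA HPC SDK toolchain is the one on your path (a sketch):

$ nvfortran --version    # should report the 23.3 release loaded above
$ which mpif90           # typically resolves to the MPI bundled with the NVIDIA HPC SDK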

Below is the makefile.include for building the GPU (OpenACC) port of VASP 6.4.2 on Quest.

makefile.include.vasp.6.4.2
# Default precompiler options
CPP_OPTIONS = -DHOST=\"LinuxNV\" \
              -DMPI -DMPI_INPLACE -DMPI_BLOCK=8000 -Duse_collective \
              -DscaLAPACK \
              -DCACHE_SIZE=4000 \
              -Davoidalloc \
              -Dvasp6 \
              -Duse_bse_te \
              -Dtbdyn \
              -Dqd_emulate \
              -Dfock_dblbuf \
              -DNVTX \
              -D_OPENMP \
              -D_OPENACC \
              -DUSENCCL -DUSENCCLP2P

CPP         = nvfortran -Mpreprocess -Mfree -Mextend -E $(CPP_OPTIONS) $*$(FUFFIX)  > $*$(SUFFIX)

# N.B.: you might need to change the cuda-version here
#       to one that comes with your NVIDIA-HPC SDK
FC          = mpif90 -acc -gpu=cc80,cuda11.8 -mp
FCL         = mpif90 -acc -gpu=cc80,cuda11.8 -mp -c++libs

FREE        = -Mfree

FFLAGS      = -Mbackslash -Mlarge_arrays

OFLAG       = -fast

DEBUG       = -Mfree -O0 -traceback

OBJECTS     = fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o

LLIBS       = -cudalib=cublas,cusolver,cufft,nccl -cuda -L/usr/local/cuda/lib64 -lnvToolsExt

# Redefine the standard list of O1 and O2 objects
SOURCE_O1  := pade_fit.o minimax_dependence.o
SOURCE_O2  := pead.o

# For what used to be vasp.5.lib
CPP_LIB     = $(CPP)
FC_LIB      = nvfortran
CC_LIB      = nvc -w
CFLAGS_LIB  = -O
FFLAGS_LIB  = -O1 -Mfixed
FREE_LIB    = $(FREE)

OBJECTS_LIB = linpack_double.o

# For the parser library
CXX_PARS    = nvc++ --no_warnings

##
## Customize as of this point! Of course you may change the preceding
## part of this file as well if you like, but it should rarely be
## necessary ...
##
# When compiling on the target machine itself, change this to the
# relevant target when cross-compiling for another architecture
VASP_TARGET_CPU ?= -tp host
FFLAGS     += $(VASP_TARGET_CPU)

# Specify your NV HPC-SDK installation (mandatory)
#... first try to set it automatically
NVROOT      =$(shell which nvfortran | awk -F /compilers/bin/nvfortran '{ print $$1 }')

# If the above fails, then NVROOT needs to be set manually
#NVHPC      ?= /opt/nvidia/hpc_sdk
#NVVERSION   = 21.11
#NVROOT      = $(NVHPC)/Linux_x86_64/$(NVVERSION)

## Improves performance when using NV HPC-SDK >=21.11 and CUDA >11.2
OFLAG_IN   = -fast -Mwarperf
SOURCE_IN  := nonlr.o

# Software emulation of quadruple precision (mandatory)
QD         ?= $(NVROOT)/compilers/extras/qd
LLIBS      += -L$(QD)/lib -lqdmod -lqd
INCS       += -I$(QD)/include/qd

# Intel MKL for FFTW, BLAS, LAPACK, and scaLAPACK
MKLROOT    ?= /hpc/software/spack_v20d1/spack/opt/spack/linux-rhel7-x86_64/gcc-10.4.0/intel-oneapi-mkl-2021.4.0-a67vno2nx6nhhkow3tki7jo6g3pyvjhk/mkl/2021.4.0/
LLIBS_MKL   = -Mmkl -L$(MKLROOT)/lib/intel64 -lmkl_scalapack_lp64 -lmkl_blacs_openmpi_lp64
INCS       += -I$(MKLROOT)/include/fftw

# Use a separate scaLAPACK installation (optional but recommended in combination with OpenMPI)
# Comment out the two lines below if you want to use scaLAPACK from MKL instead
#SCALAPACK_ROOT ?= /path/to/your/scalapack/installation
#LLIBS_MKL   = -L$(SCALAPACK_ROOT)/lib -lscalapack -Mmkl

LLIBS      += $(LLIBS_MKL)

# HDF5-support (optional but strongly recommended)
#CPP_OPTIONS+= -DVASP_HDF5
#HDF5_ROOT  ?= /path/to/your/hdf5/installation
#LLIBS      += -L$(HDF5_ROOT)/lib -lhdf5_fortran
#INCS       += -I$(HDF5_ROOT)/include

# For the VASP-2-Wannier90 interface (optional)
#CPP_OPTIONS    += -DVASP2WANNIER90
#WANNIER90_ROOT ?= /path/to/your/wannier90/installation
#LLIBS          += -L$(WANNIER90_ROOT)/lib -lwannier

# For the fftlib library (hardly any benefit for the OpenACC GPU port, especially in combination with MKL's FFTs)
#CPP_OPTIONS+= -Dsysv
#FCL        += fftlib.o
#CXX_FFTLIB  = nvc++ -mp --no_warnings -std=c++11 -DFFTLIB_USE_MKL -DFFTLIB_THREADSAFE
#INCS_FFTLIB = -I./include -I$(MKLROOT)/include/fftw
#LIBS       += fftlib
#LLIBS      += -ldl
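
When running the resulting binaries, note that the OpenACC GPU port of VASP expects one MPI rank per GPU. A minimal sketch of the relevant job-script lines follows; the partition and GPU type are assumptions to be adjusted to your allocation:

#SBATCH --partition=gengpu    ## assumption: substitute your GPU partition
#SBATCH --gres=gpu:a100:1     ## one GPU requested
#SBATCH --ntasks=1            ## one MPI rank per GPU

mpirun -np ${SLURM_NTASKS} vasp_std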