Requesting Access to Use VASP on Quest
Because VASP is licensed software, we must give users explicit access to the VASP modules on Quest. VASP provides Quest admins with access to a portal where we can use the e-mail address associated with your VASP access/license to verify that you are entitled to use VASP on Quest. Please send an e-mail to quest-help@northwestern.edu stating that you would like to use VASP on Quest and including the e-mail address associated with your VASP access/license.
Please note that, depending on your license, you may have access to the VASP 5.X modules, the VASP 6.X modules, or both.
In addition, even after you are granted access to VASP, you will continue to see the following warning message when you try to use one of the VASP modules.
$ module load vasp/5.4.4-openmpi-4.0.5-intel-19.0.5.281
WARNING: VASP license is restricted. Contact quest-help for assistance
This message can be ignored once you have received confirmation from us concerning your access.
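If you would like to see which VASP modules are installed (for example, to check whether a 5.X or 6.X build is available), you can list them with module avail; the exact versions shown will change as modules are updated:
$ module avail vasp    # lists the installed VASP modules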
Please note that only the builds of VASP 6.X have CUDA support enabled.
Running VASP on Quest
Once you have received confirmation that you have been given access to the VASP modules on Quest, you can start running VASP simulations. Below are two examples of running VASP 5.4.4 on Quest. The first uses the VASP 5.4.4 install that is available through the default set of modules. The second uses a newer build of VASP (compiled against newer OpenMPI and Intel compilers) that is made available by adding a non-default module location.
Please note that these submission scripts are written so that the number of cores requested from the scheduler via #SBATCH --ntasks=<ncores> is automatically passed to mpirun -np through the environment variable ${SLURM_NTASKS}.
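For example (a minimal illustration; the value 4 is arbitrary), a job submitted with #SBATCH --ntasks=4 runs with ${SLURM_NTASKS} set to 4, so the launch line in the scripts below expands as follows:
mpirun -np ${SLURM_NTASKS} vasp_std    # expands to: mpirun -np 4 vasp_std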
Current Build
#!/bin/bash
#SBATCH --account=<pXXXXX> ## YOUR ACCOUNT pXXXXX or bXXXX
#SBATCH --partition=<partition> ### PARTITION (buyin, short, normal, etc)
#SBATCH --job-name=vasp-openmpi-slurm
#SBATCH --ntasks=<ncores>
#SBATCH --time=01:00:00
#SBATCH --mem-per-cpu=2G
#SBATCH --constraint="[quest8|quest9|quest10|quest11]"
module purge all
module load vasp/5.4.4-openmpi-4.0.5-intel-19.0.5.281
export OMP_NUM_THREADS=1
mpirun -np ${SLURM_NTASKS} vasp_std
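As a usage sketch (the file name submit_vasp.sh is hypothetical), save the script above in the directory containing your INCAR, POSCAR, KPOINTS, and POTCAR input files and submit it to the scheduler:
$ sbatch submit_vasp.sh    # submit the job
$ squeue -u $USER          # check the status of your queued and running jobs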
New Build
We are working to provide a new build of VASP 5.4.4 which uses newer OpenMPI and Intel compilers. This build can be used on Quest, but it is only available after adding a non-default module location with module use, as shown in the script below.
#!/bin/bash
#SBATCH --account=<pXXXXX> ## YOUR ACCOUNT pXXXXX or bXXXX
#SBATCH --partition=<partition> ### PARTITION (buyin, short, normal, etc)
#SBATCH --job-name=vasp-openmpi-slurm
#SBATCH --ntasks=<ncores>
#SBATCH --time=01:00:00
#SBATCH --mem-per-cpu=2G
#SBATCH --constraint="[quest8|quest9|quest10|quest11]"
module purge all
module use /hpc/software/spack_v17d2/spack/share/spack/modules/linux-rhel7-x86_64/
module load vasp/5.4.4-openmpi-intel
export OMP_NUM_THREADS=1
mpirun -np ${SLURM_NTASKS} vasp_std
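If you would like to confirm that the non-default module location is being picked up before submitting a job, an interactive check along these lines should work (module avail and which are standard commands; your output may differ):
$ module use /hpc/software/spack_v17d2/spack/share/spack/modules/linux-rhel7-x86_64/
$ module avail vasp              # should now also list vasp/5.4.4-openmpi-intel
$ module load vasp/5.4.4-openmpi-intel
$ which vasp_std                 # confirms that vasp_std from this build is on your PATH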
How to build your own copy of VASP on Quest
You may find that you or your lab wants to have your own build of VASP. To help with this, we provide a Bash build script that loads the appropriate modules for building VASP, along with an appropriate makefile.include for building VASP on Quest.
VASP 5.4.4
The following Bash script can be used to build VASP in combination with the makefile.include below. This makefile.include will produce the following programs: vasp_gam, vasp_ncl, and vasp_std. Please note that we are not including any build templates for compiling VASP with BEEF, VTST, etc. If you need help compiling your own copy of VASP on Quest with any of these additional libraries/extensions, you can reach out to us at quest-help@northwestern.edu for support.
module use /hpc/software/spack_v17d2/spack/share/spack/modules/linux-rhel7-x86_64/
module load intel-oneapi-compilers/2021.4.0-gcc
module load intel-oneapi-mkl/2021.4.0-intel
module load mpi/openmpi-4.1.4-intel
make -j 1
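For context, here is a minimal sketch of how the script above is typically used. The directory and file names are illustrative, and you must first obtain and unpack the VASP 5.4.4 source through your own VASP license:
$ cd vasp.5.4.4                     # unpacked VASP 5.4.4 source tree (illustrative name)
$ cp /path/to/makefile.include .    # the makefile.include shown below (illustrative path)
$ source build_vasp.sh              # the module and make commands above, saved to a file (hypothetical name)
$ ls bin/                           # vasp_gam  vasp_ncl  vasp_std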
Below is the makefile.include for building VASP 5.4.4 on Quest.
# Precompiler options
CPP_OPTIONS= -DHOST=\"LinuxIFC\"\
-DMPI -DMPI_BLOCK=8000 \
-Duse_collective \
-DscaLAPACK \
-DCACHE_SIZE=4000 \
-Davoidalloc \
-Duse_bse_te \
-Dtbdyn \
-Duse_shmem
CPP = fpp -f_com=no -free -w0 $*$(FUFFIX) $*$(SUFFIX) $(CPP_OPTIONS)
FC = mpifort
FCL = mpifort -mkl=sequential -lstdc++
FREE = -free -names lowercase
FFLAGS = -assume byterecl -w -axCORE-AVX512,CORE-AVX2,AVX
OFLAG = -O3
OFLAG_IN = $(OFLAG)
DEBUG = -O0
MKL_PATH = $(MKLROOT)/lib/intel64
BLAS =
LAPACK =
BLACS = -lmkl_blacs_openmpi_lp64
SCALAPACK = $(MKL_PATH)/libmkl_scalapack_lp64.a $(BLACS)
OBJECTS = fftmpiw.o fftmpi_map.o fft3dlib.o fftw3d.o
INCS =-I$(MKLROOT)/include/fftw
LLIBS = $(SCALAPACK) $(LAPACK) $(BLAS)
OBJECTS_O1 += fftw3d.o fftmpi.o fftmpiw.o
OBJECTS_O2 += fft3dlib.o
# For what used to be vasp.5.lib
CPP_LIB = $(CPP)
FC_LIB = $(FC)
CC_LIB = icc
CFLAGS_LIB = -O
FFLAGS_LIB = -O1
FREE_LIB = $(FREE)
OBJECTS_LIB= linpack_double.o getshmem.o
# For the parser library
CXX_PARS = icpc
LIBS += parser
LLIBS += -Lparser -lparser -lstdc++
# Normally no need to change this
SRCDIR = ../../src
BINDIR = ../../bin
VASP 6.2.X
The following Bash script can be used to build VASP in combination with the makefile.include below. This makefile.include will produce the following programs: vasp_gam, vasp_ncl, vasp_std, vasp_gpu_ncl, and vasp_gpu. Please note that we are not including any build templates for compiling VASP with BEEF, VTST, etc. If you need help compiling your own copy of VASP on Quest with any of these additional libraries/extensions, you can reach out to us at quest-help@northwestern.edu for support.
module use /hpc/software/spack_v17d2/spack/share/spack/modules/linux-rhel7-x86_64/
module load intel-oneapi-compilers/2021.4.0-gcc
module load cuda/11.4.0-intel
module load intel-oneapi-mkl/2021.4.0-intel
module load mpi/openmpi-4.1.4-intel
make -j 1
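The workflow mirrors the VASP 5.4.4 build sketch above; only the source tree and the loaded modules differ (note the additional cuda module). The directory and file names are again illustrative:
$ cd vasp.6.2.1                     # unpacked VASP 6.2.X source tree (illustrative name)
$ cp /path/to/makefile.include .    # the makefile.include shown below (illustrative path)
$ source build_vasp_6.sh            # the module and make commands above, saved to a file (hypothetical name)
$ ls bin/                           # the built executables, including vasp_gpu and vasp_gpu_ncl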
Below is the makefile.include for building VASP 6.2.X on Quest.
# Precompiler options
CPP_OPTIONS= -DHOST=\"LinuxIFC\"\
-DMPI -DMPI_BLOCK=8000 -Duse_collective \
-DscaLAPACK \
-DCACHE_SIZE=4000 \
-Davoidalloc \
-Dvasp6 \
-Duse_bse_te \
-Dtbdyn \
-Dfock_dblbuf
CPP = fpp -f_com=no -free -w0 $*$(FUFFIX) $*$(SUFFIX) $(CPP_OPTIONS)
FC = mpifort
FCL = mpifort -mkl=sequential
FREE = -free -names lowercase
FFLAGS = -assume byterecl -w -axCORE-AVX512,CORE-AVX2,AVX
OFLAG = -O2
OFLAG_IN = $(OFLAG)
DEBUG = -O0
MKL_PATH = $(MKLROOT)/lib/intel64
BLAS =
LAPACK =
BLACS = -lmkl_blacs_openmpi_lp64
SCALAPACK = $(MKL_PATH)/libmkl_scalapack_lp64.a $(BLACS)
OBJECTS = fftmpiw.o fftmpi_map.o fft3dlib.o fftw3d.o
INCS =-I$(MKLROOT)/include/fftw
LLIBS = $(SCALAPACK) $(LAPACK) $(BLAS)
OBJECTS_O1 += fftw3d.o fftmpi.o fftmpiw.o
OBJECTS_O2 += fft3dlib.o
# For what used to be vasp.5.lib
CPP_LIB = $(CPP)
FC_LIB = $(FC)
CC_LIB = icc
CFLAGS_LIB = -O
FFLAGS_LIB = -O1
FREE_LIB = $(FREE)
OBJECTS_LIB= linpack_double.o getshmem.o
# For the parser library
CXX_PARS = icpc
LLIBS += -lstdc++
# Normally no need to change this
SRCDIR = ../../src
BINDIR = ../../bin
#================================================
# GPU Stuff
CPP_GPU = -DCUDA_GPU -DRPROMU_CPROJ_OVERLAP -DUSE_PINNED_MEMORY -DCUFFT_MIN=28 -UscaLAPACK -Ufock_dblbuf
OBJECTS_GPU= fftmpiw.o fftmpi_map.o fft3dlib.o fftw3d_gpu.o fftmpiw_gpu.o
CC = icc
CXX = icpc
CFLAGS = -fPIC -DADD_ -Wall -qopenmp -DMAGMA_WITH_MKL -DMAGMA_SETAFFINITY -DGPUSHMEM=300 -DHAVE_CUBLAS
# Minimal requirement is CUDA >= 10.X. For "sm_80" you need CUDA >= 11.X.
CUDA_ROOT ?= $(CUDA_HOME)
NVCC := $(CUDA_ROOT)/bin/nvcc -ccbin=icc -allow-unsupported-compiler
CUDA_LIB := -L$(CUDA_ROOT)/lib64 -L$(CUDA_ROOT)/lib64/stubs -lnvToolsExt -lcudart -lcuda -lcufft -lcublas
GENCODE_ARCH := -gencode=arch=compute_80,code=\"sm_80,compute_80\"
## For all legacy Intel MPI versions (before 2021)
#MPI_INC = $(I_MPI_ROOT)/intel64/include/
# On Quest, point MPI_INC at the include directory of the OpenMPI 4.1.4 build loaded via mpi/openmpi-4.1.4-intel
MPI_INC = /hpc/software/spack_v17d2/spack/opt/spack/linux-rhel7-x86_64/intel-2021.4.0/openmpi-4.1.4-ztg2tque7onlwhrv7t3kplcbd5ck5lwe/include
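If you build the CUDA-enabled executables and want to run vasp_gpu, the CPU submission scripts above need GPU resources added. Below is a minimal, hypothetical sketch; the partition name and the --gres specification are placeholders that depend on which GPU resources your allocation can access (contact quest-help@northwestern.edu if you are unsure):
#!/bin/bash
#SBATCH --account=<pXXXXX> ## YOUR ACCOUNT pXXXXX or bXXXX
#SBATCH --partition=<gpu-partition> ### a GPU partition available to your allocation (placeholder)
#SBATCH --gres=gpu:1 ### request one GPU; adjust the type/count for your allocation
#SBATCH --job-name=vasp-gpu
#SBATCH --ntasks=1
#SBATCH --time=01:00:00
#SBATCH --mem-per-cpu=2G
module purge all
export OMP_NUM_THREADS=1
# load your own VASP 6.2.X GPU build (or a module providing it) before launching
mpirun -np ${SLURM_NTASKS} vasp_gpu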