[mvapich-discuss] GPU compute capability issue when compiling MVAPICH2 with CUDA
Davide Vanzo
vanzod at accre.vanderbilt.edu
Wed Mar 9 11:27:08 EST 2016
Hi all,
I'm compiling MVAPICH2 2.1 with CUDA support. Here are the configure
flags:
./configure --with-device=ch3:mrail \
--with-rdma=gen2 \
--with-ib-include=/usr/include/infiniband \
--with-ib-libpath=/usr/lib64 \
--enable-hwloc \
--enable-fortran=yes \
--enable-cxx \
--with-pm=slurm \
--with-slurm=/usr/scheduler/slurm \
--with-pmi=pmi2 \
--enable-cuda \
--with-cuda=/usr/local/cuda/x86_64/7.5 \
--prefix=${PREFIX}
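For reference, the GPUs on these nodes report compute capability 5.2 (i.e. sm_52).
A quick cudaGetDeviceProperties check along the lines of the minimal sketch below
is enough to verify that (illustrative only, not part of the MVAPICH2 build):

/* check_cc.cu - print the compute capability of each visible GPU
   (sketch; build with: nvcc check_cc.cu -o check_cc) */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        fprintf(stderr, "no CUDA devices found\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        /* prop.major / prop.minor hold the compute capability, e.g. 5.2 -> sm_52 */
        printf("GPU %d: %s (compute capability %d.%d)\n",
               i, prop.name, prop.major, prop.minor);
    }
    return 0;
}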
The problem is that the configure step generates the Makefile with the
following nvcc flags:
NVCFLAGS = -cuda -arch sm_13 -maxrregcount 32
even though the GPUs have compute capability 5.2. Since sm_13 is no longer
supported by nvcc, the build fails with the following error:
nvcc -cuda -arch sm_13 -maxrregcount 32
  -I/usr/local/cuda/x86_64/7.0/include -I/usr/include/infiniband
  -I/home/vanzod/Building_zone/MVAPICH/build/src/mpl/include
  -I/home/vanzod/Building_zone/MVAPICH/source/src/mpl/include
  -I/home/vanzod/Building_zone/MVAPICH/source/src/openpa/src
  -I/home/vanzod/Building_zone/MVAPICH/build/src/openpa/src -D_REENTRANT
  -I/home/vanzod/Building_zone/MVAPICH/build/src/mpi/romio/include
  -I/usr/scheduler/slurm/include -I/usr/include/infiniband
  --output-file src/mpid/ch3/channels/mrail/src/cuda/pack_unpack.cpp
  ../source/src/mpid/ch3/channels/mrail/src/cuda/pack_unpack.cu
nvcc fatal : Value 'sm_13' is not defined for option 'gpu-architecture'
make[2]: *** [src/mpid/ch3/channels/mrail/src/cuda/pack_unpack.cpp] Error 1
make[2]: Leaving directory `/gpfs0/home/vanzod/Building_zone/MVAPICH/build'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/gpfs0/home/vanzod/Building_zone/MVAPICH/build'
make: *** [all] Error 2
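I can probably work around this for now by forcing a newer architecture on the
make command line, something like the untested sketch below (assuming the
generated Makefile honors a command-line override of NVCFLAGS):

make NVCFLAGS="-cuda -arch sm_52 -maxrregcount 32"

But I would rather understand why configure picks sm_13 in the first place.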
What am I missing?
Thank you
Davide