[mvapich-discuss] MVAPICH2-GDR paths configuration

Davide Vanzo vanzod at accre.vanderbilt.edu
Fri Apr 22 12:29:04 EDT 2016


Yes, that's what I meant. However, it may not be relevant, since the
value of F77FLAGS (which is what mpichversion reports) does not seem
to affect the mpif77 wrapper. Can you confirm?
In fact, here is what I get when I set "Show=echo" in the wrapper:
gfortran hello_mpi.f
-I/usr/local/mvapich2-gdr/2.2b/gcc/4.4.7/x86_64/roce/cuda/7.5/include
-I/usr/local/mvapich2-gdr/2.2b/gcc/4.4.7/x86_64/roce/cuda/7.5/include
-L/usr/local/mvapich2-gdr/2.2b/gcc/4.4.7/x86_64/roce/cuda/7.5/lib64
-lmpifort -Wl,-rpath
-Wl,/usr/local/mvapich2-gdr/2.2b/gcc/4.4.7/x86_64/roce/cuda/7.5/lib64
-Wl,--enable-new-dtags -lmpi -L/usr/scheduler/slurm/lib -lpmi
One more thing: why do you link against the PMI library instead of
PMI2 when running under SLURM?
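For reference, my understanding of the MVAPICH2 build options (so
treat this as a guess on my side rather than a statement about your
RPMs) is that a SLURM/PMI2 build is configured along these lines:

./configure --with-pm=slurm --with-pmi=pmi2

so that the wrappers would link -lpmi2 from the SLURM lib directory
(here /usr/scheduler/slurm/lib) and jobs would be launched with
"srun --mpi=pmi2".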
Davide
On Fri, 2016-04-22 at 15:45 +0000, Jonathan Perkins wrote:
> Hi.  This should point to where the mvapich2-gdr package is
> installed.  I think that with your patch (perhaps slightly modified)
> this could also be updated via a runtime variable or conf file as
> well.  Am I understanding the issue correctly?
> 
> > On Fri, Apr 22, 2016 at 11:10 AM Davide Vanzo <vanzod at accre.vanderbilt.edu> wrote:
> > Another thing I noticed is that, when you built it, you hard-coded
> > the include path in the F77 variable. From mpichversion:
> > 
> > MVAPICH2 F77: 	gfortran -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2
> > -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64
> > -mtune=generic -I/opt/mvapich2/gdr/2.2/cuda7.5/gnu/lib64/gfortran/modules  -O2
> > 
> > I have not used the F77 compiler myself, but I think it may create
> > some headaches for users who want to use it with a locally
> > installed copy.
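> > If the wrapper picked up an overridable prefix, I imagine the
> > module path could be derived from it instead of being hard-coded,
> > e.g. (only a sketch on my side, the variable names are placeholders):
> > 
> > # hypothetical snippet for the Fortran wrapper, reusing an MV2_PATH override
> > modulesdir=${MV2_PATH:-/opt/mvapich2/gdr/2.2/cuda7.5/gnu}/lib64/gfortran/modules
> > FCFLAGS="$FCFLAGS -I$modulesdir"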
> > 
> > Davide
> > 
> > 
> > On Thu, 2016-04-21 at 18:10 +0000, Jonathan Perkins wrote:
> > > Hi again.  Thanks for your explanation.
> > > 
> > > I just double-checked our RPMs and I'm not seeing the second
> > > -lmpi in the line that you're referring to.  Can you point me to
> > > the download link of the RPM that contains a wrapper script like
> > > this?
> > > 
> > > On Thu, Apr 21, 2016 at 1:48 PM Davide Vanzo <vanzod at accre.vanderbilt.edu> wrote:
> > > > Hi Jonathan,
> > > > the problem is that in our specific case, where software is
> > > > installed on a shared filesystem, installing via rpm or yum is
> > > > not possible, since the yum and rpm databases on the cluster
> > > > nodes would then differ from the one on the node where the
> > > > package was installed. So for us it is cleaner to crack open
> > > > the rpm and move its contents manually to the shared path. In
> > > > addition, even assuming we used rpm, without the rpm specfile
> > > > it would be hard for us to figure out what the rpm does.
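> > > > For reference, by "open the rpm" I mean a plain extraction of
> > > > its payload along the lines of (file name and destination below
> > > > are only placeholders):
> > > > 
> > > > rpm2cpio mvapich2-gdr-*.rpm | cpio -idmv
> > > > cp -r opt/mvapich2/gdr/2.2/cuda7.5/gnu /shared/path/mvapich2-gdr
> > > > 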
> > > > Anyway, I modified my mpi* scripts as follows:
> > > > 
> > > > # Directory locations: Fixed for any MPI implementation.
> > > > # Set from the directory arguments to configure (e.g., --prefix=/usr/local)
> > > > if [ ! -z ${MV2_PATH+x} ]; then
> > > >     prefix=$MV2_PATH
> > > >     exec_prefix=$MV2_PATH
> > > >     sysconfdir=$MV2_PATH/etc
> > > >     includedir=$MV2_PATH/include
> > > >     libdir=$MV2_PATH/lib64
> > > > else
> > > >     prefix=/opt/mvapich2/gdr/2.2/cuda7.5/gnu
> > > >     exec_prefix=/opt/mvapich2/gdr/2.2/cuda7.5/gnu
> > > >     sysconfdir=/opt/mvapich2/gdr/2.2/cuda7.5/gnu/etc
> > > >     includedir=/opt/mvapich2/gdr/2.2/cuda7.5/gnu/include
> > > >     libdir=/opt/mvapich2/gdr/2.2/cuda7.5/gnu/lib64
> > > > fi
> > > > 
> > > > if [ ! -z ${CUDA_HOME+x} ]; then
> > > >     cudadir=$CUDA_HOME
> > > > else
> > > >     cudadir=/usr/local/cuda-7.5
> > > > fi
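> > > > 
> > > > With this change a user should be able to point the wrappers at
> > > > a local copy without editing them, e.g. (paths below are purely
> > > > illustrative):
> > > > 
> > > > export MV2_PATH=/usr/local/mvapich2-gdr/2.2b/gcc/4.4.7/x86_64/roce/cuda/7.5
> > > > export CUDA_HOME=/usr/local/cuda-7.5
> > > > mpif77 hello_mpi.f -o hello_mpi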
> > > > 
> > > > There is though another problem. I cannot figure out why in the
> > > > following line the "-lmpi" flag appears twice:
> > > > 
> > > > $Show $CC -I${cudadir}/include  $PROFILE_INCPATHS -O2 -g -pipe
> > > > -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector
> > > > --param=ssp-buffer-size=4 -m64 -mtune=generic   "${allargs[@]}"
> > > > -I$includedir \
> > > > $ITAC_OPTIONS -L$libdir $PROFILE_PRELIB $PROFILE_FOO
> > > > $rpath_flags -lmpi $PROFILE_POSTLIB -lpmi
> > > > 
> > > > while on other MVAPICH2 releases it doesn't. In fact, even with
> > > > the correct paths it still fails with "cannot find -lmpi"
> > > > unless I remove one of the two.
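> > > > (If it helps to reproduce, a plain grep over the extracted
> > > > wrappers, e.g. grep -n -- '-lmpi' /opt/mvapich2/gdr/2.2/cuda7.5/gnu/bin/mpi*,
> > > > shows which wrappers carry the flag and how many times.)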
> > > > 
> > > > Davide
> > > > 
> > > > On Thu, 2016-04-21 at 16:44 +0000, Jonathan Perkins wrote:
> > > > > Hello Davide.  This sounds like a good feature that we can
> > > > > add in a future release.
> > > > > 
> > > > > Just in case you were not aware, I do want to point out that
> > > > > the RPMs are relocatable, and a post snippet should handle
> > > > > these modifications for you if you are able to install using
> > > > > the RPM method.  The request that you brought up will work
> > > > > for users who are unable to use `rpm --prefix ...' when
> > > > > installing the RPM.
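> > > > > For example, for a relocatable package something like
> > > > > 
> > > > > rpm --install --prefix /new/install/path mvapich2-gdr-*.rpm
> > > > > 
> > > > > (package file name and prefix are placeholders) should let the
> > > > > post snippet rewrite the paths in the wrapper scripts for you.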
> > > > > 
> > > > > On Thu, Apr 21, 2016 at 11:44 AM Davide Vanzo <vanzod at accre.vanderbilt.edu> wrote:
> > > > > > Dear developers,
> > > > > > when installing MVAPICH2-GDR in a non-default location by
> > > > > > cracking open the rpm package, the prefix, exec_prefix,
> > > > > > sysconfdir, includedir, libdir and CUDA paths in the mpi*
> > > > > > scripts have to be manually modified in order to point to
> > > > > > the new install path. Since I could not find any way to
> > > > > > override them using environment variables at runtime, it
> > > > > > would be nice if the hardcoded paths were replaced by
> > > > > > variables sourced from a single configuration file.
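> > > > > > As a rough sketch of what I have in mind (the file name and
> > > > > > variable names below are only placeholders), each wrapper
> > > > > > could start with something like:
> > > > > > 
> > > > > > # hypothetical: pick up path overrides from a single config file
> > > > > > conf=${MV2_CONF:-/etc/mvapich2-gdr.conf}
> > > > > > [ -r "$conf" ] && . "$conf"
> > > > > > prefix=${prefix:-/opt/mvapich2/gdr/2.2/cuda7.5/gnu}
> > > > > > libdir=${libdir:-$prefix/lib64}
> > > > > > 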
> > > > > > Thoughts?
> > > > > > 
> > > > > > Davide
> > > > > > 
> > > > > > -- 
> > > > > > Davide Vanzo, PhD
> > > > > > Application Developer
> > > > > > Advanced Computing Center for Research and Education
> > > > > > (ACCRE)
> > > > > > Vanderbilt University - Hill Center 201
> > > > > > www.accre.vanderbilt.edu
> > > > > > _______________________________________________
> > > > > > mvapich-discuss mailing list
> > > > > > mvapich-discuss at cse.ohio-state.edu
> > > > > > http://mailman.cse.ohio-state.edu/mailman/listinfo/mvapich-discuss
> > > > > > 