[mvapich-discuss] MVAPICH2-GDR paths configuration

Jonathan Perkins perkinjo at cse.ohio-state.edu
Thu Apr 21 14:10:53 EDT 2016


Hi again.  Thanks for your explanation.

I just double-checked our RPMs and I'm not seeing the second -lmpi in the
line that you're referring to.  Can you point me to a link for the RPM
that unpacks to a wrapper script like this?

On Thu, Apr 21, 2016 at 1:48 PM Davide Vanzo <vanzod at accre.vanderbilt.edu>
wrote:

> Hi Jonathan,
> the problem is that in our specific case, where software is installed on a
> shared filesystem, installing via rpm or yum is not possible: the yum and
> rpm databases on the cluster nodes would then differ from the one on the
> node where the package was installed. So for us it is cleaner to crack
> open the rpm and move its contents manually into the shared path. In
> addition, even if we did use rpm, without the rpm specfile it would be
> hard for us to figure out what the rpm does.
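>
> As a side note, the extraction itself is simple; a minimal sketch of what
> we do (the RPM filename and shared target path below are illustrative):
>
> # Unpack the RPM payload without touching the rpm database:
> rpm2cpio mvapich2-gdr-2.2.rpm | cpio -idmv
> # Move the extracted tree into the shared path:
> cp -a opt/mvapich2/gdr/2.2/cuda7.5/gnu /shared/sw/mvapich2/gdr/2.2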
> Anyway, I modified my mpi* scripts as follows:
>
> # Directory locations: fixed for any MPI implementation.
> # Set from the directory arguments to configure (e.g., --prefix=/usr/local),
> # unless MV2_PATH is set in the environment.
> if [ -n "${MV2_PATH+x}" ]; then
>     prefix=$MV2_PATH
>     exec_prefix=$MV2_PATH
>     sysconfdir=$MV2_PATH/etc
>     includedir=$MV2_PATH/include
>     libdir=$MV2_PATH/lib64
> else
>     prefix=/opt/mvapich2/gdr/2.2/cuda7.5/gnu
>     exec_prefix=/opt/mvapich2/gdr/2.2/cuda7.5/gnu
>     sysconfdir=/opt/mvapich2/gdr/2.2/cuda7.5/gnu/etc
>     includedir=/opt/mvapich2/gdr/2.2/cuda7.5/gnu/include
>     libdir=/opt/mvapich2/gdr/2.2/cuda7.5/gnu/lib64
> fi
>
> if [ -n "${CUDA_HOME+x}" ]; then
>     cudadir=$CUDA_HOME
> else
>     cudadir=/usr/local/cuda-7.5
> fi
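>
> With this change the paths can be overridden per invocation without
> editing the scripts again, e.g. (the shared paths are illustrative):
>
> MV2_PATH=/shared/sw/mvapich2/gdr/2.2 CUDA_HOME=/shared/sw/cuda-7.5 \
>     mpicc hello.c -o hello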
>
> There is, though, another problem: I cannot figure out why the "-lmpi"
> flag appears twice in the following line:
>
> $Show $CC -I${cudadir}/include $PROFILE_INCPATHS -O2 -g -pipe -Wall \
>     -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector \
>     --param=ssp-buffer-size=4 -m64 -mtune=generic "${allargs[@]}" -I$includedir \
>     $ITAC_OPTIONS -L$libdir $PROFILE_PRELIB $PROFILE_FOO $rpath_flags -lmpi \
>     $PROFILE_POSTLIB -lpmi
>
> while on other MVAPICH2 releases it appears only once. In fact, even with
> the correct paths the link still fails with "cannot find -lmpi" unless I
> remove one of the two.
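>
> (The full command the wrapper would run can be inspected without actually
> compiling anything via the MPICH-style show mode, e.g.:
>
> mpicc -show hello.c
>
> which makes the duplicated flag easy to spot.)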
>
> Davide
>
> On Thu, 2016-04-21 at 16:44 +0000, Jonathan Perkins wrote:
>
> Hello Davide.  This sounds like a good feature that we can add in a future
> release.
>
> Just in case you were not aware, I do want to point out that the RPMs are
> relocatable, and a %post snippet should handle these modifications for you
> if you are able to install using the RPM method.  The feature you brought
> up will help users who are unable to use `rpm --prefix ...' when
> installing the RPM.
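>
> For example, something along these lines should work for a shared install
> tree (the filename and prefix below are illustrative):
>
> rpm --install --prefix /shared/opt/mvapich2-gdr mvapich2-gdr-2.2-*.rpm
>
> With a relocatable package, rpm places the files under the given prefix
> and the %post scriptlet can then fix up the paths in the mpi* wrapper
> scripts accordingly.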
>
> On Thu, Apr 21, 2016 at 11:44 AM Davide Vanzo <vanzod at accre.vanderbilt.edu>
> wrote:
>
> Dear developers,
> when installing MVAPICH2-GDR in a non-default location by cracking open
> the rpm package, the prefix, exec_prefix, sysconfdir, includedir, libdir
> and CUDA paths in the mpi* scripts have to be modified manually to point
> to the new install path. Since I could not find any way to override them
> with environment variables at runtime, it would be nice if the hardcoded
> paths were replaced by variables sourced from a single configuration
> file. Thoughts?
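>
> As a sketch of what I have in mind (the file location and names are just
> a suggestion), each mpi* script could start with something like:
>
> # Source site-wide path overrides if a config file exists:
> conf="$(dirname "$0")/../etc/mvapich2-paths.conf"
> [ -r "$conf" ] && . "$conf"
>
> with that single file defining prefix, exec_prefix, sysconfdir,
> includedir, libdir and cudadir in one place.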
>
> Davide
>
> --
> Davide Vanzo, PhD
> Application Developer
> Advanced Computing Center for Research and Education (ACCRE)
> Vanderbilt University - Hill Center 201
> www.accre.vanderbilt.edu
> _______________________________________________
> mvapich-discuss mailing list
> mvapich-discuss at cse.ohio-state.edu
> http://mailman.cse.ohio-state.edu/mailman/listinfo/mvapich-discuss
>
>