[mvapich-discuss] Trouble building MVAPICH2-1.0.3 with any compiler other than gcc

Dhabaleswar Panda panda at cse.ohio-state.edu
Thu Jul 3 09:12:05 EDT 2008


Hi David,

You might have noticed that we made the MVAPICH2 1.2RC1 release last
night. It has multiple start-up schemes, including the traditional
MPD-based scheme and a new scalable mpirun_rsh-based scheme (similar
to MVAPICH). We have verified that the MPD-based startup works with
TotalView for all compilers. Complete TotalView support with the new
mpirun_rsh-based scheme is not there yet; we are working on it and
plan to have it in the final release.

You can check the MPD-based startup with TotalView for all compilers
in this release and let us know whether it works from your point of
view.

Thanks,

DK

On Tue, 1 Jul 2008, David Gunter wrote:

> I have run into this problem previously with the PGI compilers and was
> once able to work around it; however, it seems to have reared its ugly
> head again and I'm hoping someone on the list knows of a solution.
>
> The problem is that we need to build MVAPICH2 using the Intel,
> PathScale, and PGI compilers in addition to the GCC compilers.  Even
> though the documentation states that MVAPICH2 has been tested with
> these other compilers, I suspect those tests were not done with
> TotalView support in mind.
>
> What happens during the build is that src/pm/mpd/mtv_setup.py is
> invoked.  This causes the Python Distutils to try to create a
> TotalView module, but Distutils only knows how to supply flags for
> the GCC compilers.
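>
> (For reference, and as a sketch rather than anything mtv_setup.py
> itself provides: on Unix, Distutils' customize_compiler() picks up
> CC, LDSHARED and CFLAGS from the environment, overriding the gcc
> settings recorded in Python's Makefile.  Something along these lines
> might redirect the module build to PGI; the compiler names, the
> -noswitcherror flag, and the module/source names below are just
> placeholders:
>
>     import os
>     from distutils.core import setup, Extension
>
>     # Point Distutils at PGI instead of the gcc baked into Python's
>     # Makefile; customize_compiler() reads these on Unix.
>     os.environ['CC'] = 'pgcc'
>     os.environ['LDSHARED'] = 'pgcc -shared'
>     # -noswitcherror asks PGI to warn, not fail, on gcc-only switches.
>     os.environ['CFLAGS'] = '-fPIC -noswitcherror'
>
>     # Placeholder extension; the real names live in mtv_setup.py.
>     setup(name='mtv_example',
>           ext_modules=[Extension('mtv_example',
>                                  sources=['mtv_example.c'])])
>
> Exporting the same variables in the shell before the MVAPICH2 build
> invokes mtv_setup.py should have the same effect.)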
>
> I have found switches for PGI and PathScale that make them ignore
> "invalid" flags, but any code compiled with the resulting build does
> nothing but segfault.  I have yet to get Intel to compile the source
> code.
>
> This leaves us with two options: give up on MVAPICH2 in favor of
> Open MPI, which means having only one MPI implementation on a system
> where we'd prefer to have two, or give up on TotalView support -
> which is not going to fly with our user base.
>
> Does anyone know enough about Distutils to work around this problem?
>
> -david
> --
> David Gunter
> HPC-3: Parallel Tools Team
> Los Alamos National Laboratory
>


