[mvapich-discuss] mvapich2 with and without IB

Jonathan Perkins perkinjo at cse.ohio-state.edu
Thu Apr 14 18:58:16 EDT 2011


Noam:
I suggest trying to use Nemesis with both the tcp and ib modules compiled.

$ ./configure --with-device=ch3:nemesis:ib,tcp
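
(The build itself then follows the usual MPICH-style steps; nothing MVAPICH2-specific
is needed beyond the configure line above:)

$ make
$ make install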

Run using InfiniBand
$ mpiexec -f hosts -n 2 ./a.out

Run using TCP/IP (or shared memory within a node)
$ MPICH_NEMESIS_NETMOD=tcp mpiexec -f hosts -n 2 ./osu_latency
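
For your single-node shared-memory case, a run confined to one of the non-IB nodes
could look something like the sketch below (the hostfile name, hostname, and core
count are placeholders).  Since all ranks land on the same node, Nemesis uses shared
memory for communication and tcp only serves as the fallback netmod:

$ cat hosts.smp
node-noib-01:32
$ MPICH_NEMESIS_NETMOD=tcp mpiexec -f hosts.smp -n 32 ./a.out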

For more information, please see:
http://mvapich.cse.ohio-state.edu/support/user_guide_mvapich2-1.6.html#x1-190004.10

I hope you find this useful.

On Thu, Apr 14, 2011 at 4:49 PM, Noam Bernstein
<noam.bernstein at nrl.navy.mil> wrote:
> I have a cluster with some nodes with InfiniBand, and others without.  On the
> nodes without InfiniBand, I plan to run on just a single node with shared memory
> (they're 32-core machines).  I haven't been able to get this to work with an mvapich2
> installation configured with a plain ./configure that doesn't specify a transport,
> because configure finds the IB libraries on the head node and compiles in IB support,
> which then fails on the non-IB nodes.
>
> 1. Is there any way to support this with a single mvapich2 (1.6) build?
> 2. If not, what's the best way to run ./configure when all I care about is on-node
> shared-memory transport?
>
>                                                                                                thanks,
>                                                                                                Noam Bernstein



-- 
Jonathan Perkins
http://www.cse.ohio-state.edu/~perkinjo


