[mvapich-discuss] Enable IBcast with HW multicast

Dovis Alessandro adovis at student.ethz.ch
Mon Jun 8 12:32:38 EDT 2015


Quick update: I have fixed the issue by:
- upgrading libibmad to a newer version;
- using Bcast instead of IBcast.

A few questions:
- will IBcast be able to use HW multicast in a future release?
- for 3-4 nodes, the bandwidth of the HW-multicast-based Bcast is much lower than that of the SW-based Bcast. I am measuring bandwidth with a variation of osu_bw that uses Bcast instead of Isend/Irecv (a minimal sketch of what I mean follows this list). Is this behaviour expected? Why?
- why does the bandwidth decrease when more receiving nodes are added, even in the HW-based solution?
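
For reference, the measurement loop is roughly of this shape (only a sketch, not the actual benchmark code; the message size, window length, and iteration count below are placeholders, not the values used in my runs):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

/* osu_bw-style bandwidth loop, with MPI_Bcast in place of the
 * Isend/Irecv window used by the original benchmark. */
int main(int argc, char **argv)
{
    int rank, size;
    const int msg_size = 1 << 20;      /* placeholder: 1 MiB per broadcast */
    const int window = 64;             /* placeholder: broadcasts per timed window */
    const int iterations = 100;        /* placeholder: timed windows */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    char *buf = malloc(msg_size);

    MPI_Barrier(MPI_COMM_WORLD);
    double start = MPI_Wtime();
    for (int i = 0; i < iterations; i++) {
        for (int w = 0; w < window; w++)
            MPI_Bcast(buf, msg_size, MPI_CHAR, 0, MPI_COMM_WORLD);
    }
    double elapsed = MPI_Wtime() - start;

    if (rank == 0) {
        double mbytes = (double)msg_size * window * iterations / 1e6;
        printf("Bcast bandwidth: %.2f MB/s over %d ranks\n",
               mbytes / elapsed, size);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}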

Thanks
________________________________________
From: Dovis  Alessandro
Sent: Monday, June 08, 2015 1:43 PM
To: mvapich-discuss at cse.ohio-state.edu
Subject: Enable IBcast with HW multicast

Hello,

we are using a Mellanox switch with OpenSM 3.3.13 installed (on the switch); the machines have libibmad-dev installed, and the permissions on umad0 look like:
crw-rw-rw- 1 root root 231, 0 Jun  3 16:57 /dev/infiniband/umad0
and the configuration command (for mvapich2-2.1) is:
./configure --prefix=/opt/mvapich2-2.1/ --disable-fortran --disable-fc --disable-f77 --enable-threads=multiple

If I run a test like:
/opt/mvapich2-2.1/bin/mpiexec --env MV2_USE_MCAST=1 --env MV2_MCAST_NUM_NODES_THRESHOLD=0 --host ... -n 4 ./my_test
I would expect the bandwidth and latency numbers to change with respect to running with MV2_USE_MCAST=0.
Instead, latency and bandwidth remain the same, which makes me suspect that HW multicast is *not* being used by MPI.
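
For context, the test binary is roughly of this shape (a hypothetical stand-in for ./my_test, not the actual code; message sizes and iteration counts are placeholders):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

/* Times MPI_Bcast per message size, so any latency difference between
 * MV2_USE_MCAST=0 and MV2_USE_MCAST=1 runs would be visible. */
int main(int argc, char **argv)
{
    int rank;
    const int iterations = 1000;       /* placeholder */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (int size = 1; size <= (1 << 20); size *= 2) {
        char *buf = malloc(size);

        MPI_Barrier(MPI_COMM_WORLD);
        double start = MPI_Wtime();
        for (int i = 0; i < iterations; i++)
            MPI_Bcast(buf, size, MPI_CHAR, 0, MPI_COMM_WORLD);
        double avg_us = (MPI_Wtime() - start) / iterations * 1e6;

        if (rank == 0)
            printf("%8d bytes: %.2f us per Bcast\n", size, avg_us);
        free(buf);
    }

    MPI_Finalize();
    return 0;
}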

Is there a way I can check this? Do you have hints about where the problem could be?
Thank you very much.

Best regards,
Alessandro Dovis


