[mvapich-discuss] Broadcast performance with and without IB Multicast, and comparison to Unicast

Krishna Kandalla kandalla at cse.ohio-state.edu
Wed Jul 24 16:08:29 EDT 2013


Hi James,

Thanks for the report. Hardware multicast-based designs tend to do better
with larger numbers of processes, as you may have already observed in the
graph you shared. With just 2 nodes, we have observed that the basic
software-based approaches do better. However, we do not expect to see a
difference of about 2 GB/s between basic send/recv and the 2-node
software-based broadcast. Could you please let us know how you are
computing the bandwidth of the MPI_Bcast operation?
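
For reference, a minimal sketch of one common way to arrive at such a
bandwidth figure is shown below: time a loop of broadcasts, then divide
the total payload moved by the elapsed time. The message size, iteration
count, and the decision to count the payload only once (rather than once
per receiving rank) are assumptions on our part, not necessarily what
your benchmark does.

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define MSG_SIZE   (1 << 20)   /* 1 MB message (assumed)     */
    #define ITERATIONS 1000        /* timed iterations (assumed) */

    int main(int argc, char **argv)
    {
        int rank, i;
        char *buf;
        double start, elapsed, bw;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        buf = malloc(MSG_SIZE);

        /* Warm-up so connection setup is not included in the timing */
        for (i = 0; i < 10; i++)
            MPI_Bcast(buf, MSG_SIZE, MPI_CHAR, 0, MPI_COMM_WORLD);

        MPI_Barrier(MPI_COMM_WORLD);
        start = MPI_Wtime();
        for (i = 0; i < ITERATIONS; i++)
            MPI_Bcast(buf, MSG_SIZE, MPI_CHAR, 0, MPI_COMM_WORLD);
        MPI_Barrier(MPI_COMM_WORLD);
        elapsed = MPI_Wtime() - start;

        if (rank == 0) {
            /* Count the payload once per broadcast; counting it once
             * per receiver instead would scale the result by the
             * number of non-root ranks. */
            bw = ((double)MSG_SIZE * ITERATIONS) / elapsed / 1e9;
            printf("MPI_Bcast bandwidth: %.3f GB/s\n", bw);
        }

        free(buf);
        MPI_Finalize();
        return 0;
    }

Whether the bytes are counted once at the root or once per receiving
rank changes the reported figure considerably, so knowing which
convention you use would help us interpret the 2 GB/s gap.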

> - Why, when running two instances of the Bcast (no HW Mcast), doesn't
> the combined aggregate reach the Unicast maximum? (I am assuming
> increased latency causes the slower performance, thus 2 independent
> streams should be able to max out the IB link. I also assume the two
> MPI_COMM_WORLDs in the two different instances started with different
> mpirun_rsh's don't know about each other.) It is almost like something
> is serialized at the MPI layer, but the aggregate is actually worse
> than a single instance test.

> - Why do 2 streams of the Broadcast (HW mcast) cause the aggregate BW
> to drop to the floor, much worse than the non HW mcast test?


Could you please let us know how you are managing the two instances of
Bcast? From your description, we think you are running two different
jobs (with different mpirun_rsh's) concurrently. Are the two jobs
getting launched on the same set of nodes, or are they on different
nodes?

Thanks,
Krishna


> Insights would be appreciated.

