[mvapich-discuss] MPI blocking mode progress with 10GbE/ iWARP

Sundeep Narravula narravul at cse.ohio-state.edu
Thu Oct 18 21:44:40 EDT 2007


Ken,
  You should be able to use the env variable MV2_SPIN_COUNT. This is the
number of polling iterations after which the MPI process goes into
blocking mode. Setting it to a small value should give you reduced CPU
utilization; I see a marked decrease in CPU utilization with a small
MV2_SPIN_COUNT specified.
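
For example, extending the command line from my earlier mail below (the
value 100 here is just an illustration; you may need to tune it for your
workload):

mpiexec -n 2 -env MV2_USE_IWARP_MODE 1 -env MV2_USE_BLOCKING 1 -env MV2_SPIN_COUNT 100 ./a.out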

Regards,
  --Sundeep.


On Thu, 18 Oct 2007, Ken Cain wrote:

>
> Sundeep Narravula wrote:
>
> >>I would like to run mvapich2 in blocking mode (i.e., without consuming
> >>any CPU while waiting for incoming messages). I would like this even
> >>when I am running one MPI process per node. What are the prerequisites
> >>to achieve this using the current Gen2-iWARP transport? Should I simply
> >>use the MV2_USE_BLOCKING parameter (knowing that it is documented to
> >>work only with the Gen2-InfiniBand transport)? Or do I need to take
> >>additional settings into account, such as the threading support option
> >>used when building the mvapich2 library?
> >>
> >Hi Ken,
> >
> >  Currently, if you run one MPI process per node you *can* use
> >MV2_USE_BLOCKING together with MV2_USE_IWARP_MODE on the Gen2-iWARP
> >device. You should not need to worry about other issues in this case.
> >The following invocation is perfectly valid:
> >
> >mpiexec -n 2 -env MV2_USE_IWARP_MODE 1 -env MV2_USE_BLOCKING 1 ./a.out
> >
> >Please note that as of now this works only for one MPI process per node.
> >
> >Hope that helps.
> >
> >Regards,
> >  --Sundeep.
> >
>
> Hello Sundeep,
>
> Sorry it's been a while since your most recent response, but I have just
> now had a chance to perform some experiments.
>
> With MVAPICH2-1.0 I have found that I can control polling versus
> blocking mode progress with MV2_USE_BLOCKING, but only for trivial MPI
> programs (e.g., rank 0 sleeps, then sends to rank 1, whose CPU
> utilization I monitor while it is inside MPI_Recv or MPI_Wait). When
> there is more communication activity (e.g., in an MPI ping-pong
> benchmark) I can no longer control the behavior and typically see 100%
> CPU utilization, even when the benchmark is sending large (e.g., 1, 2,
> 4, 8 MB) messages. I have seen this behavior with the Gen2 InfiniBand
> device.
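>
> For concreteness, the trivial test is essentially the following (a
> minimal sketch; the sleep length and the single-int payload are
> arbitrary choices for illustration):
>
> #include <mpi.h>
> #include <unistd.h>
>
> int main(int argc, char **argv)
> {
>     int rank, buf = 0;
>     MPI_Init(&argc, &argv);
>     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>     if (rank == 0) {
>         sleep(30);                  /* rank 1 waits in MPI_Recv here */
>         MPI_Send(&buf, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
>     } else if (rank == 1) {
>         /* monitor this process's CPU utilization while it waits */
>         MPI_Recv(&buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
>                  MPI_STATUS_IGNORE);
>     }
>     MPI_Finalize();
>     return 0;
> }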
>
> In MVAPICH-0.9.9, blocking mode progress (VIADEV_USE_BLOCKING) is
> documented to be conditional, yielding the CPU only when there are no
> more incoming messages. I refined this behavior by setting
> VIADEV_MAX_SPIN_COUNT (=1) to achieve my goal of blocking mode progress
> under all circumstances.
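>
> For reference, a 0.9.9 run of that form looks like this (hostnames n0
> and n1 are placeholders; this assumes the mpirun_rsh launcher):
>
> mpirun_rsh -np 2 n0 n1 VIADEV_USE_BLOCKING=1 VIADEV_MAX_SPIN_COUNT=1 ./a.out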
>
> Is there a similar spin count mechanism available in MVAPICH2-1.0 that I
> would need for iWARP? I cannot find one in the user guide. There is an
> MV2_SPIN_COUNT variable in the source code, but I have not been
> successful in using it. The reference is in
> src/mpid/osu_ch3/channels/mrail/src/gen2/ibv_param.c.
>
> Thank you,
>
> -Ken
>



