[mvapich-discuss] blocking vs polling with PSM

Hari Subramoni subramoni.1 at osu.edu
Tue Aug 5 15:52:39 EDT 2014


Hello David,

Thank you for the information. Please let us know if the variable solves
your problems.

Setting MV2_USE_BLOCKING=1 is the only option we provide to control
polling for the PSM channel.
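
In case it helps, the variable has to be exported to every rank at launch
time. A minimal sketch of the two usual ways to do that with the launchers
shipped with MVAPICH2 (the process count, host file, and application name
below are placeholders):

    # with mpirun_rsh, MV2_* variables are listed before the executable
    mpirun_rsh -np 328 -hostfile ./hosts \
        MV2_USE_BLOCKING=1 MV2_ON_DEMAND_THRESHOLD=1 ./your_mpi_app

    # with the Hydra mpiexec launcher, -genv exports them to all ranks
    mpiexec -np 328 -hostfile ./hosts \
        -genv MV2_USE_BLOCKING 1 -genv MV2_ON_DEMAND_THRESHOLD 1 ./your_mpi_app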

Please let us know if you face any other issues in using MVAPICH2.

Regards,
Hari.


On Tue, Aug 5, 2014 at 3:46 PM, David Winslow <
david.winslow at serendipitynow.com> wrote:

> Hari
>
> We are running 328 processes on 14 servers with 512 total cores. We'll try
> the parameter you provided. Is there anything specifically designed to avoid
> polling, such as MV2_USE_BLOCKING=1?
>
> I'll let you know the results using MV2_ON_DEMAND_THRESHOLD=1.
>
> Thanks for your assistance,
> David
>
>
>
>
>
> On Tue, Aug 5, 2014 at 2:11 PM, Hari Subramoni <subramoni.1 at osu.edu>
> wrote:
>
>> Hi David,
>>
>> Could you please let us know how many processes you are running it with?
>> Can you retry the run with MV2_ON_DEMAND_THRESHOLD=1?
>>
>> Regards,
>> Hari.
>>
>>
>> On Tue, Aug 5, 2014 at 10:57 AM, David Winslow <
>> david.winslow at serendipitynow.com> wrote:
>>
>>> We have a multi-process, multi-threaded MPI application using MVAPICH2
>>> 2.0rc2 over PSM. Based on my reading of the documentation, blocking-mode
>>> communication rather than polling should improve performance for our
>>> particular application. Passing -genv MV2_USE_BLOCKING=1, however,
>>> doesn't seem to have any effect.
>>>
>>> Is blocking supported for PSM? If not, is there an alternative to reduce
>>> or eliminate polling?
>>>
>>> Thanks
>>> David
>>>
>>>
>>>
>>
>
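
For completeness, a quick (and entirely optional) way to see whether a rank
that is only waiting for a message is busy-polling is to watch its CPU usage
while it sits idle; the application name below is a placeholder:

    # a rank parked in MPI_Recv/MPI_Wait at ~100% CPU suggests busy polling;
    # with blocking in effect it should sit close to 0%
    top -b -n 1 -p "$(pgrep -d',' -f your_mpi_app)"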