[mvapich-discuss] Collective benchmark issue

Subramoni, Hari subramoni.1 at osu.edu
Wed Sep 16 08:51:47 EDT 2020


Hi,

Can you please let us know the exact error/issue/behavior you were observing? Were you seeing that the 1GB message size was being printed multiple times, or was the code failing or hanging at that point?

As per the MPI standard, the “count” argument used by the various MPI APIs (such as MPI_Bcast) to specify the number of elements of type “datatype” is a plain C int, so it can only accept values up to INT_MAX (2^31 - 1). Performing an MPI operation whose count is larger than INT_MAX can therefore lead to unspecified behavior.
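For illustration, here is a minimal sketch of one common workaround (this helper, bcast_large, is made up for this example and is not part of MVAPICH2 or the OSU benchmarks): it splits a large buffer into chunks whose counts each fit in a signed int.

#include <mpi.h>
#include <limits.h>
#include <stddef.h>

/* Hypothetical helper: broadcast 'total_bytes' bytes by issuing repeated
 * MPI_Bcast calls, each with a count that fits in the int "count" argument. */
static int bcast_large(void *buf, size_t total_bytes, int root, MPI_Comm comm)
{
    char *p = (char *) buf;

    while (total_bytes > 0) {
        /* Largest chunk that still fits in a signed int. */
        int count = (total_bytes > (size_t) INT_MAX) ? INT_MAX : (int) total_bytes;
        int rc = MPI_Bcast(p, count, MPI_BYTE, root, comm);
        if (rc != MPI_SUCCESS)
            return rc;
        p           += count;
        total_bytes -= (size_t) count;
    }
    return MPI_SUCCESS;
}

An alternative is to describe the buffer with a derived datatype (e.g., MPI_Type_contiguous), so that the count actually passed to MPI_Bcast stays below INT_MAX.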

Best,
Hari.

From: mvapich-discuss-bounces at cse.ohio-state.edu <mvapich-discuss-bounces at mailman.cse.ohio-state.edu> On Behalf Of Leonardo Picoli
Sent: Tuesday, September 15, 2020 11:40 AM
To: mvapich-discuss at cse.ohio-state.edu <mvapich-discuss at mailman.cse.ohio-state.edu>
Subject: [mvapich-discuss] Collective benchmark issue

Good afternoon,

I was trying to use the collective MPI benchmarks with the maximum memory limit (using the -M flag) set to 8GB and the maximum message size set to 8GB. However, when I executed it, the maximum message size was still 1GB and did not go beyond that.
Can you help me, please? Do you know what is happening?
Looking forward to your reply,
Thank you.
