[Mvapich-discuss] Possible buffer overflow for large messages?

Subramoni, Hari subramoni.1 at osu.edu
Wed Sep 28 14:40:43 EDT 2022


Hi, John.

Sorry to hear that you’re facing issues. Let us try this out internally and get back to you shortly.

Thx,
Hari.

From: Mvapich-discuss <mvapich-discuss-bounces at lists.osu.edu> On Behalf Of John Moore via Mvapich-discuss
Sent: Wednesday, September 28, 2022 1:38 PM
To: mvapich-discuss at lists.osu.edu
Subject: [Mvapich-discuss] Possible buffer overflow for large messages?

Hello,

We have a code that performs a large Gatherv operation where the size of the gathered message exceeds 4 GB (it is approximately 8 GB). We have noticed that the result of the Gatherv operation is incorrect for these large calls. The counts we are passing into Gatherv are all within the int limit, and we are using custom datatypes (MPI_Type_contiguous) to allow for this larger message size.
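For reference, here is a minimal sketch of the pattern described above (block size, per-rank counts, and element type are illustrative assumptions, not taken from the original code): each rank contributes one block of a contiguous derived datatype, so every count passed to MPI_Gatherv stays well under INT_MAX even though the gathered payload at the root exceeds 4 GB.

/* Sketch: large Gatherv via a contiguous derived datatype.
 * Sizes below are assumptions chosen so 16 ranks gather ~8 GiB at root. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* One "block" = 64 Mi doubles = 512 MiB; all counts are in blocks. */
    const int block_elems = 64 * 1024 * 1024;
    MPI_Datatype block;
    MPI_Type_contiguous(block_elems, MPI_DOUBLE, &block);
    MPI_Type_commit(&block);

    double *sendbuf = malloc((size_t)block_elems * sizeof(double));
    for (int i = 0; i < block_elems; i++)
        sendbuf[i] = (double)rank;          /* fill so results can be checked */

    int *recvcounts = NULL, *displs = NULL;
    double *recvbuf = NULL;
    if (rank == 0) {
        recvcounts = malloc(size * sizeof(int));
        displs     = malloc(size * sizeof(int));
        for (int i = 0; i < size; i++) {
            recvcounts[i] = 1;              /* in units of 'block'            */
            displs[i]     = i;              /* displacement in 'block' extents */
        }
        /* With 16 ranks this is 16 * 512 MiB = 8 GiB at the root. */
        recvbuf = malloc((size_t)size * block_elems * sizeof(double));
    }

    MPI_Gatherv(sendbuf, 1, block,
                recvbuf, recvcounts, displs, block, 0, MPI_COMM_WORLD);

    MPI_Type_free(&block);
    free(sendbuf); free(recvbuf); free(recvcounts); free(displs);
    MPI_Finalize();
    return 0;
}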

We have also tried replacing the Gatherv call with Isend/Irecv calls, each of which stays within the int range in terms of the number of bytes communicated, with the same incorrect result.
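A minimal sketch of that point-to-point replacement, assuming a fixed per-rank element count and MPI_DOUBLE data (function name, buffer names, and sizes are illustrative, not from the original code): the root posts one Irecv per remote rank into the corresponding slice of the gather buffer and copies its own contribution locally, so each individual message stays below the 2 GiB int byte-count limit.

/* Sketch: gather-by-point-to-point, one message per rank, each < 2 GiB. */
#include <mpi.h>
#include <stdlib.h>
#include <string.h>

void gather_by_p2p(const double *sendbuf, size_t elems_per_rank,
                   double *recvbuf, int root, MPI_Comm comm)
{
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    if (rank == root) {
        MPI_Request *reqs = malloc((size_t)size * sizeof(MPI_Request));
        int nreq = 0;
        for (int i = 0; i < size; i++) {
            if (i == root) {
                /* Root's own contribution is copied locally, not sent. */
                memcpy(recvbuf + (size_t)i * elems_per_rank, sendbuf,
                       elems_per_rank * sizeof(double));
            } else {
                MPI_Irecv(recvbuf + (size_t)i * elems_per_rank,
                          (int)elems_per_rank, MPI_DOUBLE, i, 0, comm,
                          &reqs[nreq++]);
            }
        }
        MPI_Waitall(nreq, reqs, MPI_STATUSES_IGNORE);
        free(reqs);
    } else {
        /* Per-rank count (e.g. 64 Mi doubles = 512 MiB) fits in an int. */
        MPI_Send(sendbuf, (int)elems_per_rank, MPI_DOUBLE, root, 0, comm);
    }
}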

When we compile with Open MPI, the result is correct. Also, when we run the operations on smaller data sets with MVAPICH2, the result is correct.

This job is being run across two nodes with 16 ranks total (8 ranks per node). When we place all the data on a single node and use the same input data and number of ranks, we again get the correct result. This leads me to believe that some remote send/receive buffer is being exceeded.

We are running MVAPICH2-GDR 2.3.6, but these buffers are all CPU buffers, and we are running this executable with MV2_USE_CUDA=0. Perhaps there are some environment variables to change here? Any advice would be greatly appreciated.

Thank you,
John