[mvapich-discuss] MPI_Send over 2 GB fails

Vittorio vitto.giova at yahoo.it
Sun Feb 22 11:33:41 EST 2009


hello!
I'm running some performance tests of MVAPICH2 over InfiniBand. The test is
very simple: it sends fixed amounts of data from one node to another.
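
The core of the test boils down to something like this (a simplified sketch,
with error checking and buffer initialization omitted; the count matches the
failing 4 GB case in the error below):

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank;
    int count = 536870912;  /* 512 Mi elements * 8 bytes = 4 GiB on a 64-bit OS */
    unsigned long *buf = NULL;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    buf = malloc((size_t)count * sizeof(unsigned long));

    /* rank 0 sends `count` unsigned longs to rank 1 */
    if (rank == 0)
        MPI_Send(buf, count, MPI_UNSIGNED_LONG, 1, 1, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(buf, count, MPI_UNSIGNED_LONG, 0, 1, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);

    free(buf);
    MPI_Finalize();
    return 0;
}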
From 1 kB up to 2 GB there are no problems, but as soon as I try to transfer
4 GB or more I get:

Fatal error in MPI_Send: Internal MPI error!, error stack:
MPI_Send(192): MPI_Send(buf=0x6020a0, count=536870912, MPI_UNSIGNED_LONG,
dest=1, tag=1, MPI_COMM_WORLD) failed
(unknown)(): Internal MPI error![cli_0]: aborting job:
Fatal error in MPI_Send: Internal MPI error!, error stack:
MPI_Send(192): MPI_Send(buf=0x6020a0, count=536870912, MPI_UNSIGNED_LONG,
dest=1, tag=1, MPI_COMM_WORLD) failed
(unknown)(): Internal MPI error!
rank 0 in job 11  randori_45329   caused collective abort of all ranks
  exit status of rank 0: return code 1

The two machines are identical, each with a 64-bit OS and 32 GB of RAM.
I also tried running the program on a single machine, but I get the same
error as soon as the transfer goes past 2 GB.
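
For reference on the sizes involved: 536870912 elements of MPI_UNSIGNED_LONG
at 8 bytes each is exactly 4 GiB, and the failures start just past my 2 GB
test point, which is right where a signed 32-bit byte count (INT_MAX) runs
out. That's only my guess at where the limit might come from, but the
arithmetic is easy to check:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* the failing call: count=536870912, MPI_UNSIGNED_LONG (8 bytes here) */
    long long bytes = 536870912LL * 8;
    printf("message size: %lld bytes\n", bytes);  /* 4294967296 */
    printf("INT_MAX:      %d bytes\n", INT_MAX);  /* 2147483647 */
    /* the message size in bytes overflows a signed 32-bit count, which
       would match failures starting just above 2 GB */
    return 0;
}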

I'm pretty sure MPI can send more than 4 GB of data, so I just can't figure
out what the problem might be.
Any help is really appreciated.
Thanks a lot,
Vittorio