[mvapich-discuss] MVAPICH2 HPL performance issue

Panda, Dhabaleswar panda at cse.ohio-state.edu
Wed Apr 2 05:31:28 EDT 2014


I believe you are using the MPI+OpenMP version of HPL. In that case you need to disable MVAPICH2's CPU affinity; otherwise all OpenMP threads of a rank get pinned to a single core.

Please take a look at the following sections (6.16, 6.17) and FAQ entry 9.1.4 of
MVAPICH2 2.0rc1 user guide:

http://mvapich.cse.ohio-state.edu/support/user_guide_mvapich2-2.0rc1.html#x1-780006.16

http://mvapich.cse.ohio-state.edu/support/user_guide_mvapich2-2.0rc1.html#x1-1130009.1.4
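As a rough sketch (the hostfile name, process count, and thread count below are placeholders; adjust them to your layout), disabling affinity via the MV2_ENABLE_AFFINITY run-time parameter described in the user guide looks like:

```shell
# Disable MVAPICH2 core binding so OpenMP threads within each MPI rank
# can spread across cores instead of being pinned to one.
export MV2_ENABLE_AFFINITY=0

# Threads per MPI rank -- placeholder value, match it to your node layout.
export OMP_NUM_THREADS=8

# Example launch: 4 ranks across the nodes listed in ./hosts (placeholder
# hostfile). The parameter can also be passed on the mpirun_rsh line.
mpirun_rsh -np 4 -hostfile ./hosts MV2_ENABLE_AFFINITY=0 OMP_NUM_THREADS=8 ./xhpl
```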


Let us know if this resolves the issue or not.

Thanks,

DK


________________________________
From: mvapich-discuss [mvapich-discuss-bounces at cse.ohio-state.edu] on behalf of Mikhail Posypkin [mposypkin at gmail.com]
Sent: Wednesday, April 02, 2014 3:42 AM
To: mvapich-discuss at cse.ohio-state.edu
Subject: [mvapich-discuss] MVAPICH2 HPL performance issue

Dear colleagues,

MVAPICH2 1.9 and MVAPICH2 2.0rc demonstrate very poor performance on the HPL Linpack test
from netlib.org<http://netlib.org>. The running time of the HPL test compiled with MVAPICH2 is at least 10 times greater than for the same test compiled with OpenMPI. I tried 16 and 32 processes on two 16-core servers. Surprisingly, even if I run the 'xhpl' executable from the command line on the cluster front-end server (1 MPI process), the difference in performance is the same (approximately 10 times). I assume we built it with wrong flags or made some other installation error. Could you please help us resolve this issue?

All the best,
Mikhail