[mvapich-discuss] Problem with more MPI jobs on the same node

Emir Imamagic eimamagi at srce.hr
Sat Aug 29 09:10:51 EDT 2009


Hello,

we have a problem running multiple MPI jobs on the same node. We're 
using MVAPICH 1.1.0 on CentOS 5.3, compiled with Intel 11.1. The nodes 
are 32-core Opterons.

We used the NPB LU benchmark compiled for 8 processes. With each 
additional job started on the node, the CPU usage of all processes 
(as reported by top) decreases. It appears that the individual MPI 
processes of the different jobs are being pinned to the same cores. 
The slowdown scales consistently with the number of jobs:
2 jobs - 50% CPU usage (2x application runtime)
3 jobs - 33% CPU usage (3x application runtime)
4 jobs - 25% CPU usage (4x application runtime)
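One way to confirm the pinning suspicion, beyond watching top, is to read each rank's affinity mask directly. A small diagnostic sketch (not part of MVAPICH; it uses the Linux scheduler-affinity call exposed through Python, so it assumes a Linux node) - if every MPI process of every job reports the same single core, the jobs are indeed stacked on top of each other:

```python
import os

def affinity_cores(pid=0):
    """Return the set of core IDs the given process is allowed to run on.

    pid=0 means the calling process itself. On a 32-core node with
    affinity disabled, this should contain all 32 cores; if MVAPICH has
    pinned the process, it will contain only one.
    """
    return os.sched_getaffinity(pid)

if __name__ == "__main__":
    cores = affinity_cores()
    print(f"process may run on {len(cores)} core(s): {sorted(cores)}")
```

Running this (or an equivalent `taskset -cp <pid>`) for the PIDs of two concurrent jobs would show directly whether their affinity masks overlap.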

The problem is also described in this thread:
http://mail.cse.ohio-state.edu/pipermail/mvapich-discuss/2009-April/002251.html
However, the suggested solution does not fix it. We set 
VIADEV_USE_AFFINITY=0, and we even changed the source code 
(mpid/ch_gen2/viaparam.h):
#define _AFFINITY_ 0
Neither helped.
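For what it's worth, here is how we pass the variable; with mpirun_rsh, environment variables go on the command line between the host specification and the executable (the hostfile and binary names below are placeholders for our actual setup, so this is a sketch of the invocation rather than a literal transcript):

```shell
# Launch one 8-process LU job with MVAPICH affinity disabled.
# mpirun_rsh forwards NAME=VALUE pairs placed before the executable
# into the environment of every spawned rank.
mpirun_rsh -np 8 -hostfile ./hosts VIADEV_USE_AFFINITY=0 ./lu.B.8
```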

Thanks in advance,
emir




