[mvapich-discuss] scaling problem and stray mpd daemon

Vishwas Vasisht vvasisht at locuz.com
Mon Oct 30 00:19:29 EST 2006


Hello,

1. Would you please let us know the details of your setup? Which version of
mvapich2 are you using? Which flags have you used for your CFLAGS in our
compilation script (have you changed anything in our default compilation
script for vapi)?

I use MVAPICH2-0.9.3.
The CFLAGS have not been changed.
 CFLAGS="-D${ARCH} -DONE_SIDED -DUSE_INLINE -DRDMA_FAST_PATH \
               -DUSE_HEADER_CACHING -DLAZY_MEM_UNREGISTER -D_SMP_ \
               $SUPPRESS -D${IO_BUS} -D${LINKS} -DMPID_USE_SEQUENCE_NUMBERS \
               -D${VCLUSTER} ${HAVE_MPD_RING} -I${MTHOME}/include $OPT_FLAG"
I have not changed anything in the compilation script except these:
a. VCLUSTER=_MEDIUM_CLUSTER
b. IO_BUS=_PCI_X_
c. LINKS=_SDR_
d. HAVE_MPD_RING=""
e. MULTI_THREAD=""
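For reference, the five variables above are ordinary shell assignments near the top of the vapi build script. A minimal sketch of the changed settings (the comments describing each variable's role are my reading of the flags, not text from the script itself):

```shell
# Sketch of the only settings changed in the default vapi build script.
export VCLUSTER=_MEDIUM_CLUSTER   # cluster-size class used for buffer tuning
export IO_BUS=_PCI_X_             # HCA sits on a PCI-X bus
export LINKS=_SDR_                # single data rate InfiniBand links
export HAVE_MPD_RING=""           # MPD-ring-based startup support disabled
export MULTI_THREAD=""            # thread support disabled
```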

2. You will see a lot of mpd threads if there is an active thread. Would you
please run some simple programs, say cpi or Pallas, on your system (with more
than 32 processes) to make sure the setup is correct? Also, would you
please remove --ncpus and see if you can start your application on more
than 32 processes?
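The suggested check can be sketched as the usual MPD workflow; the host file name and process counts below are examples, not values from this thread:

```shell
# Sketch: verify the MPD ring, then run cpi on more than 32 processes.
mpdboot -n 17 -f mpd.hosts   # start one mpd per host listed in mpd.hosts
mpdtrace                     # list ring members; stray daemons show up here
mpiexec -n 34 ./cpi          # simple correctness/scaling test
mpdallexit                   # shut the ring down cleanly afterwards
```

Running mpdtrace before and after a job is also a quick way to spot a leftover mpd daemon on a node.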

I will update you on this.

3. Also, the whole InfiniBand community is moving towards Gen2 (OpenFabric)
stack. May we suggest you upgrade your system to Gen2 stack. MVAPICH2 on
gen2 stack will generally have more features and better performance.

I had a problem with the assigning of IPs with this. With IBGOLD, the IP gets
assigned automatically in the 11.x series, but this did not happen with OpenFabrics.
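If the Gen2 (OpenFabrics) stack does not bring the IPoIB address up on its own, it can be assigned by hand. In this sketch the interface name ib0 and the 11.x subnet are assumptions modeled on the IBGOLD convention mentioned above; adjust per node:

```shell
# Sketch: manually configuring IPoIB under the Gen2/OpenFabrics stack.
# ib0 and the 11.x addresses are assumed, per-node values will differ.
modprobe ib_ipoib                                 # load the IPoIB module
ifconfig ib0 11.1.1.1 netmask 255.255.255.0 up    # assign this node's address
ping -c 1 11.1.1.2                                # check another node is reachable
```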

Thanks for the reply

Regards
Vishwas