[mvapich-discuss] How to check MPI traffic going through InfiniBand
ports?
Divi Venkateswarlu
divi at ncat.edu
Fri Jun 6 22:30:37 EDT 2008
Hello all:
I have just built a 64-core cluster; my setup is as follows:
8 DP quad-core machines running ROCKS 5 and MVAPICH 1.0, connected through
an 8-port Flextronics switch (SDR), with MHES18 HCAs (10 Gb/s).
I have the following questions.
How do I know whether my computation is going over the IB network or over
plain Ethernet? (The only check I know of is sketched after the config
table below.) I named the IB interface on each node "fast1" ... "fast8"
and created a host file with eight copies of each of fast1 through fast8,
like the truncated example that follows.
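(A truncated sketch of that host file, assuming the one-hostname-per-line
format that mpirun_rsh accepts -- 64 lines in all:)

    fast1
    fast1
    ... (eight lines of fast1 in all)
    fast2
    ... (and so on through eight lines of fast8)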
The IB configuration of the 8 nodes is given below (only two shown):
HOST          SUBNET  IFACE  IP        NETMASK        NAME
divilab:      ibnet   ib0    20.1.1.1  255.255.255.0  fast1
...
compute-0-6:  ibnet   ib0    20.1.1.8  255.255.255.0  fast8
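So far the only direct check I know of is to read the HCA port counters
before and after a run (a sketch assuming OFED's sysfs counters and
infiniband-diags are installed; the device name mthca0 is a guess --
ibstat lists the real one):

    # Verify the port is Active at 10 Gb/s (SDR), and get the device name
    ibstat

    # Snapshot the data counters on port 1 before the job...
    cat /sys/class/infiniband/mthca0/ports/1/counters/port_xmit_data
    cat /sys/class/infiniband/mthca0/ports/1/counters/port_rcv_data

    # ...run the MPI job, then read the counters again.  If the traffic
    # went over IB, they jump by a large amount (the counters are in
    # units of 4 octets).  perfquery from infiniband-diags reports the
    # same counters:
    perfquery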
I do not see any scale-up from 16 to 32 to 64 processes.
One benchmark, a one-picosecond MD simulation of a protein (FIXa), is
given below. The MD code is PMEMD, built against MVAPICH with ifort/MKL.
# of CPUs/cores   Time (sec)   Nodes (load-balanced)
       8              82                 8
      16              49                 8
      32              42                 8
      64              39                 8
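To put numbers on it: relative to the 8-core run, 64 cores give only an
82/39 ~= 2.1x speedup (roughly 26% parallel efficiency), and going from
32 to 64 cores saves just 3 seconds.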
I suspect that I have not set something up correctly, or that I am running
into SDR switch/card limitations. I am definitely not happy with the poor
scale-up.
I used all the default values of make.mvapich.gen2 (with Intel Fortran 9.0).
There seem to be too many options in that script, and since I am not sure
what most of them do, I just let it run as-is.
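One sanity check I plan to try is the OSU point-to-point bandwidth test
that ships with MVAPICH, run across two nodes (a sketch -- the benchmark
path is from my build tree and may differ):

    # osu_bw between two nodes; over SDR IB the peak should be roughly
    # 900+ MB/s, versus ~100 MB/s if the traffic falls back to GigE
    mpirun_rsh -np 2 fast1 fast2 ./osu_benchmarks/osu_bw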
Could somebody offer some help on how to fix/improve the scaling?
Thanks a lot...
Divi