[mvapich-discuss] performance problems on mpi/openmp hybrid code

Susan A. Schwarz Susan.A.Schwarz at dartmouth.edu
Tue Mar 10 12:03:40 EDT 2009


I am running an MPI/OpenMP hybrid code with MVAPICH on dual quad-core AMD nodes in a
RHEL 5.3 cluster. Initially I found that the code took longer to run over
InfiniBand than it did over plain ethernet. I then found the section in the
MVAPICH User and Tuning Guide about setting VIADEV_USE_AFFINITY=0 to allow the
OpenMP threads to run on the other CPUs. Now when I set VIADEV_USE_AFFINITY=0,
the OpenMP section does use the other CPUs, but the load on those CPUs is only
about 50%, so my code is still not running as fast as the ethernet version.
Here is the structure of the Fortran code, which I am compiling with the Intel
v11.0 compilers:

do i = 1, niterations

   [ perform MPI-based calculation ]
   if (master process) then
      [ perform OpenMP-based calculation using 8 threads ]
      call mpi_bcast(...)   ! broadcast results to the other processes
   else
      call mpi_bcast(...)   ! obtain results from the master
   end if
end do
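
For concreteness, here is a stripped-down, compilable sketch of that loop. The
iteration count, array size, and the work inside the loops are placeholders,
not my actual calculation:

program hybrid_sketch
  use omp_lib
  implicit none
  include 'mpif.h'
  integer, parameter :: n = 1000
  integer :: ierr, rank, i, j, niter
  double precision :: results(n)

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  if (rank == 0) print *, 'max OpenMP threads:', omp_get_max_threads()

  niter = 10
  do i = 1, niter
     ! [ MPI-based calculation involving all ranks goes here ]

     if (rank == 0) then
        ! OpenMP-based calculation on the master rank only; with
        ! 8 threads I expect top to show this process near 800%
!$omp parallel do
        do j = 1, n
           results(j) = dble(i) * dble(j)   ! placeholder work
        end do
!$omp end parallel do
     end if

     ! every rank makes the same mpi_bcast call; the non-master
     ! ranks sit in this call until rank 0 finishes the OpenMP part
     call MPI_Bcast(results, n, MPI_DOUBLE_PRECISION, 0, &
                    MPI_COMM_WORLD, ierr)
  end do

  call MPI_Finalize(ierr)
end program hybrid_sketch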

So the slave processes call mpi_bcast and wait for the master process to
complete the OpenMP-based calculation and broadcast the result. When I run
'top', I see that the slave processes are each using 50% of a CPU while
waiting for the master to finish the OpenMP section of the code. During the
OpenMP section, top shows the master process running at a load of at most 400%.

During the ethernet-based run, the load on the slave processes is almost 0,
and the master process runs at 800% during the OpenMP section, which is what I
expected since I am using 8 threads. When I compare the elapsed times for the
OpenMP section of the code, the InfiniBand version takes twice as long as the
ethernet version.
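
In case it helps, this is the sort of check I can drop into the OpenMP section
of the sketch above to confirm how many threads the region actually gets
(rank and omp_lib come from the enclosing program):

!$omp parallel
!$omp single
     print *, 'rank', rank, 'OpenMP threads in this region:', &
              omp_get_num_threads()
!$omp end single
!$omp end parallel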

My questions are: why is the load on the slave processes 50% over InfiniBand
when they are doing nothing except waiting for the results to be broadcast to
them, and why does my OpenMP section run at only 400% instead of 800%? Is
there any way to change either my code or the MVAPICH configuration so that
this doesn't happen?
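
For reference, I set VIADEV_USE_AFFINITY=0 on the mpirun_rsh command line as
the user guide describes; the hostnames, process count, and executable name
below are just placeholders:

mpirun_rsh -np 2 node01 node02 VIADEV_USE_AFFINITY=0 ./hybrid.exe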

thank you,
Susan Schwarz
Research Computing
Dartmouth College
