[mvapich-discuss] MVAPICH2 with OpenMP

Sayantan Sur surs at cse.ohio-state.edu
Wed Mar 2 15:11:26 EST 2011


Hi Martin,

Glad to know that the problem is resolved. We will add this to the user guide.
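
For anyone finding this thread later, the fix discussed below amounts to
setting the PSM environment variable at launch, along the lines of the
following (this mirrors Martin's own command; the exact launcher syntax may
vary with your setup):

  mpiexec.hydra -genv IPATH_NO_CPUAFFINITY 1 -genv OMP_NUM_THREADS 4 \
      -genv MV2_ENABLE_AFFINITY 0 -n 16 -machinefile nodefile ./program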

Ben - thanks for your help here.

Thanks.

On Wed, Mar 2, 2011 at 2:41 PM, Martin Cuma <martin.cuma at utah.edu> wrote:
> Thanks Ben,
>
> this did the trick. I was thinking that since OpenMPI did not seem to need
> this, MVAPICH2 would not need it either.
>
> Sayantan, perhaps it would be good to point this out in your otherwise very
> good user's guide? There aren't many specifics about the QLogic PSM, which
> is quite different from the Mellanox one, at least from a user's
> perspective.
>
> Also, the OpenMP-MPI test code you sent works correctly even without
> IPATH_NO_CPUAFFINITY - that is, it shows multiple threads per process.
> What was critical for me, however, was seeing 400% CPU load per MPI
> process in my program (run as one process per socket on a quad-core CPU),
> which I only achieved with IPATH_NO_CPUAFFINITY=1.
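>
> For reference, a minimal hybrid check of this kind (a rough sketch here,
> not your exact test program) has each rank print how many OpenMP threads
> it runs and which core each thread lands on:
>
>   #define _GNU_SOURCE
>   #include <mpi.h>
>   #include <omp.h>
>   #include <stdio.h>
>   #include <sched.h>
>
>   int main(int argc, char **argv)
>   {
>       int provided, rank;
>       /* FUNNELED is enough here: only the master thread calls MPI */
>       MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
>       MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>   #pragma omp parallel
>       printf("rank %d: thread %d of %d on core %d\n", rank,
>              omp_get_thread_num(), omp_get_num_threads(), sched_getcpu());
>       MPI_Finalize();
>       return 0;
>   }
>
> A program like this can report several threads per rank even when they are
> all pinned to the same core, which is why the CPU load per process in the
> real run was the more telling check for me.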
>
> Thanks for all your help.
>
> MC
>
> On Wed, 2 Mar 2011, Ben Truscott wrote:
>
>> Dear Martin
>>
>> I'm working with a configuration very much like yours. I wonder, did you
>> remember to set IPATH_NO_CPUAFFINITY=1? Without this, the PSM library makes
>> its own affinity settings that pre-empt those made by OpenMP.
>>
>> Regards
>>
>> Ben Truscott
>> School of Chemistry
>> University of Bristol
>>
>>>  Hi all,
>>>
>>>  I am having a strange problem with MVAPICH2 - the builds I make myself
>>>  don't seem to run multi-threaded OpenMP programs. I suspect it's some
>>>  kind of configuration issue on my end, since the MVAPICH2 from the
>>>  stock OFED distribution handles the multithreading fine.
>>>
>>>  Also, I have been building OpenMPI with fairly standard options and the
>>>  OpenMP there works fine.
>>>
>>>  Does anyone have similar experience, or is there some trick to building
>>>  or running MVAPICH2 to get multiple OpenMP threads going?
>>>
>>>  I configure as:
>>>  configure --enable-romio --with-file-system=nfs+ufs
>>>  --with-device=ch3:psm
>>>  with Intel or GNU compilers.
>>>
>>>  Then run as:
>>>  mpiexec.hydra -genv OMP_NUM_THREADS 4 -genv MV2_ENABLE_AFFINITY 0 -n 16
>>>  -machinefile nodefile ./program
>>>
>>>  This is all on RHEL 5.5 with either QLogic or Mellanox IB adapters.
>>>
>>>  Thanks,
>>>  MC
>>>
>>>  --
>>>  Martin Cuma
>>>  Center for High Performance Computing
>>>  University of Utah
>>
>>
>
> --
> Martin Cuma
> Center for High Performance Computing
> University of Utah
>
>



-- 
Sayantan Sur

Research Scientist
Department of Computer Science
http://www.cse.ohio-state.edu/~surs


