[mvapich-discuss] Affinity problem

Dhabaleswar Panda panda at cse.ohio-state.edu
Wed Jan 18 10:29:25 EST 2012


From your message, it looks like you are using MVAPICH1.

For MVAPICH1,

  - if you are using the Gen2 interface, you should use
    VIADEV_USE_AFFINITY=0.

  - If you are using the Gen2-Hybrid interface, you should use
    MV_USE_AFFINITY=0.

You do not need to change anything else.
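
For example, with the Gen2 interface the run from your message could be
launched roughly like this (a sketch; whether an exported variable reaches
the remote processes depends on your launcher, and with mpirun_rsh the
assignment can also be given on the command line just before the
executable name):

  export VIADEV_USE_AFFINITY=0
  mpirun -np 10 -hostfile hostfile a.out

The Gen2-Hybrid case is the same with MV_USE_AFFINITY=0 instead.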

Let us know if this solves your problem.
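
If you want to double-check the placement after setting the variable, a
small hybrid test along the following lines prints which core each OpenMP
thread ends up on (a minimal sketch, not taken from your program;
sched_getcpu() assumes Linux/glibc):

/* Rough sketch: prints how many OpenMP threads start and which core
 * each one runs on. Build with something like:
 *     mpicc -fopenmp placement.c -o placement
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <sched.h>
#include <omp.h>
#include <mpi.h>

int main (int argc, char **argv)
{
    int rank;

    MPI_Init (&argc, &argv);
    MPI_Comm_rank (MPI_COMM_WORLD, &rank);

    #pragma omp parallel
    {
        /* With affinity disabled, the reported CPUs should differ
         * across threads instead of all showing the parent core. */
        printf ("rank %d: thread %d of %d on cpu %d\n",
                rank, omp_get_thread_num (), omp_get_num_threads (),
                sched_getcpu ());
    }

    MPI_Finalize ();
    return 0;
}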

Thanks,

DK

On Wed, 18 Jan 2012, Vsevolod Nikonorov wrote:

> Good afternoon.
>
> I have a problem with process pinning when I run hybrid parallel tasks (MPI + OpenMP). After starting on a cluster, with mpirun, a task that contains something like this:
>
> #pragma omp parallel
> {
> 	printf ("%d\n", omp_get_num_threads ());
> }
>
> (mpirun -np 10 -hostfile hostfile a.out), I see that only one thread is started; if I specify the number of threads manually like this:
>
> omp_set_num_threads (10);
>
> I see 10 threads, but all of them run on the same processor core, which makes performance really low. I suspect there is some default pinning: all threads are pinned to the parent core (the core that started the parent process), and for the same reason OpenMP is unable to discover the other cores automatically.
>
> So my question is: how can I change the default pinning configuration? There is of course some information on this in the user guide, which offers the following recommendations:
>
> 1. set the VIADEV_USE_AFFINITY environment variable to 0;
> 2. set the MV_USE_AFFINITY variable to 0.
>
> There is also a comment saying that those variables will not take effect unless _AFFINITY_ is set. I tried to find information about it in the user guide and on Google, but all I found was a citation from some header files (#ifdef _AFFINITY_ in stdio.h).
>
> Could you help me solve this problem?
>
> Thanks in advance!
>
> --
> Vsevolod Nikonorov <v.nikonorov at nikiet.ru>
> _______________________________________________
> mvapich-discuss mailing list
> mvapich-discuss at cse.ohio-state.edu
> http://mail.cse.ohio-state.edu/mailman/listinfo/mvapich-discuss
>


