[mvapich-discuss] Affinity problem

Vsevolod Nikonorov v.nikonorov at nikiet.ru
Wed Jan 18 06:04:36 EST 2012


Good afternoon.

I have a problem with process pinning when I execute hybrid parallel (MPI + OpenMP) tasks. After starting my task with mpirun on a cluster, with the program containing something like this:

#pragma omp parallel
{
	printf ("%d\n", omp_get_num_threads ());
}

(mpirun -np 10 -hostfile hostfile a.out), I see that only one thread is started. If I specify the number of threads manually like this:

omp_set_num_threads (10);

I see 10 threads, but all of them run on the same processor core, which makes performance really low. I suspect there is some default pinning: all threads are pinned to the parent core (the core on which the parent process was started), and OpenMP's automatic detection of the other cores fails for the same reason.

So my question is: how can I change the default pinning configuration? There is of course some information on this in the user guide, which consists of the following recommendations:

1. set the VIADEV_USE_AFFINITY environment variable to 0;
2. set the MV_USE_AFFINITY variable to 0.
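Put together, the launch would look something like the sketch below. The variable names are the ones quoted from the user guide (which one takes effect presumably depends on the MVAPICH version); the hostfile name and thread count are just the values from the example above:

```shell
# Disable MVAPICH's default CPU affinity before launching
# (variable names as given in the user guide).
export VIADEV_USE_AFFINITY=0
export MV_USE_AFFINITY=0

# Tell OpenMP how many threads each MPI rank should spawn.
export OMP_NUM_THREADS=10

# Launch as before (commented out here; requires a working MPI setup):
# mpirun -np 10 -hostfile hostfile ./a.out
```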

There is also a comment saying that those variables will not take effect unless _AFFINITY_ is set. I tried to find information about _AFFINITY_ in the user guide and on Google, but all I found were citations from some header files (#ifdef _AFFINITY_ in stdio.h).

Could you help me solve this problem?

Thanks in advance!

-- 
Vsevolod Nikonorov <v.nikonorov at nikiet.ru>
