[mvapich-discuss] OSU benchmarks GPU initialization

sreeram potluri potluri at cse.ohio-state.edu
Sun Aug 25 16:51:07 EDT 2013


Hi Jens,

Thanks for the post. The failure occurs because you are not using the
"get_local_rank" script (distributed with the OSU Micro-Benchmarks), which
exports the "LOCAL_RANK" variable. Your command should look like the
following (assuming you are in the OSU benchmarks directory):

../src/pm/hydra/mpiexec.hydra -np 2 ./get_local_rank ./osu_bw D D

Without the script, LOCAL_RANK is not set, so every process falls back to
the same GPU (device 0). This causes the failure because your GPUs are in
thread-exclusive mode.
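For reference, here is a rough sketch (not the actual osu_bw.c source) of
the device-selection pattern the benchmarks follow, where the GPU is chosen
from LOCAL_RANK before MPI_Init:

#include <stdlib.h>
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char *argv[])
{
    char *str;
    int dev_count = 0, dev_id = 0;

    /* Pick the GPU before MPI_Init, since MVAPICH2 (before 2.0a)
       initializes CUDA inside MPI_Init. */
    if ((str = getenv("LOCAL_RANK")) != NULL) {
        cudaGetDeviceCount(&dev_count);
        if (dev_count > 0)
            dev_id = atoi(str) % dev_count;
    }
    /* If LOCAL_RANK is unset, dev_id stays 0 and every local process
       ends up on the same GPU. */
    cudaSetDevice(dev_id);

    MPI_Init(&argc, &argv);
    /* ... benchmark body using device buffers ... */
    MPI_Finalize();
    return 0;
}

The get_local_rank script simply exports LOCAL_RANK for each process before
launching the benchmark binary, so the branch above is taken.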

Please refer to the "Setting GPU affinity" section of the benchmarks README
available here:
http://mvapich.cse.ohio-state.edu/svn/mpi-benchmarks/branches/4.1/README

One thing to note is that MVAPICH2 2.0a introduces dynamic CUDA
initialization, which allows the GPU device to be selected even after
MPI_Init. However, the OSU Micro-Benchmarks still select the device before
MPI_Init so that they work with earlier versions of MVAPICH2 and with other
MPI libraries.
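As a sketch (again, not how the distributed benchmarks are written), with
dynamic CUDA initialization the device could instead be chosen after
MPI_Init using the rank:

#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char *argv[])
{
    int rank = 0, dev_count = 1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    cudaGetDeviceCount(&dev_count);
    /* Assumes a single node (or the same number of GPUs per node);
       otherwise map the local rank instead of the global rank. */
    cudaSetDevice(rank % dev_count);

    /* ... communication on device buffers ... */
    MPI_Finalize();
    return 0;
}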

Best
Sreeram Potluri





On Sun, Aug 25, 2013 at 4:27 PM, Jens Glaser <jglaser at umn.edu> wrote:

> Hi,
>
> I had trouble running the osu_bw test with the new MVAPICH2 2.0a.
>
> I got
> ../src/pm/hydra/mpiexec.hydra -np 2 ./osu_bw D D
> Error initializing cuda context
>
> After changing line 360 in osu_bw.c
> to read
>
> if ((str = getenv("MV2_COMM_WORLD_LOCAL_RANK")) != NULL) {
>
> instead of
>
> if ((str = getenv("LOCAL_RANK")) != NULL) {
>
> and after recompiling, it worked.
>
> Hope that will help others, too.
>
> Best
> Jens
>
> P.S.: I was running in thread exclusive mode, if that's of interest.