[mvapich-discuss] collectives fail under mvapich2-1.0 (fwd)
amith rajith mamidala
mamidala at cse.ohio-state.edu
Mon Oct 1 14:21:12 EDT 2007
Hi Edmund,
We were able to run the 12-process collectives test on 3 nodes.
Can you give us some details on how the processes were launched,
e.g., block, cyclic, or any other distribution?
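(For clarity, the two placements map ranks to nodes differently. A minimal sketch of the mapping in plain Python, assuming 12 ranks over 3 nodes with 4 slots each; the function name is just illustrative, not part of any MPI launcher:)

```python
# Sketch: rank-to-node placement for nranks processes on nnodes nodes
# with ppn slots per node.
# "block"  fills one node before moving to the next;
# "cyclic" round-robins ranks across the nodes.
def placement(nranks, nnodes, ppn, mode):
    if mode == "block":
        return [rank // ppn for rank in range(nranks)]
    if mode == "cyclic":
        return [rank % nnodes for rank in range(nranks)]
    raise ValueError("unknown mode: %s" % mode)

print(placement(12, 3, 4, "block"))   # ranks 0-3 on node 0, 4-7 on node 1, ...
print(placement(12, 3, 4, "cyclic"))  # ranks 0,3,6,9 on node 0, 1,4,7,10 on node 1, ...
```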
thanks,
-Amith.
On Thu, 27 Sep 2007, Edmund Sumbar wrote:
> Edmund Sumbar wrote:
> > I'll try running the SKaMPI tests again. Maybe
> > I missed something, as with the mvapich2 tests.
>
> I recompiled and reran SKaMPI pt2pt, coll,
> onesided, and mmisc tests on 3 nodes, 4
> processors per node.
>
> pt2pt and mmisc succeeded, while coll and
> onesided failed (stalled). Any ideas?
>
> For what it's worth, here are the tails of
> the output files...
>
>
> $ tail coll_ib-3x4.sko
> # SKaMPI Version 5.0 rev. 191
>
> begin result "MPI_Bcast-nodes-short"
> nodes= 2 1024 3.8 0.2 39 2.9 3.6
> nodes= 3 1024 6.6 0.4 38 4.0 6.3 4.9
> nodes= 4 1024 9.2 0.2 32 4.6 7.7 7.6 8.6
>
>
> $ tail onesided_ib-3x4.sko
> cpus= 8 4 50051.7 1.3 8 50051.7 --- --- --- --- --- --- ---
> cpus= 9 4 50051.5 0.7 8 50051.5 --- --- --- --- --- --- --- ---
> cpus= 10 4 50047.7 1.6 8 50047.7 --- --- --- --- --- --- --- --- ---
> cpus= 11 4 50058.2 2.7 8 50058.2 --- --- --- --- --- --- --- --- --- ---
> cpus= 12 4 50074.3 2.8 8 50074.3 --- --- --- --- --- --- --- --- --- --- ---
> end result "MPI_Win_wait delayed,small"
> # duration = 9.00 sec
>
> begin result "MPI_Win_wait delayed without MPI_Put"
> cpus= 2 1048576 50025.0 1.4 8 50025.0 ---
>
>
> --
> Ed[mund [Sumbar]]
> AICT Research Support, Univ of Alberta
> _______________________________________________
> mvapich-discuss mailing list
> mvapich-discuss at cse.ohio-state.edu
> http://mail.cse.ohio-state.edu/mailman/listinfo/mvapich-discuss
>