[mvapich-discuss] Is there a way to make ch3:psm work on SLURM-based systems?

Jonathan Perkins perkinjo at cse.ohio-state.edu
Fri Aug 3 22:41:52 EDT 2012


Currently, PSM supports using mpirun_rsh as the launcher with slurm.  We
recommend using the PSM interface because it provides the best
performance and scalability.  In order to use it you will need to
remove the --with-pm=no and --with-pmi=slurm options that you are
currently using.
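
For reference, here is a rough sketch of what the re-configuration and an
mpirun_rsh launch could look like; the install prefix and hostfile below
are placeholders, not details taken from your setup:

    # configure MVAPICH2 1.8 for the QLogic PSM interface
    # (drop the --with-pm=no and --with-pmi=slurm options)
    ./configure --prefix=/opt/mvapich2-1.8-psm --with-device=ch3:psm
    make && make install

    # launch with mpirun_rsh across the nodes of the slurm allocation,
    # listed one per line in the hostfile
    mpirun_rsh -np 64 -hostfile ./hosts ./osu_mbw_mr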

At this time, there is no option to suppress these warning messages when
the PSM interface is not in use.  We will look into adding one.

On Fri, Aug 03, 2012 at 07:33:14PM +0000, Gunter, David O wrote:
> We have a slurm-based system running Livermore's CHAOS stack.  This
> system also uses the QLogic InfiniPath card, and I know that we should
> be configuring mvapich2-1.8 with the '--with-device=ch3:psm' option, but
> when we do this we get an error that slurm is not supported.  So we
> configure the old way, '--with-pmi=slurm --with-pm=no --with-rdma=gen2',
> and things seem to work.  However, when testing via the OSU benchmarks
> we see a huge number of warning messages telling us to re-configure the
> library with the '--with-device=ch3:psm' configure option.  (See below)
> 
> Is there a way to use the ch3:psm device with slurm, or, failing that, a
> way to turn off this message to keep it from appearing?
> 
> Thanks,
> david
> 
> (dog at ml005 12%) srun -N 16 -n 64 osu_mbw_mr
> [ml005.localdomain:mpi_rank_0][rdma_find_network_type] QLogic IB card detected in system
> [ml005.localdomain:mpi_rank_0][rdma_find_network_type] Please re-configure the library with the '--with-device=ch3:psm' configure option for best performance
> [ml027.localdomain:mpi_rank_63][rdma_find_network_type] QLogic IB card detected in system
> [many lines removed]
> [ml017.localdomain:mpi_rank_21][rdma_find_network_type] Please re-configure the library with the '--with-device=ch3:psm' configure option for best performance
> # OSU MPI Multiple Bandwidth / Message Rate Test v3.6
> # [ pairs: 32 ] [ window size: 64 ]
> # Size                  MB/s        Messages/s
> 1                     116.11      116111578.70
> 2                     270.93      135466560.35
> 4                     542.55      135637685.02
> 8                    1066.58      133321971.01
> 16                   2150.17      134385710.14
> 32                   4343.15      135723409.57
> 64                   8584.57      134133894.32
> 128                 16496.80      128881239.19
> 256                 31527.22      123153184.11
> 512                 57838.59      112965999.37
> 1024                95724.16       93480624.57
> 2048               171346.90       83665477.67
> 4096               259874.23       63445857.09
> 8192               345351.12       42157119.12
> 16384              351755.78       21469469.11
> 32768              264519.29        8072488.10
> 65536              331644.50        5060493.44
> 131072             356726.41        2721606.55
> 262144             337601.17        1287846.27
> 524288             331887.68         633025.51
> 1048576            334439.17         318946.04
> 2097152            334085.32         159304.30
> 4194304            313676.98          74786.42
> (dog at ml005 13%) 
> 
> --
> David Gunter
> HPC-3: Infrastructure Team
> Los Alamos National Laboratory
> 
> _______________________________________________
> mvapich-discuss mailing list
> mvapich-discuss at cse.ohio-state.edu
> http://mail.cse.ohio-state.edu/mailman/listinfo/mvapich-discuss
> 

-- 
Jonathan Perkins
http://www.cse.ohio-state.edu/~perkinjo

