[mvapich-discuss] MVAPICH environment variables for preconnecting paths and for process-to-core mappings

Oppe, Thomas C ERDC-RDE-ITL-MS Contractor Thomas.C.Oppe at erdc.dren.mil
Fri Jul 24 10:51:13 EDT 2020


Dear Sir:

I am working with the MVAPICH2 libraries on a Cray CS500 with 64 1.5 GHz AMD EPYC 7542 (Rome) cores per node (2 sockets per node, 32 cores per socket).

Question 1:  Is there an environment variable for pre-establishing all possible pairwise connections from each rank to every other rank, rather than setting up connections dynamically as needed when a communication request is first made?  Many other MPI implementations have this feature, for example:

Intel MPI:  export I_MPI_DYNAMIC_CONNECTION=0

Cray MPICH:  export MPICH_GNI_DYNAMIC_CONN=disabled

HPE/SGI MPT:  export MPI_CONNECTIONS_THRESHOLD=<val>, where <val> is greater than the number of ranks being used.

Open MPI:  export OMPI_MCA_mpi_preconnect_mpi=1

I think the corresponding MVAPICH2 variable may be

export MV2_ON_DEMAND_THRESHOLD=<val>, where <val> is greater than the number of ranks being used.  Is that correct?
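
For concreteness, here is the kind of launch I have in mind, assuming the mpiexec launcher forwards exported environment variables and that MV2_ON_DEMAND_THRESHOLD works the way I have guessed; both of those are assumptions on my part:

    # hypothetical 512-rank job: set the threshold above the rank count so
    # that (I hope) all connections are established during MPI_Init rather
    # than on demand at first use
    export MV2_ON_DEMAND_THRESHOLD=1024
    mpiexec -np 512 ./my_app

Here ./my_app and the rank count are just placeholders for one of our benchmark runs.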

Question 2:  Is there an MVAPICH2 environment variable for specifying a process-to-core placement mapping?  For example, if I want to space out the processes on a node when the node is only 1/4 populated, say putting 16 processes on a 64-core node, I can do the following in other implementations:

Intel MPI:  export I_MPI_PIN_PROCESSOR_LIST="shift=4"

Cray MPICH:   aprun -d 4 -n <procs>

HPE/SGI MPT:  export MPI_DSM_CPULIST="0-60/4:allhosts"   or
              export MPI_DSM_CPULIST="0,4,8,12,16,...,60:allhosts"

Open MPI:  not sure, maybe
export OMPI_MCA_rmaps_base_mapping_policy=ppr:16:node:PE=1
export OMPI_MCA_rmaps_base_n_pernode=16

What is the MVAPICH2 environment variable for process pinning and process-to-core mapping?
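
For concreteness, the layout I am after is one rank on every fourth core, i.e. cores 0, 4, 8, ..., 60 on each 64-core node.  From skimming the MVAPICH2 user guide I suspect the affinity variables below are the relevant ones, but the names and syntax here are my unverified guesses, so please correct anything that is wrong:

    # goal: 16 ranks per 64-core node, one rank on every fourth core
    # (flat 0-63 core numbering across both sockets); variable names and
    # syntax are my guesses, not verified
    export MV2_ENABLE_AFFINITY=1
    export MV2_CPU_MAPPING=0:4:8:12:16:20:24:28:32:36:40:44:48:52:56:60
    mpiexec -np 16 ./my_app

If there is instead a higher-level binding policy variable that achieves the same spacing, that would work just as well for us.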

Thank you for any information.

Tom Oppe

-----------------------------------------------------------------------------
Thomas C. Oppe
HPCMP Benchmarking Team
HITS Team SAIC
Thomas.C.Oppe at erdc.dren.mil
Work:  (601) 634-2797
Cell:    (601) 642-6391
-----------------------------------------------------------------------------


