[mvapich-discuss] Best configure / environment settings for Mellanox QDR with RH6.6-native InfiniBand support?

Chris Green greenc at fnal.gov
Wed Jan 14 10:52:10 EST 2015


Hi,

This is a question separated out from my previous issue ("Compilation 
error for mvapich2-2.0.1 with disabled C++ bindings"), per Jonathan's 
suggestion.

A scientific collaboration with which we work uses Mellanox QDR 
cards as part of a multi-node / multi-core data acquisition and processing 
chain that we developed using MPI, and our development systems use the 
same cards. Until recently we had been using OFED 1.5.4.1 with mvapich 
1.9 on SLF 6.3-ish (Scientific Linux Fermi is a RHEL variant), but we are 
switching to the RHEL 6.6-native InfiniBand drivers and support 
libraries and are therefore in the position of building mvapich 
ourselves (and providing recommendations on how to build and use it to 
our collaborators).

Because we know that the mvapich libraries will be linked against code 
compiled with compilers other than the system's native GCC (usually 
more modern GCC versions), we had to choose between tying the mvapich 
build to a particular GCC version and disabling the C++ bindings. 
Since we don't use them for this application, we chose the latter. Here, 
then, is our configure command:

./configure --prefix=/usr/local/mvapich2-2.0.1 --enable-fast=O3,ndebug --enable-f77 --enable-fc \
             --disable-cxx --enable-romio --enable-versioning --enable-threads=runtime --enable-registration-cache \
             --enable-rsh --enable-shared --enable-static --enable-yield=sched_yield --enable-rdma-cm --with-pm=hydra
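
As a quick sanity check of a build like this, something along the following 
lines can be used to confirm which MPI library an application actually picks 
up at run time; this is an illustrative sketch only (not our production test 
code), compiled with the mpicc under the --prefix given above:

/* version_check.c -- minimal sanity check for an MVAPICH2 installation.
 * Illustrative sketch only.  Build with the mpicc from the new install,
 * e.g. /usr/local/mvapich2-2.0.1/bin/mpicc, and run a couple of ranks. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    char version[MPI_MAX_LIBRARY_VERSION_STRING];
    int rank, size, len;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        MPI_Get_library_version(version, &len);
        printf("ranks: %d\nlibrary: %s\n", size, version);
    }

    MPI_Finalize();
    return 0;
}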

Can anyone tell me whether there is a better configuration for the use 
outlined above, or anything we should be doing by way of environment 
variables or other system configuration to get the best bandwidth? In the 
unenlightened past we found ourselves in the somewhat strange position of 
getting better inter-node bandwidth than intra-node bandwidth, so I know 
that what we were doing in the OFED era wasn't necessarily optimal. Our 
MPI use is generally centered around MPI_Isend() and MPI_Irecv(), if 
that is relevant.
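
For concreteness, the point-to-point pattern in question is essentially 
the following (a stripped-down sketch with placeholder buffer sizes and 
rank pairing, not our actual DAQ code):

/* isend_irecv_sketch.c -- stripped-down version of the nonblocking
 * exchange pattern described above; the buffer size and peer choice
 * are placeholders, not our actual application parameters. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    const int count = 1 << 20;          /* 1 Mi doubles per message (placeholder) */
    double *sendbuf, *recvbuf;
    int rank, size, peer;
    MPI_Request reqs[2];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    sendbuf = calloc(count, sizeof *sendbuf);
    recvbuf = calloc(count, sizeof *recvbuf);
    peer = rank ^ 1;                    /* pair up neighbouring ranks */

    if (peer < size) {
        /* Post the receive first, then the send, and wait on both. */
        MPI_Irecv(recvbuf, count, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(sendbuf, count, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[1]);
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    }

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}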

Thanks for any help you can give,

Chris.

-- 
Chris Green <greenc at fnal.gov>, FNAL CS/SCD/ADSS/SSI/TAC;
'phone (630) 840-2167; Skype: chris.h.green;
IM: greenc at jabber.fnal.gov, chissgreen (AIM, Yahoo),
chissg at hotmail.com (MSNM), chris.h.green (Google Talk).
