[mvapich-discuss] mvapich1 roadmap if any

Walid walid.shaari at gmail.com
Fri Mar 23 03:51:01 EDT 2012


Dear Dhabaleswar Panda,

Thanks for your prompt response. Please find below what I can answer right now; I will need to get back to you later this Sunday with more details.

On 21 March 2012 15:55, Dhabaleswar Panda <panda at cse.ohio-state.edu> wrote:

>
>
> I have a few questions/feedbacks for you:
>
> - Which version of MVAPICH2 you are using: 1.7 or 1.8a2 is recommended.
>
1.7, compiled with the Intel 10 compilers.


> - What IB adapter you have - Mellanox or QLogic.
>
>
Both; mostly QLogic, with one or two clusters on Mellanox.


> - Which interface you are using: OFA-IB-CH3 is the `default' and
>  `recommended one' for Mellanox adapter. Similarly, PSM-CH3 is
>  recommended one for QLogic adapters.
>
The configuration options we used previously for the Mellanox adapter are as follows:

./configure --prefix=/usr/local/mpi/mvapich2/intel10/1.7 \
  --with-device=ch3:psm --enable-g=dbg --enable-romio --enable-debuginfo \
  --with-file-system=panfs+nfs+ufs --with-psm-include=/usr/include \
  --with-psm=/usr/lib64

The configuration options for QLogic are:

./configure --prefix=/usr/local/mpi/mvapich2/intel10/1.7 --enable-g=dbg \
  --enable-romio --enable-debuginfo --with-file-system=panfs+nfs+ufs \
  --with-rdma=gen2 --with-ib-libpath=/usr/lib64

We do need debugging with TotalView, and we also need MPI-IO access over NFS and Panasas file systems.
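
As a sanity check on which interface each build actually ended up with, I suppose we could query the installation itself; a minimal sketch, assuming the mpiname utility is shipped in the build's bin directory:

/usr/local/mpi/mvapich2/intel10/1.7/bin/mpiname -a

which, if I understand it correctly, echoes back the version, device and configure options that were used.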


> - Are you using `mpirun_rsh' (recommended one) for job launching?
>
Yes.
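
A typical launch for us looks roughly like the sketch below (the core count and hostfile are placeholders, and I am assuming -ssh and -hostfile are the right mpirun_rsh options):

mpirun_rsh -ssh -np 16 -hostfile ./hosts ./app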

> - Are you using `LIMIC2' option for intra-node communication? This will
>  provide you very good performance if your applications are using medium
>  and large messages.
>
No, I am not aware of what that is. That is one of our problems: we are usually behind in finding out which features or options to use, beyond what is in the user manual.
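
If I read the documentation correctly, LIMIC2 is a build-time option; a minimal sketch of how we might enable it, assuming LIMIC2 is installed under /opt/limic2 (the path is a placeholder), would be to add

--with-limic2=/opt/limic2

to the configure line, and I believe it can then be switched off at run time with MV2_SMP_USE_LIMIC2=0 if it causes problems. Please correct me if that is not the right flag.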

> - Are you using the `Hybrid' feature of MVAPICH2? For large jobs
>  (1K cores and higher), this feature should give you good performance
>  and scalability with reduced memory footprint.
>
No. Is it a run-time option?
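
If it is, a minimal sketch of how we might try it on a large job, assuming MV2_USE_UD_HYBRID and MV2_HYBRID_ENABLE_THRESHOLD are the relevant parameters and that this applies to the OFA-IB-CH3 (gen2) build rather than the PSM one (core count and threshold values are placeholders):

mpirun_rsh -ssh -np 1024 -hostfile ./hosts MV2_USE_UD_HYBRID=1 MV2_HYBRID_ENABLE_THRESHOLD=1024 ./app

Please confirm whether that is the right way to enable it.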


> These are some general guidelines. In addition to the above points, it
> will be good if you can let us know the configuration flags you are using
> for configuring MVAPICH2 and any runtime parameters you are using when
> running your job.


These are the run-time settings we currently export:

RSHCOMMAND=/usr/bin/ssh
LD_LIBRARY_PATH=/usr/local/mpi/mvapich2/intel10/1.7/lib:/usr/local/intel/10.1.008/cce/10.1.008//lib/:/usr/local/intel/10.1.008/fce/10.1.008//lib
VIADEV_USE_SHMEM_BCAST=0
VIADEV_USE_SHMEM_ALLREDUCE=0
VIADEV_USE_SHMEM_REDUCE=0
VIADEV_USE_SHMEM_BARRIER=0
VIADEV_USE_SHMEM_ALLGATHER=0
VIADEV_CLUSTER_SIZE=LARGE
VIADEV_SMPI_LENGTH_QUEUE=1025
VIADEV_SMP_EAGERSIZE=512
VIADEV_USE_AFFINITY=1
VIADEV_CPU_MAPPING=0,1,2,3,4,5,6,7,8,9,10,11,12
SMP_SEND_BUF_SIZE=256000
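
For what it is worth, I realise most of these are still the old VIADEV_ names from our MVAPICH1 setup, so MVAPICH2 1.7 is probably ignoring them. A rough sketch of what I think the MV2_ equivalents would look like, mirroring the values above (the parameter names are my best reading of the user guide, so please correct me):

MV2_USE_SHMEM_COLL=0
MV2_SMPI_LENGTH_QUEUE=1025
MV2_SMP_EAGERSIZE=512
MV2_SMP_SEND_BUF_SIZE=256000
MV2_ENABLE_AFFINITY=1
MV2_CPU_MAPPING=0:1:2:3:4:5:6:7:8:9:10:11:12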

Much appreciated, and thank you in advance.

regards

Walid