[mvapich-discuss] Po

wei huang huanwei at cse.ohio-state.edu
Tue Feb 21 12:44:09 EST 2006


>Here is a question that I *think* is non-trivial, and again, any help is
>deeply appreciated. Has anyone tried using ch3 device ssm channel on
>anything other than an Intel platform? (This is the channel that uses
>sockets for remote communication and shared memory for local
>communication). Our architecture is a cluster of SMP MIPS64 cores, so
>this device/channel would be ideal. However, in the inline function
>MPIDU_Process_lock() defined in
>src/mpid/common/locks/mpidu_process_locks.h, the three #ifdefs
>(HAVE_INTERLOCKEDEXCHANGE, HAVE__INTERLOCKEDEXCHANGE and
>HAVE_COMPARE_AND_SWAP) all seem to be defined only for intel processors
>(actually the first two seem to be defined only for the IA64
>architecture.)

Hi Durga,

I'm not sure whether you really mean to use sockets for inter-node communication.

As you know, MVAPICH2 uses InfiniBand for inter-node communication, which is
a much better option if your cluster is equipped with InfiniBand. MVAPICH2
also uses shared memory for intra-node operations. So if you have
InfiniBand, the ideal choice would be osu_ch3:vapi or osu_ch3:gen2. (SMP
support is enabled by default.)
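For what it's worth, selecting one of those devices at build time would look
roughly like the sketch below. The exact flag spelling and install prefix
vary by MVAPICH2 release, so treat this as an illustration and check the
user guide for your version:

```shell
# Hypothetical build sketch: select the gen2 (OpenIB) device.
# The device name osu_ch3:gen2 is from this thread; the --prefix
# path and exact configure option spelling are assumptions.
./configure --prefix=/usr/local/mvapich2 --with-device=osu_ch3:gen2
make
make install
```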

The ssm channel comes with the original MPICH2 package from Argonne
National Laboratory, on which our MVAPICH2 is based. If you really mean to
use that device instead of the devices under osu_ch3, you can post your
questions to mpich2-maint at mcs.anl.gov or mpich-discuss at mcs.anl.gov.

Thanks.

Regards,
Wei Huang

774 Dreese Lab, 2015 Neil Ave,
Dept. of Computer Science and Engineering
Ohio State University
OH 43210
Tel: (614)292-8501
