[mvapich-discuss] RE: Po

Darius Buntinas buntinas at mcs.anl.gov
Tue Feb 21 16:10:16 EST 2006


MIPS does have ll/sc instructions, so you should be able to write a lock 
using this.
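
Off the top of my head, a lock built on ll/sc would look something like
the sketch below (completely untested; the function names are made up
and it assumes GCC inline assembly on MIPS32):

static inline void my_spin_lock(volatile unsigned int *lock)
{
    unsigned int tmp;

    __asm__ __volatile__(
    "   .set  noreorder                        \n"
    "1: ll    %0, %1   # load-linked lock word \n"
    "   bnez  %0, 1b   # nonzero = held: spin  \n"
    "    li   %0, 1    # (delay slot)          \n"
    "   sc    %0, %1   # try to claim it       \n"
    "   beqz  %0, 1b   # sc failed: retry      \n"
    "    sync          # (delay slot) barrier  \n"
    "   .set  reorder                          \n"
    : "=&r" (tmp), "+m" (*lock)
    :
    : "memory");
}

static inline void my_spin_unlock(volatile unsigned int *lock)
{
    /* order everything done inside the critical section,
       then a plain store releases the lock */
    __asm__ __volatile__("sync" : : : "memory");
    *lock = 0;
}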

Look in the linux kernel source (include/asm-mips/spinlock.h) for an 
example of a spin lock.  There's also bitops.h in the same directory that 
has things like test-and-set.
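
The atomic swap you mention below is the same ll/sc pattern, and it
needs no fiddling with interrupts: the sc simply fails if anything else
touched the word between the ll and the sc, and the loop retries.
Again a made-up, untested sketch:

static inline unsigned int
my_atomic_swap(volatile unsigned int *mem, unsigned int newval)
{
    unsigned int old, tmp;

    __asm__ __volatile__(
    "   .set  noreorder                          \n"
    "1: ll    %0, %2   # old = *mem (linked)     \n"
    "   move  %1, %3   # tmp = newval            \n"
    "   sc    %1, %2   # store iff still linked  \n"
    "   beqz  %1, 1b   # sc failed: retry        \n"
    "    nop           # (delay slot)            \n"
    "   .set  reorder                            \n"
    : "=&r" (old), "=&r" (tmp), "+m" (*mem)
    : "r" (newval)
    : "memory");

    return old;
}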

Darius

On Tue, 21 Feb 2006, Choudhury, Durga wrote:

> Hi Darius
>
> Thanks for the reply. Yes, you are in principle right; but the
> difficulty is that operations like an atomic memory swap are easy to
> implement on the x86 architecture because there are dedicated
> instructions available for this kind of thing. MIPS, however, is a
> RISC architecture where operating on memory directly (except for
> loads and stores) is not allowed, and this functionality is exactly
> what is needed to get the ssm channel to run. This is not to say that
> it cannot be done (one way could be to disable the timer interrupt
> while doing this), but I do not have enough experience to do this in
> the best possible way, and hence I need advice.
>
> Best regards
> Durga
>
> -----Original Message-----
> From: Darius Buntinas [mailto:buntinas at mcs.anl.gov]
> Sent: Tuesday, February 21, 2006 12:10 PM
> To: Choudhury, Durga
> Cc: mvapich-discuss at cse.ohio-state.edu
> Subject: Re: [mvapich-discuss] RE: Po
>
>
> I'm not very familiar with the ssm channel, so I can't say for sure,
> but I think it should just be a matter of implementing the missing
> functions.
>
> There are probably a handful of such functions that need an
> architecture-specific implementation in order to be efficient, and so
> don't have a default implementation.
>
> Asking on mpich2-maint may get you more answers.
>
> Darius
>
> On Tue, 21 Feb 2006, Choudhury, Durga wrote:
>
>> Hi Wei
>>
>> Thanks for the reply. Our nodes are NOT connected by InfiniBand but
>> by a proprietary backplane switch fabric, and this is why we cannot
>> port the OSU gen2 or VAPI channel directly. We are in the process of
>> implementing uDAPL for our switch fabric, after which we can run MPI
>> over uDAPL, but the uDAPL specs are so InfiniBand-centric that we are
>> having a hard time doing it.
>>
>> As of now, we have a TCP/IP stack running over our switch fabric and
>> this is how we currently run MPI over the ch3:sock device, with
>> limited performance. Changing the device to ch3:ssm ought to give us
>> a significant performance boost.
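>> (If I understand the MPICH2 build correctly, that should just be a
>> matter of reconfiguring with --with-device=ch3:ssm and rebuilding.)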
>>
>> I'll try asking the question on the ANL mailing list; in the
>> meanwhile, any leads from the users of this list will be highly
>> appreciated.
>>
>> Thanks
>> Durga
>>
>> -----Original Message-----
>> From: wei huang [mailto:huanwei at cse.ohio-state.edu]
>> Sent: Tuesday, February 21, 2006 10:44 AM
>> To: Choudhury, Durga; mvapich-discuss at cse.ohio-state.edu
>> Subject: Po
>>
>>> Here is a question that I *think* is non-trivial, and again, any
>>> help is deeply appreciated. Has anyone tried using the ch3 device's
>>> ssm channel on anything other than an Intel platform? (This is the
>>> channel that uses sockets for remote communication and shared
>>> memory for local communication.) Our architecture is a cluster of
>>> SMP MIPS64 cores, so this device/channel would be ideal. However, in
>>> the inline function MPIDU_Process_lock() defined in
>>> src/mpid/common/locks/mpidu_process_locks.h, the three #ifdefs
>>> (HAVE_INTERLOCKEDEXCHANGE, HAVE__INTERLOCKEDEXCHANGE and
>>> HAVE_COMPARE_AND_SWAP) all seem to be defined only for Intel
>>> processors (actually, the first two seem to be defined only for the
>>> IA64 architecture).
>>
>> Hi Durga,
>>
>> I am not sure whether you really mean to use sockets for inter-node
>> communication.
>>
>> As you know, MVAPICH2 uses InfiniBand for inter-node communication,
>> which will be a much better way if your cluster is equipped with
>> InfiniBand. MVAPICH2 also has shared memory for intra-node
>> operations. So if you have InfiniBand, it would be ideal for you to
>> choose osu_ch3:vapi or osu_ch3:gen2. (SMP should be used by default.)
>>
>> The ssm channel comes with the original MPICH2 package from Argonne
>> National Laboratory, on which our MVAPICH2 is based. If you really
>> mean to use that device instead of the devices under osu_ch3, you can
>> post questions to mpich2-maint at mcs.anl.gov or
>> mpich-discuss at mcs.anl.gov.
>>
>> Thanks.
>>
>> Regards,
>> Wei Huang
>>
>> 774 Dreese Lab, 2015 Neil Ave,
>> Dept. of Computer Science and Engineering
>> Ohio State University
>> OH 43210
>> Tel: (614)292-8501
>>
>>
>> _______________________________________________
>> mvapich-discuss mailing list
>> mvapich-discuss at cse.ohio-state.edu
>> http://mail.cse.ohio-state.edu/mailman/listinfo/mvapich-discuss

