[mvapich-discuss] Intel AMD run

yogeshwar sonawane yogyas at gmail.com
Sat Aug 2 03:47:08 EDT 2008


Hi Lei,

Thanks for the reply. It is working now.
As per your suggestion, we are now using the same binary, compiled on
one machine and shared with the other over the shared file system.
Earlier we were compiling both mvapich2 and PALLAS separately on each
machine.
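
To be concrete, the setup now looks roughly like this (commands are
from memory of the 1.0.x build script and MPD-based startup; hostnames,
paths, and the benchmark binary name are placeholders for ours):

  # build once, on a single node, installing into the NFS-exported prefix
  ./make.mvapich2.udapl        # PREFIX pointing at /nfs/shared/mvapich2

  # hosts file naming one Intel node and one AMD node
  $ cat hosts
  intel-node1
  amd-node1

  # both ranks then execute the very same binary from the shared mount
  mpdboot -n 2 -f hosts
  mpiexec -n 2 /nfs/shared/pallas/PMB-MPI1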

Now, do you have any recommendation on which machine we should compile
(Intel or AMD), or will either one do? Have you observed any
performance difference before, given the platform-specific thresholds
used for performance optimizations?

One more point. As Jasjit said,

>> I tried one other variation by compiling MPI (udapl) without two
>> flags, namely _SMP_ and RDMA_FAST_PATH.
>> It also ran fine that way.
>> So does it have anything to do with the RDMA_FAST_PATH flag ?

The above builds were done separately on the two machines, yet the run
still works. Is there anything related to this, i.e. do those flags
govern the platform-specific code paths that otherwise make mixed
builds hang?
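
In case a reproducer smaller than the full PALLAS suite is useful, a
bare-bones ping-pong along these lines (our own sketch, not the PALLAS
code) should exercise the same point-to-point path between the Intel
and AMD nodes:

#include <mpi.h>
#include <stdio.h>
#include <string.h>

enum { N = 4096 };

/* Minimal ping-pong between ranks 0 and 1: rank 0 sends a buffer and
 * rank 1 echoes it back.  If the heterogeneous run hangs, it should
 * hang here too, without any of the benchmark machinery. */
int main(int argc, char **argv)
{
    int rank, size;
    char buf[N];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {
        if (rank == 0)
            fprintf(stderr, "run with at least 2 processes\n");
        MPI_Finalize();
        return 1;
    }

    memset(buf, 0, sizeof(buf));

    if (rank == 0) {
        MPI_Send(buf, N, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(buf, N, MPI_CHAR, 1, 1, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("ping-pong completed\n");
    } else if (rank == 1) {
        MPI_Recv(buf, N, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        MPI_Send(buf, N, MPI_CHAR, 0, 1, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}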

With regards,
Yogeshwar

On Sat, Aug 2, 2008 at 3:25 AM, Lei Chai <chai.15 at osu.edu> wrote:
> Hi Jasjit,
>
> I just tried IMB-3.0 (the new PALLAS) and mvapich2 with one Intel and
> one AMD machine, and it ran fine. How did you compile mvapich2 and the
> program? If you compile on one machine and run the same binary on both
> through a shared file system (e.g. NFS), there should not be any
> problem. If you compile two versions, one on Intel and one on AMD, and
> try to run them together, you are likely to observe hangs, since we use
> platform-specific thresholds for performance optimizations; we
> therefore do not recommend doing so. If you continue to see the
> problem, please send us your output file (I didn't see it in your
> previous email).
>
> Thanks,
> Lei
>
>
> Jasjit Singh wrote:
>>
>> Hi
>>
>> I am running PALLAS v2.2 over mvapich2-1.0.1.
>> We have SilverStorm InfiniBand cards.
>> I am using OFED-1.2.5.3.
>> I have tried both the gen2 and udapl stacks; both give the same result
>> for all my runs.
>> The OS is RHEL4-U5 (kernel 2.6.9-55.ELlargesmp).
>>
>> First I ran it between two Intel (Xeon) machines with two processes.
>> It went through successfully.
>>
>> Then I ran between two AMD (Opteron) machines with the same number of
>> processes. It also went through.
>>
>> Thereafter I tried between one Intel machine and one AMD machine.
>> That time it didn't run; it got stuck at the very start (output file
>> is attached).
>>
>> Has anybody tried this kind of thing before?
>>
>> I have also tried, between Intel and AMD, a DAPL-level application
>> that does dat_ep_post_rdma_write() continuously in both directions.
>> That ran fine.
>>
>> So...
>> Does MPI have something specific to the Intel and/or AMD architectures ?
>> Is there a workaround I can do to make it run?
>> Or am I not supposed to run this across different architectures ?
>>
>> I tried one other variation by compiling MPI (udapl) without two
>> flags, namely _SMP_ and RDMA_FAST_PATH.
>> It also ran fine that way.
>> So does it have anything to do with the RDMA_FAST_PATH flag ?
>>
>> Thanks in advance,
>> Jasjit Singh
>>
>
> _______________________________________________
> mvapich-discuss mailing list
> mvapich-discuss at cse.ohio-state.edu
> http://mail.cse.ohio-state.edu/mailman/listinfo/mvapich-discuss
>

