[mvapich-discuss] Porting OSU MPI to a new network fabric

Panda, Dhabaleswar panda at cse.ohio-state.edu
Wed Feb 3 20:11:08 EST 2016


Thanks for your note and your interest in porting MVAPICH2 to a new network fabric.
We do not have any 'How-To' documentation on this. The code is open source.
You can take a look at the call stack and implement the lowest-level calls
against the new network fabric's API.

Thanks,

DK
________________________________________
From: mvapich-discuss-bounces at cse.ohio-state.edu on behalf of dpchoudh . [dpchoudh at gmail.com]
Sent: Tuesday, February 02, 2016 11:26 PM
To: mvapich-discuss at cse.ohio-state.edu
Subject: [mvapich-discuss] Porting OSU MPI to a new network fabric

Hello OSU developers,

Perhaps this question is a bit naive, but let me ask it anyway:

How hard would it be to add support for a new network fabric to the
OSU MPI stack? Is there any 'How-To' style documentation for this?

To put the question in context: I believe Texas Instruments has ported
OpenMPI to support their proprietary Hlink and (open source?) RapidIO
networks. Many years back I attempted a similar feat for the Dune SAND
fabric (now part of Broadcom) by implementing a new BTL in OpenMPI.

Is there any example or documentation of doing something similar with
the OSU implementation?

Thank you very much for any guidance.

Durga Choudhury

Life is complex. It has real and imaginary parts.
_______________________________________________
mvapich-discuss mailing list
mvapich-discuss at cse.ohio-state.edu
http://mailman.cse.ohio-state.edu/mailman/listinfo/mvapich-discuss


