[mvapich-discuss] support for Mellanox/Voltaire FCA MPI Collective Accelerator

Dhabaleswar Panda panda at cse.ohio-state.edu
Fri Sep 2 13:27:14 EDT 2011


Hi Jeff,

Sure, we will do it. Thanks for your suggestion. We always incorporate new
features as runtime options so that users and system integrators have the
flexibility to enable or disable them.
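
[Editor's note: as an illustration of the runtime-option mechanism referred
to here, MVAPICH2 exposes its tunables as MV2_*-prefixed environment
variables read at job launch. The parameter name below is hypothetical; an
actual FCA switch, if one were added, could be named differently.]

```shell
# Sketch, assuming an MV2_* toggle in the usual MVAPICH2 style.
# MV2_USE_FCA is a hypothetical name, not a documented parameter.
export MV2_USE_FCA=0   # 0 = disable the feature, 1 = enable it
echo "MV2_USE_FCA=${MV2_USE_FCA}"
```

In practice such variables are either exported in the job script or passed
on the launcher command line before the application name.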

Thanks,

DK


> Dr. Panda,
>
> When you do implement it, please make it a run-time selectable option.
>
> Thanks,
>
> -Jeff
>
>
> /**********************************************************/
> /* Jeff Konz                          jeffrey.konz at hp.com */
> /* Solutions Architect                   HPC Benchmarking */
> /* Americas Shared Solutions Architecture (SSA)           */
> /* Hewlett-Packard Company                                */
> /* Office: 248-491-7480              Mobile: 248-345-6857 */
> /**********************************************************/
>
>
> > -----Original Message-----
> > From: Dhabaleswar Panda [mailto:panda at cse.ohio-state.edu]
> > Sent: Friday, September 02, 2011 12:05 PM
> > To: Konz, Jeffrey (SSA Solution Centers)
> > Cc: mvapich-discuss at cse.ohio-state.edu
> > Subject: Re: [mvapich-discuss] support for Mellanox/Voltaire FCA MPI
> > Collective Accelerator
> >
> > MVAPICH2 currently does not support this. However, support for native
> > collective offload and MPI non-blocking calls using the Mellanox
> > CORE-Direct feature will be available in upcoming MVAPICH2 releases.
> >
> > Thanks,
> >
> > DK
> >
> > > Does MVAPICH2 support the Mellanox/Voltaire FCA MPI Collective
> > > Accelerator?
> > >
> > > If so, is this run-time selectable?
> > >
> > > Thanks,
> > >
> > > -Jeff
> > >
> > > _______________________________________________
> > > mvapich-discuss mailing list
> > > mvapich-discuss at cse.ohio-state.edu
> > > http://mail.cse.ohio-state.edu/mailman/listinfo/mvapich-discuss
> > >
>
>


