[mvapich-discuss] MPI_Cart_Create

amith rajith mamidala mamidala at cse.ohio-state.edu
Wed May 14 11:18:11 EDT 2008


Hi Chris,

For the TCP build, you can try it out with the GNU compilers first.
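
A quick sketch in case it helps (assuming the stock mvapich-1.0 source
tree; the compiler choice lives inside the build script, and gnu is the
default):

    cd mvapich-1.0
    ./make.mvapich.tcp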

Thanks,
Amith

On Wed, 14 May 2008 rcbord at wm.edu wrote:

> Amith,
>    Should I use the default gnu compilers or do you want me
> to build with pgi?
>
> Chris Bording
> Application Analyst
> High Performance Computing Group
> Information Technology
> The College of William and Mary
> (757)-221-3488
> rcbord at wm.edu
>
> On Wed, 14 May 2008, amith rajith mamidala wrote:
>
> > Hi Chris,
> >
> > Can you try out mvapich1 with tcp? You can use make.mvapich.tcp to
> > compile. This way we will narrow down the code path causing the problem.
> >
> > Thanks,
> > Amith
> >
> > On Wed, 14 May 2008 rcbord at wm.edu wrote:
> >
> >> Amith,
> >>    I was able to test mvapich1, but I get the same bad results:
> >> MPI_Cart_create is returning two different communicator values.
> >>
> >> When I add the environment variable VIADEV_USE_SHMEM_COLL=0,
> >> the MPI_Sendrecv call changes the dest and source values
> >> that were set by the MPI_Cart_shift function.
> >> Note that these errors only occur with F90; I have C/C++ codes
> >> that work correctly.
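> >>
> >> For reference, here is a rough, self-contained sketch of that
> >> shift-and-exchange pattern (a hypothetical reconstruction, not the
> >> actual user code; the program name and variables are made up, though
> >> the naming follows the test program further down the thread):
> >>
> >> Program shiftdemo
> >>    Implicit none
> >>    include "mpif.h"
> >>    Integer :: ierr, Pid, N_proc, comm2d
> >>    Integer :: dims(2)
> >>    Logical :: period(2)
> >>    Integer :: nbrleft, nbrright
> >>    Integer :: mpistatus(MPI_STATUS_SIZE)
> >>    Double Precision :: sval, rval
> >>
> >>    call MPI_INIT(ierr)
> >>    call MPI_COMM_SIZE(MPI_COMM_WORLD,N_proc,ierr)
> >>    dims = 0            ! let MPI_Dims_create pick the 2-D grid
> >>    period = .false.    ! non-periodic in both dimensions
> >>    call MPI_Dims_create(N_proc,2,dims,ierr)
> >>    call MPI_Cart_create(MPI_COMM_WORLD,2,dims,period,.true.,comm2d,ierr)
> >>    call MPI_COMM_RANK(comm2d,Pid,ierr)
> >> ! MPI_Cart_shift returns the source/dest pair along dimension 0;
> >> ! ranks on a non-periodic edge get MPI_PROC_NULL
> >>    call MPI_Cart_shift(comm2d,0,1,nbrleft,nbrright,ierr)
> >>    sval = Pid
> >>    rval = -1.0d0
> >>    call MPI_Sendrecv(sval,1,MPI_DOUBLE_PRECISION,nbrright,0, &
> >>         &  rval,1,MPI_DOUBLE_PRECISION,nbrleft,0, &
> >>         &  comm2d,mpistatus,ierr)
> >>    write(*,*) 'pid ',Pid,' received ',rval,' from ',nbrleft
> >>    call MPI_Comm_free(comm2d,ierr)
> >>    call MPI_FINALIZE(ierr)
> >> end program shiftdemo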
> >>
> >> I can re-compile any or all of mvapich-0.9.9, mvapich1, and mvapich2
> >> easily enough using the make.mvapich.gen2 scripts.  What flags
> >> should I be using for the PGI 7.0 compiler?  I can update the
> >> compiler too if necessary.
> >>
> >>
> >> Chris Bording
> >> Application Analyst
> >> High Performance Computing Group
> >> Information Technology
> >> The College of William and Mary
> >> (757)-221-3488
> >> rcbord at wm.edu
> >>
> >> On Wed, 7 May 2008, amith rajith mamidala wrote:
> >>
> >>> Hi,
> >>>
> >>> Can you try these two options?
> >>>
> >>> 1. Use MVAPICH-1.0 and see if you still see the issue.
> >>>
> >>> 2. Run the code with the environment variable
> >>> VIADEV_USE_SHMEM_COLL=0 set for mvapich.
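> >>>
> >>> (For example, assuming you launch with mpirun_rsh, which accepts
> >>> environment variables on its command line before the executable:
> >>>
> >>>    mpirun_rsh -np 12 host1 ... host12 VIADEV_USE_SHMEM_COLL=0 ./mpicart
> >>>
> >>> The host names and binary name above are placeholders.)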
> >>>
> >>>
> >>> Thanks,
> >>> Amith
> >>>
> >>>
> >>> On Wed, 7 May 2008 rcbord at wm.edu wrote:
> >>>
> >>>> Hi,
> >>>>    I am having an issue with MVAPICH-0.9.9 compiled with PGI 7.0 against
> >>>> OFED 1.2 for InfiniBand.  We have been able to use it for six months without
> >>>> any problem. I tried to port a user's Fortran code that runs under
> >>>> Solaris/SPARC without any problems. The code is fairly vanilla Fortran 90
> >>>> with MPI; I know because I wrote it.  When I tried to port it to the Linux
> >>>> cluster, it hangs in an MPI_Sendrecv call.  I have included a simple test code
> >>>> that shows that the new communicator (comm2d) returned by the
> >>>> MPI_Cart_create function has multiple values across the ranks.  The same code
> >>>> run on the Sun SPARC cluster returns a single value.  I am guessing it could
> >>>> be how I compiled MVAPICH, but I have tried a few more flags without any
> >>>> success.  I tried this with MVAPICH2 also and got the same error, though I
> >>>> compiled it in a similar fashion.  Could it be a PGI problem?
> >>>>
> >>>>
> >>>>    Has anyone else seen this?  Thanks for any help!
> >>>>
> >>>> Program mpicart
> >>>> ! With input files
> >>>> ! Conversion of efit2d.f90 to 3d JPB 11-2007
> >>>>
> >>>>    Implicit none
> >>>>
> >>>>    include "mpif.h"
> >>>>
> >>>>    Interface
> >>>>       Subroutine read_model(Pdim0,Pdim1)
> >>>>         Integer, Intent(out) ::Pdim0,Pdim1
> >>>>       end Subroutine read_model
> >>>>    end Interface
> >>>>
> >>>>    Double Precision :: t1,t2
> >>>> !
> >>>> ! define MPI variables
> >>>> !
> >>>>    Integer :: Pid, N_proc, ierr
> >>>>    Integer :: comm2d = 0
> >>>>    Integer :: status
> >>>>    Logical :: period(2)
> >>>>    Integer :: Pdim_size(2),coords(2)
> >>>>    Integer :: nbrleft,nbrright,nbrtop,nbrbottom
> >>>>    Integer :: Pdim0, Pdim1
> >>>>    Integer :: i, t, numt
> >>>>
> >>>>    call MPI_INIT(ierr)
> >>>>    call MPI_COMM_RANK(MPI_COMM_WORLD,Pid,ierr)
> >>>>    call MPI_COMM_SIZE(MPI_COMM_WORLD,N_proc,ierr)
> >>>>
> >>>>    if (Pid .EQ. 0) then
> >>>>       call read_model(Pdim0,Pdim1)
> >>>>
> >>>>    end if
> >>>> !
> >>>> ! Broadcast input parameter values to all processors
> >>>> !
> >>>>    call MPI_Bcast(Pdim0,1,MPI_INTEGER,0, &
> >>>>         &  MPI_COMM_WORLD,ierr)
> >>>>    call MPI_Bcast(Pdim1,1,MPI_INTEGER,0, &
> >>>>         &  MPI_COMM_WORLD,ierr)
> >>>>
> >>>>    call MPI_BARRIER(MPI_COMM_WORLD,ierr)
> >>>>
> >>>>    Pdim_size(1) = Pdim0
> >>>>    Pdim_size(2) = Pdim1
> >>>>
> >>>>    period(1) = .false.   ! the Fortran binding takes LOGICAL periods
> >>>>    period(2) = .false.
> >>>>
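> >>>> ! nonzero entries in Pdim_size act as constraints; zero entries
> >>>> ! would be filled in by MPI_Dims_create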
> >>>>    call MPI_Dims_create(N_proc,2,Pdim_size,ierr)
> >>>>
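> >>>> ! build the 2-D Cartesian communicator (rank reordering allowed)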
> >>>>    call MPI_Cart_create(MPI_COMM_WORLD,2,Pdim_size,period,.true.,comm2d,ierr)
> >>>>    do i = 0,N_proc-1
> >>>>       if (Pid == i) then
> >>>>          write(*,*) 'pid ',Pid,' mpi_comm_2d ',comm2d
> >>>>       end if
> >>>>    end do
> >>>>
> >>>>    call MPI_BARRIER(MPI_COMM_WORLD,ierr)
> >>>>    call MPI_Comm_free(comm2d,ierr)
> >>>>    call MPI_FINALIZE(ierr)
> >>>>
> >>>> end program mpicart
> >>>>
> >>>> ###################### OUTPUT #########################
> >>>>
> >>>> Model parameters for mpicart test
> >>>>
> >>>>    Processor Topology is (            3  by             4 )
> >>>>
> >>>>   pid             0  mpi_comm_2d           140
> >>>>   pid            11  mpi_comm_2d           138
> >>>>   pid             2  mpi_comm_2d           140
> >>>>   pid             4  mpi_comm_2d           140
> >>>>   pid             6  mpi_comm_2d           140
> >>>>   pid             8  mpi_comm_2d           140
> >>>>   pid             3  mpi_comm_2d           138
> >>>>   pid            10  mpi_comm_2d           140
> >>>>   pid             1  mpi_comm_2d           138
> >>>>   pid             5  mpi_comm_2d           138
> >>>>   pid             7  mpi_comm_2d           138
> >>>>   pid             9  mpi_comm_2d           138
> >>>>
> >>>>
> >>>> Chris Bording
> >>>> Application Analyst
> >>>> High Performance Computing Group
> >>>> Information Technology
> >>>> The College of William and Mary
> >>>> (757)-221-3488
> >>>> rcbord at wm.edu
> >>>> _______________________________________________
> >>>> mvapich-discuss mailing list
> >>>> mvapich-discuss at cse.ohio-state.edu
> >>>> http://mail.cse.ohio-state.edu/mailman/listinfo/mvapich-discuss
> >>>>
> >>>
> >>>
> >>>
> >>
> >
> >
>


