[mvapich-discuss] fortran system calls crash mpi

Michael Harding harding at uni-mainz.de
Sun Oct 1 13:52:46 EDT 2006


hi,

I am trying to use the installed mvapich on lonestar2
(http://www.tacc.utexas.edu/services/userguides/lonestar2/) together
with our quantum chemistry program package aces2 ( http://www.aces2.de ).
On lonestar2 they run the generation 2 stack OFED (OpenFabrics
Enterprise Distribution) together with mvapich. After quite a long time I
found out that doing a system call from Fortran leads to problems with
MPI:

I tried the following small and stupid test program:

program barrtest
  USE IFPORT
  include 'mpif.h'

  integer*4 mpierr
  integer*4 mpirank
  integer*4 i

  call MPI_INIT(mpierr)
  write(*,*) "after init"
  call mpi_comm_rank(MPI_COMM_WORLD, mpirank, mpierr)
  write(*,*) "after comm_rank", mpirank
  call MPI_BARRIER(MPI_COMM_WORLD, mpierr)
  write(*,*) "after barrier", mpirank
  if (mpirank .eq. 1) then
     ! the system call on rank 1 is what triggers the crash
     i = systemqq('pwd')
  endif
  call MPI_BARRIER(MPI_COMM_WORLD, mpierr)
  write(*,*) "after barrier2", mpirank
  call mpi_finalize(mpierr)
end

This explained to me why we currently cannot run our program suite on
this machine. The same happens if I do not use IFPORT and call SYSTEM
instead of SYSTEMQQ.

So my questions are:

Is this normal for mvapich, or is it related to local (TACC, UT Austin)
modifications of the code?
I know that Scali MPI (also on an InfiniBand cluster) had no problems
running our code. If this is a general problem with mvapich, when can
one expect a fix?

Thanks for any reply! I would also appreciate any hint on how I can
work around this problem without system calls. (In particular, I need a
replacement for copy.)
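One possible replacement for the copy, avoiding system calls entirely, would be to do it in plain Fortran. A minimal sketch (the subroutine name and unit numbers are arbitrary choices of mine, and access='stream' needs a compiler with Fortran 2003 stream I/O support):

```fortran
! hypothetical sketch: copy a file byte by byte via stream I/O,
! so no SYSTEM/SYSTEMQQ (and hence no fork) is needed
subroutine copyfile(src, dst)
  implicit none
  character(len=*), intent(in) :: src, dst
  character(len=1) :: b
  integer :: ios
  open(11, file=src, access='stream', form='unformatted', &
       status='old', action='read')
  open(12, file=dst, access='stream', form='unformatted', &
       status='replace', action='write')
  do
     read(11, iostat=ios) b   ! iostat goes nonzero at end of file
     if (ios /= 0) exit
     write(12) b
  end do
  close(11)
  close(12)
end subroutine copyfile
```

This is slow for large files (one byte per record), but it keeps everything inside the process.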

I have now been trying to get our code working there for three weeks ...
(even on a completely unknown system it had never taken me more than
three days before)


michael



