[mvapich-discuss] MPI IO on Lustre
Ryan Crocker
rcrocker at uvm.edu
Thu Apr 4 13:32:03 EDT 2013
Hi all,
I've been using MPI IO on Lustre and have noticed some odd output files with MVAPICH2-1.4.1 (I know it's old, but that's what we have running). I'm running a turbulent channel, and when I look at my velocity statistics the velocities at the edges of the core subdomains are lower or higher than they should be, so I see spikes in the shear. In 2D flows I can also see some domain dependence in the outputs. When I compute the same statistics during the simulation I don't see these issues, so the problem seems to be in the output path. I'd rather not consolidate to one rank and use standard IO to write the files, so I'm wondering whether I have the MPI IO set up properly, or whether there are some caveats I'm missing.
Init:
mpiiofs = "lustre:"
call MPI_INFO_CREATE(mpi_info,ierr)
call MPI_INFO_SET(mpi_info,"romio_ds_write","disable",ierr)
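For what it's worth, the same info object can also carry Lustre striping hints, which often matter for parallel write behavior. These are standard ROMIO hint names; the values below are placeholder assumptions, not settings from the original code:

```fortran
! Sketch only: illustrative ROMIO hints for Lustre; the values are
! assumptions and should be tuned to the file system.
call MPI_INFO_SET(mpi_info,"striping_factor","8",ierr)     ! number of OSTs (assumed value)
call MPI_INFO_SET(mpi_info,"striping_unit","1048576",ierr) ! 1 MiB stripe size (assumed value)
call MPI_INFO_SET(mpi_info,"romio_cb_write","enable",ierr) ! collective buffering for writes
```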
Output:
call MPI_FILE_OPEN(comm,file,IOR(MPI_MODE_WRONLY,MPI_MODE_CREATE),mpi_info,iunit,ierr)
! Write header (only root)
if (irank.eq.iroot) then
   buffer = trim(adjustl(name))
   size = 80
   call MPI_FILE_WRITE(iunit,buffer,size,MPI_CHARACTER,status,ierr)
   buffer = 'part'
   size = 80
   call MPI_FILE_WRITE(iunit,buffer,size,MPI_CHARACTER,status,ierr)
   ibuffer = 1
   size = 1
   call MPI_FILE_WRITE(iunit,ibuffer,size,MPI_INTEGER,status,ierr)
   buffer = 'hexa8'
   size = 80
   call MPI_FILE_WRITE(iunit,buffer,size,MPI_CHARACTER,status,ierr)
end if
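As a side note, an alternative (not the original code) is to write the header with explicit offsets, which makes the byte layout that disp assumes below easier to verify against the file. The names hbuf, ipart, and off are mine:

```fortran
! Sketch only: equivalent header write with explicit offsets.
! hbuf/ipart/off are hypothetical names, not from the original post.
character(len=80) :: hbuf
integer :: ipart
integer(kind=MPI_OFFSET_KIND) :: off
if (irank.eq.iroot) then
   hbuf = trim(adjustl(name))
   off = 0
   call MPI_FILE_WRITE_AT(iunit,off,hbuf,80,MPI_CHARACTER,status,ierr)
   hbuf = 'part'
   off = 80
   call MPI_FILE_WRITE_AT(iunit,off,hbuf,80,MPI_CHARACTER,status,ierr)
   ipart = 1
   off = 160
   call MPI_FILE_WRITE_AT(iunit,off,ipart,1,MPI_INTEGER,status,ierr)
   hbuf = 'hexa8'
   off = 164
   call MPI_FILE_WRITE_AT(iunit,off,hbuf,80,MPI_CHARACTER,status,ierr)
end if
```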
! Write the data: one velocity component at a time.
! Header = 3 x 80-character records + one 4-byte integer = 244 bytes;
! ncells_hexa is the global cell count, ncells_hexa_ the local one.
disp = 3*80+4+0*ncells_hexa*4
call MPI_FILE_SET_VIEW(iunit,disp,MPI_REAL_SP,fileview_hexa,"native",mpi_info,ierr)
call MPI_FILE_WRITE_ALL(iunit,buffer3_hexa(:,1),ncells_hexa_,MPI_REAL_SP,status,ierr)
disp = 3*80+4+1*ncells_hexa*4
call MPI_FILE_SET_VIEW(iunit,disp,MPI_REAL_SP,fileview_hexa,"native",mpi_info,ierr)
call MPI_FILE_WRITE_ALL(iunit,buffer3_hexa(:,2),ncells_hexa_,MPI_REAL_SP,status,ierr)
disp = 3*80+4+2*ncells_hexa*4
call MPI_FILE_SET_VIEW(iunit,disp,MPI_REAL_SP,fileview_hexa,"native",mpi_info,ierr)
call MPI_FILE_WRITE_ALL(iunit,buffer3_hexa(:,3),ncells_hexa_,MPI_REAL_SP,status,ierr)
call MPI_FILE_CLOSE(iunit,ierr)
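One caveat that isn't visible from the snippet, so this is an assumption about the declarations: in Fortran, the disp argument of MPI_FILE_SET_VIEW must be INTEGER(KIND=MPI_OFFSET_KIND). If disp is a default 4-byte integer, the displacement handed to the library can be garbage, and misplaced data at subdomain boundaries is a typical symptom:

```fortran
! Sketch: required kind for the displacement (assumed declaration,
! not shown in the original post).
use mpi
integer(kind=MPI_OFFSET_KIND) :: disp
! Keep the arithmetic in the wide kind so it cannot overflow either:
disp = 3*80 + 4 + int(2,MPI_OFFSET_KIND)*ncells_hexa*4
```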
Thanks,
Ryan Crocker
University of Vermont, School of Engineering
Mechanical Engineering Department