[mvapich-discuss] Sharing memory among processes?

Karl W. Schulz karl at tacc.utexas.edu
Tue May 29 08:24:51 EDT 2007


Hello Tahir,

Just in case you are not aware, I wanted to add that you can also use
OpenMP on-node, which is also a standard - it uses threads directly but
has user-friendly semantics that take care of locking and private
variable scope for you.  It is quite portable (any decent compiler
supports OpenMP directives), and with the onslaught of multi-core we
are seeing many applications returning to a hybrid approach that mixes
OpenMP and MPI.  That is, OpenMP is used on a per-socket or per-node
basis, and MPI is used for off-node communication.

Cheers,

Karl 

> -----Original Message-----
> From: mvapich-discuss-bounces at cse.ohio-state.edu
> [mailto:mvapich-discuss-bounces at cse.ohio-state.edu]
> On Behalf Of Sylvain Jeaugey
> Sent: Tuesday, May 29, 2007 2:44 AM
> To: Tahir Malas
> Cc: mvapich-discuss at cse.ohio-state.edu
> Subject: Re: [mvapich-discuss] Sharing memory among processes?
> 
> Hi Tahir,
> 
> This is a complex and classical parallelism issue. There are many ways
> to do it, but none seems easy to me (the easy ones often have serious
> drawbacks).
> 
> The first one is a SysV shared memory segment; see the man pages of
> shmget/shmat. The issue is that the default maximum segment size is
> quite small, and you will need to raise it if you are working on a big
> array.
> 
> The second one is to use threads: allocate the array in the main
> thread, then create a couple of other threads which will share the
> array.
> 
> But both methods are bad solutions in my opinion, because your program
> won't extend to multiple machines (what if you want to run on 16
> cores?), and if you are also using MPI the two may conflict. In both
> cases you will also need to ensure proper locking.
> 
> So, the best solution in my opinion, if you want to write a
> multi-platform, multi-architecture, scalable program, is to use MPI-2
> one-sided operations. Create the array on one process, then open a
> window to let the other processes read/write that memory area. Better
> yet, split the array into N parts (N being the number of processes)
> and have each process share its own part. This is a lot better, since
> all processes then talk to each other in a regular scheme instead of
> all of them talking to a single process, which creates a bottleneck
> (the same applies if you are running on multiple machines). MPI-2
> one-sided operations also provide the appropriate locking functions.
> For good performance, try to MPI_Put/MPI_Get big chunks before
> computing on them.
> 
> There has been a long debate about whether MPI-2 one-sided operations
> are suited to intra-node exchanges. They may not be, because they
> require local copies of the data (instead of working in place), which
> makes you do extra memory copies. Doing MPI + threads may be better to
> avoid this penalty, but thread-safe MPI libraries are rare and not
> always very efficient (locking hurts!).
> 
> Hope this helps,
> 
> Sylvain
> 
> On Tue, 29 May 2007, Tahir Malas wrote:
> 
> > Hi all,
> > Is there an easy way of sharing information (e.g., a 2-D array)
> > among the processes in a node? All 8 of our processes on a dual
> > quad-core system hold the same array, and we want processes on the
> > same node to simply share this array. We use Intel Fortran
> > compilers.
> > Thanks in advance,
> > Tahir Malas
> > Bilkent University
> > Electrical and Electronics Engineering Department
> > Phone: +90 312 290 1385
> >
> >
> >
> > _______________________________________________
> > mvapich-discuss mailing list
> > mvapich-discuss at cse.ohio-state.edu
> > http://mail.cse.ohio-state.edu/mailman/listinfo/mvapich-discuss
> >
