[mvapich-discuss] MVAPICH causes segmentation fault

Subramoni, Hari subramoni.1 at osu.edu
Tue Dec 3 07:13:39 EST 2019


Dear Augustin,

Thanks for bringing this up again. We will try to fix this for the coming release.

Best,
Hari.

From: AUGUSTIN DEGOMME <augustin.degomme at univ-grenoble-alpes.fr>
Sent: Tuesday, December 3, 2019 6:14 AM
To: Subramoni, Hari <subramoni.1 at osu.edu>
Cc: Luo, Ye <yeluo at anl.gov>; mvapich-discuss at cse.ohio-state.edu <mvapich-discuss at mailman.cse.ohio-state.edu>
Subject: Re: [mvapich-discuss] MVAPICH causes segmentation fault

Hi,

Just to say that we would be very interested in a fix for this on the MVAPICH side. We use aligned_alloc, and for our Docker builds we need a workaround that falls back to posix_memalign via a specific flag. This should not be necessary, nor should compiling against MVAPICH crash for users. We hope this can be addressed soon.

Best regards,
Augustin
________________________________
From: "Hari Subramoni" <subramoni.1 at osu.edu>
To: "Luo, Ye" <yeluo at anl.gov>, "mvapich-discuss at cse.ohio-state.edu" <mvapich-discuss at mailman.cse.ohio-state.edu>
Sent: Tuesday, July 9, 2019 04:16:50
Subject: Re: [mvapich-discuss] MVAPICH causes segmentation fault

Dear Ye,

Thank you for bringing this to our attention. We appreciate it. So far, we had not received any reports of users wanting to use “aligned_alloc”. We will look into how to handle it in our code and get back to you.

Some history about this feature is given below.

As you may know, barring a few exceptions, any buffer that an InfiniBand HCA acts upon must be registered with it ahead of time. Since InfiniBand registration is very expensive, we cache these registrations so that if the same buffer is reused for communication it is already registered, speeding up the application. This caching is the reason MVAPICH2 (and several other MPI libraries such as Open MPI — please refer to https://www.open-mpi.org/faq/?category=openfabrics#large-message-leave-pinned and https://www.open-mpi.org/papers/euro-pvmmpi-2006-hpc-protocols/euro-pvmmpi-2006-hpc-protocols.pdf) intercepts the malloc and free routines: to keep the cached registrations correct, the MPI library needs to know when registered memory is freed.

Whether disabling the registration cache will have a negative effect on application performance depends entirely on the communication pattern of the application. If the application mostly uses small to medium sized messages (approximately less than 16 KB), then disabling the registration cache will have little to no impact on its performance.

The following section of the userguide has more information about the impact of disabling memory registration cache on application performance.

http://mvapich.cse.ohio-state.edu/static/media/mvapich/mvapich2-2.3.1-userguide.html#x1-1340009.1.3

The registration cache can be disabled at runtime by setting “MV2_USE_LAZY_MEM_UNREGISTER=0”. The following section of the userguide has more information about this parameter.
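For example, the variable can be set per-run on the launch command line, with no rebuild of MVAPICH2 required (the host names and application below are placeholders):

```shell
# Disable lazy memory unregistration (the registration cache) for one run.
# "host1", "host2", and "./app" are placeholders for your own setup.
MV2_USE_LAZY_MEM_UNREGISTER=0 mpirun_rsh -np 2 host1 host2 ./app
```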

http://mvapich.cse.ohio-state.edu/static/media/mvapich/mvapich2-2.3.1-userguide.html#x1-26100011.81

To disable this at configuration time, the “--disable-registration-cache” parameter can be used. The following section of the userguide has more information about this parameter.
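A configure invocation along these lines would build MVAPICH2 with the registration cache compiled out entirely (the --prefix path is a placeholder; add your usual configure options):

```shell
# Build MVAPICH2 without the registration cache.
# /opt/mvapich2 is a placeholder install prefix.
./configure --prefix=/opt/mvapich2 --disable-registration-cache
make && make install
```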

Best,
Hari.

From: mvapich-discuss-bounces at cse.ohio-state.edu On Behalf Of Luo, Ye
Sent: Monday, July 8, 2019 8:40 PM
To: mvapich-discuss at cse.ohio-state.edu <mvapich-discuss at mailman.cse.ohio-state.edu>
Subject: [mvapich-discuss] MVAPICH causes segmentation fault


Hi all,

I recently investigated an issue on Cooley at ALCF about MVAPICH.

I wrote my analysis at
https://github.com/QMCPACK/qmcpack/issues/1703

There may be a historical reason why MVAPICH ships customized memory routines that are not compatible with the ones the OS provides.

Since they are now causing problems, a fix will be needed.

Please have a look. Thank you!

Ye
===================
Ye Luo, Ph.D.
Computational Science Division & Leadership Computing Facility
Argonne National Laboratory

_______________________________________________
mvapich-discuss mailing list
mvapich-discuss at cse.ohio-state.edu
http://mailman.cse.ohio-state.edu/mailman/listinfo/mvapich-discuss

