[mvapich-discuss] Segfault when building HDF5 with MVAPICH2 2.1rc1 on SLES11 SP3

Thompson, Matt (GSFC-610.1)[SCIENCE SYSTEMS AND APPLICATIONS INC] matthew.thompson at nasa.gov
Tue Feb 10 14:35:09 EST 2015


All right, here we go. A Linux guru far better than I (Aaron Knister of 
NCCS) investigated this and found:

> I've done a bunch more digging and here's what's happening:
>
> 1. symbol relocation processing occurs for libc
> 2. references to malloc() within libc get bound to mvapich2's malloc
> from libmpi
> 3. ...stuff happens...
> 4. getpwuid is called, which is bound to getpwuid in glibc
> 5. getpwuid does a dlopen of libnss_sss
> 6. getpwuid calls _nss_sss_getpwuid_r from libnss_sss
> 7. Deep within _nss_sss_getpwuid_r asprintf is called
> 8. asprintf is bound to libc and internally calls malloc, which from
> step 2 is bound to mvapich2's malloc
>
> Note that what we have here is a pointer to memory that has been
> allocated by mvapich2's malloc implementation
>
> 9. libnss_sss then calls free() to free the memory allocated by
> mvapich2's libmpi
> 10. the dynamic linker then binds free() to glibc
> 11. segfault occurs because we tried to free() memory using a
> different memory management library than the one that was used to
> allocate it
>
> In step 10, the library search order used by the dynamic linker/loader
> is different than in all previous symbol relocation processing.
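
To make steps 8-11 concrete, here is a toy illustration of a cross-allocator
free (a stand-in private arena, not MVAPICH2's actual allocator):

    /* toy_free_mismatch.c -- build with: cc toy_free_mismatch.c */
    #include <stdlib.h>
    #include <string.h>

    /* Stand-in for mvapich2's interposed malloc: it hands out memory
     * from a private arena that glibc's free() knows nothing about. */
    static char arena[4096];
    static size_t offset;

    static void *mv2_malloc(size_t n)
    {
        void *p = arena + offset;
        offset += n;
        return p;
    }

    int main(void)
    {
        char *s = mv2_malloc(16);  /* step 8: malloc bound to libmpi */
        strcpy(s, "hello");
        free(s);                   /* steps 9-10: glibc's free() gets a
                                    * pointer it never allocated */
        return 0;                  /* not reached: glibc aborts or crashes */
    }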

Then he discovered:

> Ah! I just found this:
>
> If you set the environment variable RTLD_DEEPBIND to 0 then the
> segfault disappears. I'm trying to better understand this, but it works
> by altering the behaviour of glibc when loading NSS modules. The
> symbol resolution order seems to change with it set (for the better
> in this case).
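
For reference, RTLD_DEEPBIND is also the name of a dlopen() flag in glibc;
it is what SuSE's patched glibc passes when it loads NSS modules. A minimal
sketch of the two lookup modes (the library name is just for illustration):

    /* deepbind_demo.c -- build with: cc deepbind_demo.c -ldl */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        /* Default lookup: undefined symbols in the loaded library
         * resolve against the global scope first, so an interposed
         * malloc/free (e.g. from an already-loaded libmpi) wins.   */
        void *h1 = dlopen("libnss_sss.so.2", RTLD_LAZY);

        /* RTLD_DEEPBIND: the library's own dependency chain (glibc's
         * malloc/free) is searched ahead of the global scope -- the
         * behaviour SuSE's patch enables for NSS modules.          */
        void *h2 = dlopen("libnss_sss.so.2", RTLD_LAZY | RTLD_DEEPBIND);

        if (!h1 || !h2)
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
        if (h1) dlclose(h1);
        if (h2) dlclose(h2);
        return 0;
    }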

And his final word:

> I think I've figured out who is in the "wrong" here. This is a result of
> a patch SuSE applied to glibc that dlopens NSS libraries with the
> RTLD_DEEPBIND flag, which puts the symbols inside the library ahead of
> the global scope. This produces the behaviour we saw earlier, where the
> free() called by the dlopen'd libnss_sss was resolved to the glibc
> required by libnss_sss instead of to the global scope, which would have
> resolved and bound free() to mvapich2's libmpi.
>
> The environment variable I found works by disabling this behaviour and
> reverting to the glibc default, in which symbols in dlopen()'d libraries
> aren't put ahead of the global scope.
>
> SuSE has a legitimate reason for their patch, but the workaround for this
> issue is to set the RTLD_DEEPBIND environment variable to "0", which
> resolves the segfaults.

So, apparently there is a way around the build failure.

I suppose the next question is whether this is just a build-time setting or 
something that would need to be enabled permanently.
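
One way to probe that empirically would be a minimal reproducer, modeled on
the backtrace quoted below rather than on HDF5's actual code: a program that
does nothing but call getpwuid(), built with mpicc so that libmpi's malloc
is interposed (step 2 above), then run with and without the variable set:

    /* getpwuid_repro.c -- build with: mpicc getpwuid_repro.c -o repro
     * then compare:   ./repro      vs.   RTLD_DEEPBIND=0 ./repro     */
    #include <pwd.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* On a system using sssd this lookup goes through libnss_sss,
         * which is where the free() mismatch occurred in the trace. */
        struct passwd *pw = getpwuid(getuid());
        if (pw)
            printf("user: %s\n", pw->pw_name);
        return 0;
    }

If the plain run dies and the RTLD_DEEPBIND=0 run survives, that would
suggest the variable is needed at run time by anything linked against this
MVAPICH2 build, not just while compiling HDF5.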

Matt

On 02/06/2015 09:50 AM, Jonathan Perkins wrote:
> Hi Matt.  I'm not very sure what is going on but as a first step can you
> share the output of `mpiname -a' from your MVAPICH2 build?  It may also
> help to add the -show option to the Makefile command that builds the
> executable that is failing with the segmentation fault and send that
> output as well.
>
> On Fri, Feb 06, 2015 at 08:53:04AM -0500, Thompson, Matt (GSFC-610.1)[SCIENCE SYSTEMS AND APPLICATIONS INC] wrote:
>> MVAPICH Discuss,
>>
>> I have an issue that is weirdly specific. When I try to build either
>> HDF5-1.8.12 or the latest stable HDF5-1.8.14 (with --enable-parallel) with
>> MVAPICH2 2.1rc1 on SLES 11 SP3, the HDF5 build fails with a segfault:
>>
>>> libtool: link: mpicc -std=c99 -O3 -fPIC -o H5make_libsettings H5make_libsettings.o  -L/discover/swdev/USER/Baselibs/TmpBaselibs/GMAO-Baselibs-4_0_6-FixHDF5/x86_64-unknown-linux-gnu/ifort/Linux/lib /discover/swdev/USER/Baselibs/TmpBaselibs/GMAO-Baselibs-4_0_6-FixHDF5/x86_64-unknown-linux-gnu/ifort/Linux/lib/libsz.a -lz -ldl -lm
>>> LD_LIBRARY_PATH="$LD_LIBRARY_PATH`echo -lm |                  \
>>> 		sed -e 's/-L/:/g' -e 's/ //g'`"                               \
>>> 	 ./H5make_libsettings > H5lib_settings.c  ||                               \
>>> 	    (test $HDF5_Make_Ignore && echo "*** Error ignored") ||          \
>>> 	    (rm -f H5lib_settings.c ; exit 1)
>>> /bin/sh: line 4:  1838 Segmentation fault      (core dumped) LD_LIBRARY_PATH="$LD_LIBRARY_PATH`echo -lm |                  	sed -e 's/-L/:/g' -e 's/ //g'`" ./H5make_libsettings > H5lib_settings.c
>>
>> I can be fairly confident in that specificity because I've tried the
>> following combinations (all with Intel 15.0.0.090):
>>
>>    MVAPICH2 2.1rc1     on SLES 11 SP1: Works
>>    MVAPICH2 2.1rc1     on SLES 11 SP3: FAIL
>>    Intel MPI 5.0.1.135 on SLES 11 SP1: Works
>>    Intel MPI 5.0.1.135 on SLES 11 SP3: Works
>>    MPT 2.11            on SLES 11 SP3: Works
>>
>> I've also tried without --enable-parallel:
>>
>>    No Parallel HDF5    on SLES 11 SP3: Works
>>
>> though in that case the C compiler would be gcc, not icc (since it's not
>> calling mpicc, which points to icc).
>>
>> Other than that, everything else is the same in each environment.
>>
>> I also tried compiling with -O0 -g -traceback and got the same failure.
>> Looking at the core in gdb:
>>
>>> (gdb) backtrace
>>> #0  0x00002aaaabfe0802 in _int_free () from /lib64/libc.so.6
>>> #1  0x00002aaaabfe3b5c in free () from /lib64/libc.so.6
>>> #2  0x00002aaaaf70c35d in ?? () from /lib64/libnss_sss.so.2
>>> #3  0x00002aaaaf70c6f0 in ?? () from /lib64/libnss_sss.so.2
>>> #4  0x00002aaaaf70a275 in _nss_sss_getpwuid_r () from /lib64/libnss_sss.so.2
>>> #5  0x00002aaaac00fb2c in getpwuid_r@@GLIBC_2.2.5 () from /lib64/libc.so.6
>>> #6  0x00002aaaac00f37f in getpwuid () from /lib64/libc.so.6
>>> #7  0x0000000000401993 in print_header () at H5make_libsettings.c:185
>>> #8  0x0000000000401d3a in main () at H5make_libsettings.c:290
>>
>> From this testing it seems like it isn't the compiler, it isn't *just* the
>> operating system, and it isn't *just* the MPI stack, but rather the
>> combination of MVAPICH2 2.1rc1 and SLES 11 SP3. This cropped up because
>> part of the supercomputer I work on has transitioned to SLES 11 SP3, and it
>> surfaced while I was rebuilding some libraries to diagnose other issues.
>>
>> Now, I have asked better computer engineers than I to try to figure this
>> out as well, but I was wondering if anyone here might know why one
>> combination would fail while the others succeed? That is, has anyone seen
>> something similar?
>>
>> Matt
>> --
>> Matt Thompson          SSAI, Sr Software Test Engr
>> NASA GSFC, Global Modeling and Assimilation Office
>> Code 610.1, 8800 Greenbelt Rd, Greenbelt, MD 20771
>> Phone: 301-614-6712              Fax: 301-614-6246
>> _______________________________________________
>> mvapich-discuss mailing list
>> mvapich-discuss at cse.ohio-state.edu
>> http://mailman.cse.ohio-state.edu/mailman/listinfo/mvapich-discuss
>


-- 
Matt Thompson          SSAI, Sr Software Test Engr
NASA GSFC, Global Modeling and Assimilation Office
Code 610.1, 8800 Greenbelt Rd, Greenbelt, MD 20771
Phone: 301-614-6712              Fax: 301-614-6246

