[mvapich-discuss] Failing to compile nemesis and sock channel
Subramoni, Hari
subramoni.1 at osu.edu
Mon Sep 2 09:44:45 EDT 2019
Dear Georg,
Thanks for the report. We appreciate it. Since MVAPICH2 is a derivative of MPICH, I will see how best to incorporate the suggestion regarding pthread (i.e., either push it directly upstream to MPICH or apply it locally).
We do run a continuous integration system similar to Jenkins internally. Most of our users use the ch3:mrail or ch3:psm channels, so most of our effort is spent there. We will try to ensure that the ch3:sock channel gets more testing, since that is the one you are more interested in.
Best,
Hari.
-----Original Message-----
From: Georg Geiser <Georg.Geiser at dlr.de>
Sent: Monday, September 2, 2019 4:18 AM
To: Subramoni, Hari <subramoni.1 at osu.edu>
Cc: mvapich-discuss at cse.ohio-state.edu <mvapich-discuss at mailman.cse.ohio-state.edu>
Subject: Re: [mvapich-discuss] Failing to compile nemesis and sock channel
Hi Hari,
Thank you for fixing this issue. Now the sock channel compiles.
However, when adding "--enable-threads=funneled" I also have to add "LDFLAGS=-lpthread"; otherwise the linker fails. This should be detected and set by configure automatically. Note that for the GNU compiler it is also recommended to pass -pthread (without an l!) at the compile step as well (cf.
https://stackoverflow.com/questions/23250863/difference-between-pthread-and-lpthread-while-compiling).
If you are not already doing so, you should consider using a continuous integration system (e.g. Jenkins) to check all valid configuration combinations for MVAPICH.
Kind regards,
Georg
On 30.08.19 at 17:48, Subramoni, Hari wrote:
> Hi, Georg.
>
> Thanks for reporting the issue. We have fixed the issue and taken the code into MVAPICH2 with an acknowledgement to you.
>
> The ch3:nemesis channel is not officially supported anymore. Could you please apply the following patch and see whether you are able to build the ch3:sock channel?
>
> We are looking at the other issues you have reported and will get back on the corresponding thread soon.
>
> Fix build issues with ch3:sock
> - Thanks to Georg Geiser <Georg.Geiser at dlr.de> for reporting
> the issue
>
> diff --git a/src/include/coll_shmem.h b/src/include/coll_shmem.h
> index 7cf2503..39ed439 100644
> --- a/src/include/coll_shmem.h
> +++ b/src/include/coll_shmem.h
> @@ -536,6 +536,7 @@ extern int MPIDI_CH3I_SHMEM_Helper_fn(MPIDI_PG_t * pg, int local_id, char **file
>                                        char *prefix, int *fd, size_t file_size);
>  #endif /* defined(CHANNEL_MRAIL_GEN2) || defined(CHANNEL_NEMESIS_IB) */
>  
> +#if defined(CHANNEL_MRAIL_GEN2) || defined(CHANNEL_PSM)
>  static inline int Cyclic_Rank_list_mapper(MPID_Comm * comm_ptr, int idx)
>  {
>      return comm_ptr->dev.ch.rank_list[idx];
> @@ -545,6 +546,7 @@
>  static inline int Bunch_Rank_list_mapper(MPID_Comm * comm_ptr, int idx)
>  {
>      return idx;
>  };
> +#endif /* defined(CHANNEL_MRAIL_GEN2) || defined(CHANNEL_PSM) */
>  
>  MPIR_T_PVAR_ULONG2_COUNTER_DECL_EXTERN(MV2, mv2_num_shmem_coll_calls);
>
> diff --git a/src/mpi/init/init.c b/src/mpi/init/init.c
> index 5694430..e8134cf 100644
> --- a/src/mpi/init/init.c
> +++ b/src/mpi/init/init.c
> @@ -213,6 +213,7 @@ int MPI_Init( int *argc, char ***argv )
>          }
>      }
>  
> +#if defined(CHANNEL_MRAIL_GEN2) || defined(CHANNEL_PSM)
>      /* initialize the two level communicator for MPI_COMM_WORLD */
>      if (mv2_use_osu_collectives &&
>          mv2_enable_shmem_collectives) {
> @@ -237,6 +238,7 @@ int MPI_Init( int *argc, char ***argv )
>              }
>          }
>      }
> +#endif /* defined(CHANNEL_MRAIL_GEN2) || defined(CHANNEL_PSM) */
>  
>      /* ... end of body of routine ... */
>      MPID_MPI_INIT_FUNC_EXIT(MPID_STATE_MPI_INIT);
> diff --git a/src/mpid/ch3/src/ch3u_handle_send_req.c b/src/mpid/ch3/src/ch3u_handle_send_req.c
> index 878dc54..e43af56 100644
> --- a/src/mpid/ch3/src/ch3u_handle_send_req.c
> +++ b/src/mpid/ch3/src/ch3u_handle_send_req.c
> @@ -30,9 +30,11 @@ int MPIDI_CH3U_Handle_send_req(MPIDI_VC_t * vc, MPID_Request * sreq, int *comple
>  
>      MPIDI_FUNC_ENTER(MPID_STATE_MPIDI_CH3U_HANDLE_SEND_REQ);
>  
> +#if defined(CHANNEL_MRAIL)
>      PRINT_DEBUG(DEBUG_SHM_verbose>1,
>                  "vc: %p, rank: %d, sreq: %p, type: %d, onDataAvail: %p\n",
>                  vc, vc->pg_rank, sreq, MPIDI_Request_get_type(sreq),
>                  sreq->dev.OnDataAvail);
> +#endif /* defined(CHANNEL_MRAIL) */
>  
>      /* Use the associated function rather than switching on the old ca field */
>      /* Routines can call the attached function directly */
>
> Best,
> Hari.
>
> -----Original Message-----
> From: mvapich-discuss-bounces at cse.ohio-state.edu <mvapich-discuss-bounces at mailman.cse.ohio-state.edu> On Behalf Of Georg Geiser
> Sent: Friday, August 30, 2019 10:59 AM
> To: mvapich-discuss at cse.ohio-state.edu <mvapich-discuss at mailman.cse.ohio-state.edu>
> Subject: [mvapich-discuss] Failing to compile nemesis and sock channel
>
> Some channels of MVAPICH2 2.3.2 fail to compile on my system. The compiler error messages always look like this:
>
> CC src/mpi/coll/lib_libmpi_la-allreduce.lo
> In file included from src/mpi/coll/allreduce.c:23:
> ./src/include/coll_shmem.h: In function ‘Cyclic_Rank_list_mapper’:
> ./src/include/coll_shmem.h:541:28: error: ‘MPIDI_CH3I_CH_comm_t’ {aka ‘struct <anonymous>’} has no member named ‘rank_list’
> return comm_ptr->dev.ch.rank_list[idx];
>
> Failing builds include ch3:nemesis and ch3:sock, while ch3:mrail builds successfully. I could not check ch3:psm due to missing headers.
>
> I use a Debian Buster system with GCC 8.3.0. I specified no additional configure flags.
>
>
> Georg
>
> _______________________________________________
> mvapich-discuss mailing list
> mvapich-discuss at cse.ohio-state.edu
> http://mailman.cse.ohio-state.edu/mailman/listinfo/mvapich-discuss
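For reference, the channel selection described in the quoted report corresponds to configure invocations along these lines (a sketch only: the --with-device values follow the standard MPICH/MVAPICH2 convention, and the gen2 backend flag for ch3:mrail is an assumption, not taken from the report):

```shell
# Channels that failed to compile in the report above:
./configure --with-device=ch3:sock
./configure --with-device=ch3:nemesis   # no longer officially supported

# Channel that built successfully:
./configure --with-device=ch3:mrail --with-rdma=gen2

make -j"$(nproc)"
```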
--
Dr.-Ing. Georg Geiser
Deutsches Zentrum für Luft- und Raumfahrt e.V. (DLR)
Institut für Antriebstechnik | Numerische Methoden
Linder Höhe | 51147 Köln
Telefon +49 2203 601-3718 | E-Mail Georg.Geiser at dlr.de