[mvapich-discuss] (no subject)

Chaitra Kumar chaitragkumar at gmail.com
Sun Sep 6 22:01:47 EDT 2015


Hi Mingzhe,

Thanks for your reply.

I tried running the application with MPICH_CTXID_EAGER_SIZE=1 and
MV2_MAX_NUM_WIN=500, but the error still persists.

The current requirement is to create 449 windows.

Regards,
Chaitra

On Sun, Sep 6, 2015 at 9:39 PM, Mingzhe Li <li.2192 at osu.edu> wrote:

> Hi Chaitra,
>
> Thanks for your note. Could you please try setting the runtime parameter MPICH_CTXID_EAGER_SIZE
> to 1 to see if it helps? What will be the total number of windows you need
> for your program?
>
> Regards,
> Mingzhe
>
> On Sun, Sep 6, 2015 at 7:39 AM, Chaitra Kumar <chaitragkumar at gmail.com>
> wrote:
>
>> Hi Team,
>>
>> We have two applications with a similar requirement to increase the number
>> of RMA windows.  Both applications fail with the same error.
>>
>> Is there any upper limit on the number of RMA windows which can be created
>> by an MPI program?
>> Is there any other way to increase the number of RMA windows?
>>
>> Any help in increasing the number of RMA windows is greatly appreciated.
>>
>> Regards,
>> Chaitra
>>
>>
>>
>> On Fri, Sep 4, 2015 at 12:11 PM, Chaitra Kumar <chaitragkumar at gmail.com>
>> wrote:
>>
>> > Hi Team,
>> >
>> > We have a requirement to create more than 500 RMA windows.  We are using
>> > the variable MV2_MAX_NUM_WIN to increase the number of RMA windows
>> > during MPI program launch, but we still get the error even after setting
>> > the variable.
>> >
>> > Is there any upper limit on the number of RMA windows which can be
>> created
>> > by an MPI program?
>> > Is there any other way to increase the number of RMA windows?
>> >
>> > The error we are getting is:
>> >
>> > [cli_383]: aborting job:
>> >
>> > Fatal error in MPI_Win_create:
>> >
>> > Other MPI error, error stack:
>> >
>> > MPI_Win_create(189)..................: MPI_Win_create(base=0x7f8095c3a010,
>> > size=1073741824, disp_unit=1, info=0x9c000000, comm=0xc4001344,
>> > win=0x2611a38) failed
>> >
>> > MPID_Win_create(95)..................:
>> >
>> > win_init(281)........................:
>> >
>> > MPIR_Comm_dup_impl(71)...............:
>> >
>> > MPIR_Comm_copy(1651).................:
>> >
>> > MPIR_Get_contextid(878)..............:
>> >
>> > MPIR_Get_contextid_sparse_group(1242):  Cannot allocate context ID
>> > because of fragmentation (279/2048 free on this process; ignore_id=0)
>> >
>> >
>> >
>> > [cli_434]: aborting job:
>> >
>> > Fatal error in MPI_Win_create:
>> >
>> > Other MPI error, error stack:
>> >
>> > MPI_Win_create(189)..................: MPI_Win_create(base=0x7fd6c9c87010,
>> > size=1073741824, disp_unit=1, info=0x9c000000, comm=0xc400111d,
>> > win=0x24caa38) failed
>> >
>> > MPID_Win_create(95)..................:
>> >
>> > win_init(281)........................:
>> >
>> > MPIR_Comm_dup_impl(71)...............:
>> >
>> > MPIR_Comm_copy(1651).................:
>> >
>> > MPIR_Get_contextid(878)..............:
>> >
>> > MPIR_Get_contextid_sparse_group(1242):  Cannot allocate context ID
>> > because of fragmentation (344/2048 free on this process; ignore_id=0)
>> >
>> >
>> > Thanks for your help.
>> >
>> >
>> >
>> > Regards,
>> >
>> > Chaitra
>> >
>> >
>>
>> _______________________________________________
>> mvapich-discuss mailing list
>> mvapich-discuss at cse.ohio-state.edu
>> http://mailman.cse.ohio-state.edu/mailman/listinfo/mvapich-discuss
>>
>>
>
>

